Non-primitive non-linear data structures: Trees
Speaking of Christmas trees, let's shake one or two before the carols, like our fathers and their fathers before them. Literally. Now, when it comes to data storage with linear structures, be it linked lists and arrays or even stacks and queues, time complexity grows with the size of our data. That is, the larger the data being stored, the more time it takes to find or manipulate it. 'Ohh well, great! You've shown me how to store data only to give me the cons afterward?! What on earth!' Hold on for one moment, will you! That's why this piece is here: the non-primitive non-linear data structures.

A little theory before we get into the why. As with family lines, from yourself to grandpa to great-grandpa, so it is with data structures. We'll start off with the parent: the root. It's from here that everything emanates, just like the / directory at the top of a Linux directory structure. Our tree root has two 'children', A and B, which engineers call nodes. From here, B has its own pair of children, C and D, of whom C has a single child node, E. We might as well call C a parent at this point. The little lines marking the links between parent and child are named edges. The bottom descendant in a path, such as E, is called a leaf node or external node: it's the last node in that path, with no children of its own. The same applies to A and D.

Depth of a node: the number of edges from the root to the node. In our case, E has a depth of 3.

Height of a node: the number of edges on the longest path from the node down to a leaf. B, for instance, has a height of 2, since there are two edges between it and the furthest descendant in its subtree (E).

Height of a tree: the height of the tree's root node.

Degree of a node: the number of children of a node. Above, C has a degree of one while B has a degree of two.

A practical train of thought: we have data to be queried from a REST API endpoint. Our system has users, profiles, and articles. Thinking logically, a user's profile is a child node of the user's endpoint. We will have to get a user to log in, get the token, and query the profile based on the token. So in your thought process — software development being more thinking than writing code — it would make sense to create a User model before a Profile. Our User model is the parent, the root. Compare this to modelling the same relationship with a linear data structure.

We could go on to break trees into more individual types: a) General Tree b) Binary Tree c) Binary Search Tree d) AVL Tree

Binary Tree: a tree, as displayed above, in which each parent has at most two children. Because of this, the children are referred to as the left and right child. Think of how we might use this in decision trees, where we perform an action based on a condition. To elaborate further, we have the groups below.

Full binary tree: every node of the tree has either two children or zero.

Perfect binary tree: every internal node has two children, and all leaves have the same depth. A perfect uniform.

Balanced tree: if the heights of the left and right subtrees at every node differ by at most 1, the tree is called balanced. So in the case of the perfect binary tree, if either of the last child nodes gained one more child — one and only one — it would still be a balanced tree. It's a strange way of calling something balanced, I know!
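Before moving on to the remaining types, here is a minimal Python sketch (my own illustration; the original piece ships no code) that builds the exact tree described above and computes depth, height, and degree:

```python
# Build the example tree: root -> A, B; B -> C, D; C -> E.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    @property
    def degree(self):
        # Degree: the number of children of a node.
        return len(self.children)

    def height(self):
        # Height: edges on the longest downward path to a leaf.
        if not self.children:
            return 0
        return 1 + max(child.height() for child in self.children)


def depth(root, target, d=0):
    # Depth: edges from the root down to the target node.
    if root is target:
        return d
    for child in root.children:
        found = depth(child, target, d + 1)
        if found is not None:
            return found
    return None


e = Node("E")
c = Node("C", [e])
a, d_node = Node("A"), Node("D")
b = Node("B", [c, d_node])
root = Node("/", [a, b])

print(depth(root, e))       # 3  -- E's depth
print(b.height())           # 2  -- B's height
print(b.degree, c.degree)   # 2 1
print(root.height())        # 3  -- the tree's height is its root's height
```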
General Tree: consider a binary tree, any type of binary tree, but without limits on how many children a node can have. Do you want 4? Do you want 17? Go ahead and fill your database, or whatever it is you're structuring.

Binary Search Tree: a different way of looking at binary trees, if you like. Here, the left nodes are always smaller than their parents in terms of the value they possess. The right nodes, however, are a little cocky, always having a value greater than or equal to their parents'. 'Well, that's not right. I've never seen this man in my life!' Used maps/dictionaries before? Yep: binary search trees, you deserter! Searching for a specific item in an array of 100 or 200 might not be a problem. Increase this to 10,000 or 1,000,000 items, however, and no one wants to be the user.

AVL Tree: named after its inventors, Adelson-Velsky and Landis, this is a self-balancing binary search tree in which each node carries extra information — the balance factor — whose value is either -1, 0, or 1. You remember heights, right? Well, the balance factor comes in as below:

BalanceFactor = height_of_left_subtree - height_of_right_subtree

Whenever a tree is out of balance, AVL trees work around this by rotating nodes: a left rotation, a right rotation, or a combined left-right or right-left rotation. What this means is that, at some point, a right child that has grown heavy becomes the parent, the old parent becomes its left child, and the right child's former left subtree is re-attached as the old parent's right subtree, and so on. Think of it in terms of the hands of a clock: nodes rotate in either direction until the tree is balanced. More detail on this can come in a later conversation. Keep an eye out! (A small sketch of a binary search tree and its balance factor follows the conclusion below.)

Conclusion: we've taken in a bucketload of information — trees and searching, children and siblings. What is all this knowledge without implementation? Fire up that laptop one last time this year and think of how you could have consumed that endpoint differently. Take a sabbatical, if you like, and get the rest you need, then come back and break the time complexity barrier. Write more efficient, more productive code, and let's make the user experience smoother. Here, TheGreenCodes
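As promised, here is a small sketch (again my own Python, not the article's code) of a binary search tree with the balance factor from the formula above:

```python
# Binary search tree: left child < parent, right child >= parent, so on a
# balanced tree a lookup skips about half the remaining nodes at each step.
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def insert(node, value):
    if node is None:
        return BSTNode(value)
    if value < node.value:
        node.left = insert(node.left, value)
    else:
        node.right = insert(node.right, value)
    return node


def search(node, value):
    while node is not None:
        if value == node.value:
            return True
        node = node.left if value < node.value else node.right
    return False


def height(node):
    # An empty subtree has height -1 so that a leaf ends up with height 0.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))


def balance_factor(node):
    # AVL balance factor: height(left subtree) - height(right subtree).
    # An AVL tree rotates nodes whenever this leaves {-1, 0, 1}.
    return height(node.left) - height(node.right)


root = None
for v in [8, 3, 10, 1, 6, 14]:
    root = insert(root, v)

print(search(root, 6))       # True, found in ~log2(n) steps
print(balance_factor(root))  # 0 for this tree: already balanced
```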
https://medium.com/@marvinkweyu/non-primitive-non-linear-data-structures-trees-b7a19693cb3f
['Marvin Kweyu']
2020-12-16 18:42:38.401000+00:00
['Software Architecture', 'Datastrucutre', 'Software Engineering', 'Software Development', 'Algorithms']
The Heat In Freezing
There is heat in freezing. I used to think I was weak or depleting; Too little to be worth greeting, But it is not that I am not good enough to be heating. I am the heat in freezing. I am enough. I am a scarce resource That no extraction could force. I am the barrier, buffer, and boundary Between you And absolute zero. I am what I create in my foundry. There is nothing you can do To be my hero, Because I am the heat in freezing. I do not exist for appeasing; My purpose is not to be pleasing. Though I may seem contradictory, I will achieve my victory Because no blizzard could quell The blazing fires of hell Radiating, emanating, from my being. I am the heat in freezing. I am liberated. I am freeing. I am the heat in freezing. My existence is exquisite and I won’t quit. I follow my own directive. I am not inadequate. I am not defective. For the heat in freezing, Is precious, Is sacred. 12/16/2020 HM, Jasper, Marie
https://medium.com/@hmloving/the-heat-in-freezing-32353f50d6fd
['H.M. Loving']
2020-12-16 21:31:31.813000+00:00
['Personal Development', 'Life Lessons', 'Resilience', 'Self Love', 'Poetry']
Mighty Bear 2020 Review: What We Did, What We Need To Do Better, What’s Next
Introduction

Every December we send a letter to our friends and investors reflecting on the previous year and sharing our thoughts and learnings. This year we decided to go a step further and share our thoughts with everyone.

Photo by Markus Winkler on Unsplash

Diversity

Companies sometimes make a big deal of their "diverse" workforce as a way of whitewashing (sometimes literally) a lack of diversity at senior levels by presenting only aggregate (company-wide) data. Our opinion is that if a company's decision-makers all look the same or come from similar backgrounds, then you have no real "diversity" worth speaking of. As a company which was founded and built in Singapore, our commitment to diversity includes having Singaporean talent at all levels, as well as ensuring a healthy mix of different types of people across the organisation. To evaluate our performance on this front, we sorted all full-time staff into three buckets — "Steering", "Core", and "Contract" — and took a look at the gender and nationality splits of each group. The studio is almost at an even 50/50 split between male and female team members. There's a good level of diversity at senior and junior levels, and both the Steering Group (leadership) and wider teams have a strong Singaporean core. We're pushing to ensure the Steering Group becomes even more diverse (the gender split needs to improve), as it's the one group where true diversity will have the widest-reaching impact. For the sake of transparency we've included tables with the raw data on diversity at Mighty Bear at the bottom of this post.*

*If your employer doesn't share this data, you should ask. If they're not tracking it, you should probably look for a new job.

Detailed Diversity Data

Total Studio Diversity

One of our goals for the year ahead is to increase female representation within our ranks, and double down on our commitment to creating an equitable workplace where everyone has access to the opportunities and support they need to succeed. Everyone at Mighty Bear is here on merit (we don't hire people simply to "tip the scales"); we look for people who are best in class and bring a unique perspective — and this means having a diverse team. We also have a duty of care to the local ecosystem, and we're working on bringing through even more local talent (more on this below).

Steering Group

We're committed to improving the gender ratio at this level. It is skewed by the three company founders all being male, but that means we have to make an additional effort to ensure everyone is getting access to the mentoring, development, and leadership opportunities they deserve. Our leadership team has a Singaporean core (64%) and a healthy mix of backgrounds, nationalities, and perspectives.

Core Group

Only one person on this list is not in a development role — we don't pad out the ratio of Singaporeans/PRs to foreigners by hiring lots of admin staff. This is a gender split we're comfortable with. It's likely that it will skew closer to 50/50 in the coming months as we continue hiring and scaling the studio. There's a global shortage of gaming talent, and being able to staff the core team with close to 80% local staff would be considered strong even in countries with a more established games industry and a larger pool of local talent available. We have 6 nationalities in total (some staff qualify for more than one passport): Singaporean, Indian, Indonesian, Norwegian, Turkish, and Vietnamese.
Contract

This is a mix of interns, part-timers, and one team member who is full-time remote (the number of remote workers will no doubt increase in the post-COVID world). The sample size is fairly small, but this group will no doubt end up looking more like the core team over time.

Culture and Employee Wellness

With the shift to remote work we had to develop new ways to make sure the team were coping during lockdown and that our company culture wasn't being neglected.

Culture

As CEO I took time out to check in on a 1:1 basis with our employees at various points since lockdown began in February. I made a point of trying to meet personally with everyone on the team (in a responsible and socially-distanced manner), as people are often not as open about what's on their mind over Slack. We arranged meals (no more than 5 in a group allowed in Singapore) with employees so they got to see each other from time to time. New members got to meet their colleagues at least once IRL. Aside from this, we've been fully remote since before lockdowns became mandatory, to help do our bit against COVID-19. We moved our Friday meetings online. In the Friday meeting employees are updated on all the company's KPIs, financial position, and plans, and can ask the founders anything — even the founders' salaries are open information within Mighty Bear. Without the benefit of face-to-face interactions, we had to be more intentional in making our online meetings more engaging and providing a real sense of fun. One of my pet projects this year was creating an anthem for our weekly Zeitgeist sessions. It was a surprise to the team, and the memes and impromptu karaoke sessions it generated made it well worth it. Someone actually wrote up the lyrics so everyone could sing along. Some employees are shocked by this level of openness when they first join, but we make a point of hiring professionals who we can trust. We think the best way to get the most out of people is to be open with them and give them the full picture to properly inform their decision-making. This allows us to not have to micromanage staff: we give them context and let them run with it. We realised that a lot of what we'd been doing in person on an ad-hoc basis did not translate to a remote format. Some of our processes ended up having to be formalised and documented to ensure they weren't neglected. We were over-confident in our ability to continue 100% as-is with the switch from in-person to remote working. A couple of hires we made in the past did not go well. This was through no fault of their own; it was simply not a good fit. As a result of this we overhauled our hiring process, standardised questions to eliminate biases, and added multiple layers of cultural interviews. This has made hiring harder and slower. (The new process uncovers a lot and is especially strong at detecting bullshit.) However, since implementing the new process we've seen an improvement in levels of cultural alignment and performance with the new hires. (You know who you are! 😌) Even if it makes hiring really hard, it's 100% worth keeping a culture-oriented process to avoid hiring mistakes and maintain a standard of excellence.

Wellness

We operate a flexible/unlimited leave policy. During lockdown we realised people were not taking as much leave as they should. This is not surprising, as taking leave in Singapore usually means taking a vacation overseas, and it's not straightforward to leave and get back into Singapore at the moment.
As CEO I started mandating that certain people *had* to take leave (they simply hadn't taken enough). In the year ahead we'll have to work on a way to formalise this process and ensure that everyone in the studio takes at least 20 days off. Another aspect of this is studio opening times. As people had total flexibility in their leave, we never saw the need to close the studio unless there was a public holiday. We realised that this is counter-productive, as people don't want to take leave if their colleagues are working. This month we'll be closing during normal working days (not public holidays) for the first time, and we will likely do this around specific holidays (Chinese New Year, Christmas, and NYE) going forward. We provide unlimited paid mental health support to all our employees. Mighty Bear has an arrangement with a mental health practitioner who invoices us if our employees decide to visit them. We don't monitor use of the service and we don't ask questions of anyone using it. We encourage anyone who's dealing with whatever is going on in their lives to get the help they need and not worry about the cost. We sent people semi-regular reminders of how much we value and care about them. These were both verbal (acknowledgment and gratitude) and via care packages we surprised people with. Bears that care. These are a good foundation, but there's more we could be doing. In the year ahead we'll explore new options to make sure that people are healthy, happy, and looked after.

Product

This year has been very eventful. We shipped Butter Royale on Apple Arcade, and started on two big new projects, both of which are planned for release in 2021.

Butter Royale — What We Did

The game went live on January 24th. It's out on iOS, Mac, tvOS, and iPadOS. By the end of this year we will have released 8 additional updates. The game has grown hugely in terms of content and features. The roadmap has been dictated by our own thoughts on what improvements were needed, as well as by regular dialogue with the community.

Butter Royale — Learnings

Without going into specific KPIs for Butter Royale, here are some high-level insights: Making a game for a subscription service is fundamentally different to building a Free-to-Play (F2P) game. It's not a case of "just build the best possible game". Each subscription service (be it Xbox Game Pass, PlayStation Plus, Apple Arcade, etc.) has its own way of identifying successful titles in its catalogue. Being part of a subscription service means your game will be part of a curated collection. You will need to demonstrate technical excellence and balance the needs of the feature roadmap alongside changes necessitated by OS updates, new hardware, updates to the services, and changes to the platform holder's Technical Requirements (TRCs). When you make a mobile F2P game you don't have to worry about TRCs — your development team will be 100% focused on releasing updates and improvements. You will probably ship fewer big features per year, but players will have a much better overall experience. Apple Arcade is the first gaming platform to be 100% ad-free and have privacy at its core. This is great, but it also presented us with a challenge we'd never encountered before: how could we acquire users without resorting to traditional User Acquisition (UA) practices? We've focused our efforts on community-led growth by engaging directly with our community and potential players on social media channels, rather than pouring money into ads.
The masters of this approach are Proletariat (Spellbreak), Mediatonic (Fall Guys), and BetaDwarf (Minion Masters), and there's lots online about how they do things. We're still iterating on and optimising our approach with our community.

Stewardship

As one of the larger and more prominent studios in Singapore we feel a sense of duty to the wider ecosystem.
https://medium.com/mighty-bear-games/mighty-bear-2020-review-what-we-did-what-we-need-to-do-better-whats-next-475fdc604d3c
['Simon Davis']
2020-12-11 15:05:02.534000+00:00
['Game Development', 'Diversity', '2020', 'CEO', 'Year In Review']
Tanker: A Multi-Datacenter Customer Data Migration Framework
Illustrative diagram of multi-DC data migration

Intro

Every successful or growing SaaS company usually reaches the point where serving customers from one datacenter (DC) isn't enough, and a change in infrastructure, architecture, and software applications becomes necessary. As you likely know, serving customers from multiple DCs can be quite challenging, and it requires proper design and implementation from the lowest levels of networking all the way up to external requirements like customer data protection rules. This is all done for the goal of a faster, more secure, and more reliable service. Pipedrive is no exception to this rule. In this article I touch on the history and reasons behind implementing our own mechanisms for the customer data lifecycle. I will also go deeper into the data migration platform we have built and use, and lastly touch on future enhancement possibilities.

History

Let's start with the "Megaparsec" project (as we call it), which was brought to the table in 2017 when we decided that a second DC was inevitable. From that point forward, our Infrastructure team and a few others began preparing for a giant leap. One of the main contributors behind needing a new DC was the introduction of GDPR, a regulation that forced us (beginning May 25th of 2018) to store European Union (EU) customer data within the EU — all while we were continuously getting more and more customers from that region. From day one, Pipedrive has had EU clients whose data was stored in the United States based DC. To abide by GDPR rules, their data had to somehow be sent back to and served from the EU, to comply with the new data protection policies (and to make our EU customers happier with faster response times and increased security).

Before data migration we needed to solve the multi-DC request routing problems. This is where Barista — our own public request gateway service — came into play. It provided us with easy request routing from DC to DC, centralized authentication mechanisms, rate-limiting, service discovery, and more. This service deserves an article of its own, so I won't get into details now.

After we solved the cross-continent network issues, it was time to move the customer data. At first, customer DBs were migrated manually by our devops, but this wasn't a scalable solution going forward. The good part is that there were only a few microservices with their own DBs at this time — some already had them in addition to the main customer data that lived in a company-dedicated DB. [You can read more about the database architecture in this article by our very own Infra Architect, Vladimir Zulin.]

With the number of services and dedicated DBs growing every month, manual data migration started to become a pain. It came as no surprise when Pipedrive's CTO, Sergei Anikin, approved development of Tanker — a multi-DC customer data migration framework and orchestrator service.

Illustration of a tanker ship

Why Tanker? Because tankers are ships used to transfer goods, and our service is one that transfers valuable customer data. My team started this project in Q4 of 2017 and launched it live in Q2 of 2018. An interesting factoid is that Tanker became the first big service written in Golang at Pipedrive, and it opened a route to this stack for other services, teams, and now tribes.

The problems we wanted to solve

As stated earlier, moving customer data from one DC to another became a necessity for us. This involves not only movement to a European DC, but EU to US data movement as well.
Requirements for the location of data storage change either due to a customer's legal address change, for hardware performance reasons, or because a majority of users are geographically located near a specific DC. Those changes force us to do data migrations on demand and sometimes at a big scale — moving thousands of customers between DCs, or between database servers in a single DC. Additionally, our support and infrastructure engineers did a lot of manual work to migrate customer data. Automation was required to reduce the cost and save us from basic human mistakes. What this all comes down to is that the need for a data migration service was justified, but as time passed, we also wanted to use the migration framework as the basis for company data lifecycle management. Customers migrate, cancel accounts, and often come back after some time has passed. For our systems, this translates into the following main operations:

Main multi-DC migration framework operations

I won't cover registration or bootstrapping operations of customer data here, as those don't directly belong to our data migration framework. We have a separate set of services that take care of all customer registrations and billing.

How the migrations were done

The lifecycle of customer data starts directly after the company account registration. After an application trial usage period has passed, or a company has decided to leave Pipedrive and closes their account, we need to delete all their data (to follow GDPR) and end the lifecycle. Unfortunately, this can cause problems because, in the first 30 days following an account removal, users sometimes ask us to have it restored so they can resume using their previous data (data protection policies prevent us from storing backups for a longer period). For the sake of making this article shorter, I will describe only a part of the migration process, leaving some details about account removal and restoration out.

To be able to move/migrate any data, one needs to make an export of it in a source location and only then transfer it to a target. It sounds simple, but there's an issue — we have multiple services that deal with customer data, multiple storages, multiple types of storage, and data that is continuously written from numerous integrations and sources, public API calls, etc. Basically, we can't simply dump a single DB, create an sFTP connection, copy the data, and re-import it. The migration process is much more complicated in the microservice world.

The multi-DC migration framework consists of an orchestrator (Tanker) that runs in an independent region, migration agents (RoRo — a type of tanker ship) that run in the specific geo-DCs where customer data is located, company data provider (CDP) services that actually access customer data, and a backup storage service (AWS S3 in our case) where we store the exports.

Simply put, Tanker as the orchestrator:
- holds a connection to its agents (RoRo) in the DCs via gRPC
- schedules sub-tasks and monitors/updates their status
- tracks overall progress of the main task and reports it to our Backoffice UI for engineers.

The orchestrator and agents use MySQL as a data store and as a queue for tasks, subtasks, and related log records.

Simplified migration process of a "Happy" flow

Multiple steps must be covered for a successful migration:

To ensure data integrity, we stop writing to storages and stop serving customer API requests once a backup/migration/delete/restore process is initiated. We basically completely lock the company's public and internal traffic.
Additionally, as we use event queues (Kafka, RabbitMQ), the orchestrator waits for some time before starting exports, until the customer's events get processed. During the lockdown, no one from the customer's company can use the application (no integration calls will pass through), so operations can safely be done without breaking or losing any data.

We create exports from all customer-related storages via the related company data provider (CDP) services and save them in an Amazon Web Services S3 bucket for easy access from the target region. These different exports should not be out of sync, meaning all data should be considered a slice at a specific time (the time the backup process was initiated).

We make sure that all export operations have finished successfully, then start import tasks in the target DC and stream data to CDPs for importing from AWS S3.

Only if all import operations in all customer-related services are successful will we configure our systems to serve customer requests from the target DC, unlock the traffic, and consider the migration a success. After some time, we clean up the source DC, as the account data there is no longer in use. If something goes wrong, we cancel all imports, unlock the company in the source region, roll back all successful imports (delete data) in the target region, and consider the migration task failed (a sketch of this flow, with its rollback path, closes this article).

On every step we confirm that the operation was successful, that the number of bytes received from and sent to S3 was as expected, that the number of imported rows in tables was as planned, and so on. This ensures customer data safety and integrity.

We store exports/backups on S3 as it is relatively cheap, has a built-in TTL mechanism for automatic file cleanup after a period of time (GDPR), and is accessible from any geo location. The format of the exported files is determined by the CDP service, so the teams behind each service have the freedom and responsibility to come up with any format, depending on the data structure they own and the storage type used on their side. The multi-DC migration orchestrator and its agents don't modify the contents of backups; they act as a proxy between the CDP service and S3, keeping the memory consumption of agent instances low. An agent does go through the file content received from a CDP or S3, but only uses chunks of the streamed data to calculate checksums that ensure the integrity of file transfers; it does not store the whole file in memory.

Every CDP has standardized, well-documented API endpoints — agreed on beforehand with all teams — for export, import, delete, and post-migration operations. This means that every new feature-supporting microservice that deals with and owns customer data should have those migration API endpoints implemented before it goes to production, so that at any point we can provide our customers with reliable migration, backup, removal, or restoration flows.

Thoughts for the Future

Tanker and RoRo were the first services written in Golang for us, and at that time nobody really knew how to use the language properly. That said, there haven't been many issues, since we do tests (a dedicated testing service & unit tests), but the code is pretty complex and not easy to modify or read. This also happened before with NodeJS, where we learned by using it, collecting best practices and experimenting, making services more reliable, self-healing, and less prone to mistakes. Nowadays NodeJS is a pretty mature stack at Pipedrive, and we are on the same path with Golang.
We create unified/reusable libraries for our Golang ecosystem, and as our expertise grows, I can see significant improvement ahead for the data migration services in terms of reducing the technical debt we originally introduced when we rushed into adding more capabilities (restoration, on-demand backups, manual and automatic removal of obsolete account data). Additionally, we could make backups without the need for a total customer lockdown, but this may be extremely hard to achieve, as data may quickly become out of sync if we do not stop incoming requests during the backup phase. Our plan is to constantly improve Tanker, RoRo, and the other framework services, especially in terms of reliability, readability, maintainability, and visibility. When a migration has to operate with tens of CDP services in parallel and involves connections to multiple DCs, monitoring is always the helping hand in understanding what happened if something goes wrong.

One future use case of the framework could help engineers investigate issues/bugs by restoring customer data backups into dedicated temporary environments — testboxes — basically replicating the production environment for a specific customer. This way engineers can perform deeper investigations without breaking customer data consistency or interfering with normal customer activity.

Conclusion

I have shown you only a glimpse into the main flow of the migration process; in reality it is more advanced and includes additional steps that sometimes, for different CDPs, need to run before and after the import processes are complete, e.g. reinitializing some synchronizations, reinstating integrations, clearing caches, etc. Over the years of migration framework usage, we have helped restore many different customers' accounts so they could continue from the point where they left us. We've moved thousands of them from one DC to another, conducted numerous DB server maintenances (yes, we can migrate data between DB servers in the same DC as well), cleaned up our infra from unused data, and more. Obviously, it is a huge investment to develop something like Tanker, but once you master customer data management properly, it opens up possibilities to serve users in ways that were either impossible before or were done manually — raising happiness and quality of experience, and saving customer support and devops engineers' time through automation. Happy migrating!
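As promised above, here is a hypothetical sketch of the happy flow and its rollback path. The real Tanker and RoRo are Go services; this Python sketch only mirrors the sequence of steps described in the article, and every function name in it is illustrative, not Pipedrive's actual API:

```python
# Stubs standing in for real infrastructure calls (all hypothetical).
def step(name):
    def run(*args):
        print(f"{name}: {' '.join(map(str, args))}")
    return run

lock_traffic = step("lock public/internal traffic")
drain_event_queues = step("wait for Kafka/RabbitMQ events to settle")
export_to_s3 = step("export to S3 via CDP")
import_from_s3 = step("import from S3 via CDP")
rollback_import = step("rollback import in target DC")
switch_routing = step("switch routing to target DC")
unlock_traffic = step("unlock traffic")
cleanup_source = step("cleanup source DC")


def migrate(company, source_dc, target_dc, cdps):
    lock_traffic(company)         # no public or internal traffic passes
    drain_event_queues(company)   # exports start only after queued events settle
    imported = []
    try:
        for cdp in cdps:          # one export per company data provider
            export_to_s3(cdp, company, source_dc)
        for cdp in cdps:
            import_from_s3(cdp, company, target_dc)
            imported.append(cdp)
        switch_routing(company, target_dc)
        unlock_traffic(company)
        cleanup_source(company, source_dc)  # deferred in the real system
    except Exception:
        for cdp in imported:      # delete partial data in the target DC
            rollback_import(cdp, company, target_dc)
        unlock_traffic(company)   # company keeps running in the source DC
        raise


migrate("acme", "us-1", "eu-1", ["deals", "contacts", "files"])
```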
https://medium.com/pipedrive-engineering/tanker-the-story-of-multi-dc-customer-data-migration-framework-ad842c2c6b9c
['Sergei Kretov']
2020-06-16 10:09:51.053000+00:00
['Multi Data Center', 'Framework', 'Customer Data Management', 'Data Migration', 'Engineering']
Coming Here Was A Mistake
Said through the nose of a grandfather clock because the time wasn’t bright Coming through a mistake still standing still loving the company of misery You did pull away & I pulled back into the swamp of memory where I cherry-picked the best reeds to blow The sound of that wasn’t anything like the remorse we’re so used to Then hiding our Canadian fists in an effort to make it to the door first holding open empty expressions with the confidence of geriatrics So you shovel your argument along with the other shit & let the night take hold of your hair for the separation that isn’t a mistake
https://medium.com/scrittura/coming-here-was-a-mistake-32372c4aa8f9
['J.D. Harms']
2019-12-11 15:39:24.718000+00:00
['Relationships', 'Breaking Up', 'Disappointment', 'Poetry', 'Hurt']
The Unforgettable Christmas Memory
Best Wishes and Regards from Proud Patricians

The festival of Christmas has always been filled with joy and festivities. But if you had a convent education during your high school days, you will definitely be able to relate to the memories we have highlighted here. Etched strongly in our minds, the next few paragraphs fondly relive each part of the celebration. The spirit of celebrating Christmas would set in soon after Good Friday. Beginning with the morning assemblies — the reading of the Bible by our Principal, the hymns and carol singing — every part of the celebration rang in a festive vibe. Once the students were notified about all the competitions, the participation from each house would start flooding in. The excitement would flourish not just within the student groups but amongst the teachers and staff members as well. The enthusiasm spread through all the students and teachers, who worked dedicatedly not only to win the competitions but to boost the enjoyment of the practice sessions. Grinding practices and the selection of students would bring in the Christmas week of joy, competitions, and celebrations. Every day would bring with it new winners, greeted by hoots and cheers. There was the skit competition, with the different groups portraying the birth of Jesus Christ by highlighting different chapters and verses from the Bible. There was the melodious carol-singing competition — 'Joy to the World', 'O Come, All Ye Faithful', 'Ghungroo Baaje', 'Silent Night' — whose choruses would end the day by wishing you a euphonious Christmas. And there were the art competitions — Christmas tree decoration, bell decoration, chart making, Christmas wreath making, backdrop making — which gave everyone a platform to showcase their talent and a visionary outlet for ingenious, creative, and innovative minds. Every student used to be so excited for this time of the year, as the winners of all the competitions would get the opportunity to perform in the final event. Rewards were the most awaited part of every celebration… but Christmas being 'Christmas', all the students would doll up in sync with the theme of red and white, standing in a queue to receive those imported chocolates by striking their names off the attendance list. Dancing to the beats of music played by the DJ, we got the chance to dedicate our favourite songs to our beloved groupmates. Christmas was not just a celebration; it was much more than that. It was about love, it was about joy, it was about the gesture, and of course it was about spreading happiness. So here we are, getting you to feel the nostalgia and wishing you the treasure of the heart. Merry Christmas & A Happy New Year, folks.
https://medium.com/@aawaziilm/the-unforgettable-christmas-memory-34f429736589
['Aawaz- The Literature Club']
2020-12-25 09:40:50.830000+00:00
['Christmas', 'Happiness', 'Festivals', 'Joy']
When does toeing the party line become insanity?
Before I even finished this article I could already anticipate the comments I will get — they're criticisms I've received many times before from other Republicans when I ask the sometimes difficult or uncomfortable questions. Comments with vague (or explicit) references to socialism, or theories that I'm actually a Democrat, or *gasp* a RINO (how ridiculous is it that we are so concerned with policing others inside our own party that we feel the need to create derogatory terms for those aligning with us? — but that's a different tangent for a different article). So let me preface this by assuring you that I am none of those things; I have spent the better part of the last 15 years working in various capacities for the Republican Party, and it is work that, for the most part, I am very proud to have participated in. I am a firm believer that we have value to contribute politically to the country, and that we are an integral part of both America's historical and future success. But to fully embrace that potential we have to be willing to face the difficult conversations; so it's out of love for the party, and more importantly for the country, that I feel compelled to ask the question: why do we, as a party, keep pushing the same fundamental "trickle-down" economic narrative that data and history have shown us does not work?

Trickle-down economics, or trickle-down theory, is a niche manifestation of a broader economic theory called supply-side economics. While supply-side economics argues for lower taxes across the board, trickle-down specifically focuses on decreasing taxes for the higher end of the income/economic spectrum. The basic concept is that if you decrease taxes on the rich and powerful, those tax benefits will translate into increased wages, jobs, and overall economic growth. This approach was pushed as a policy agenda most notably by conservative demi-god Ronald Reagan during his presidency, though his budget director later publicly criticized the approach in retrospect. But it's in this period that the underlying loyalty to trickle-down messaging really began for the Republican Party. However, after trickle-down policies got a solid test run, studies conducted in 2020 by David Hope of the London School of Economics and Julian Limberg of King's College London retroactively assessed the efficacy of these policies across fifty years of data, and the numbers show that these policies have failed to promote jobs or economic growth, and may have actively contributed to a reduction in development and growth.

In fairness to these policies, the world looks very different today than it did in Reagan's day. There is a wide array of factors that these policies did not, and could not reasonably, have accounted for; most notably, the misdirection of personal and corporate income to outside countries in order to shelter revenue from liability and taxes. But despite the many reasons why the reality created by these policies may differ significantly from the anticipated impact, the numbers are staring us straight in the eye. The data has definitively shown that this principle, as historically applied, does not work. It fails to help middle- to low-income individuals grow and sustain wealth, it fails to create widespread economic development, and it fails to facilitate financial prosperity across the board.
Ultimately the rich have become richer, and everyone else has remained the same; in a historical period where income inequality is at an all-time high, and with economic disparity and instability being the number one historic indicator of civil unrest, this is not an issue Republicans can continue to ignore. Yet despite fifty years of data, observation, and trial and error, we continue to push the same economic narrative despite knowing how it has played out. To put it bluntly, and with as much love as I can, this is total insanity — and it's time we talk about it. Now if you were to lay out the principles of the Republican Party, as articulated in the national platform, next to the characteristics of both trickle-down economics and supply-side economics, you'd probably be surprised to find that they do not align well. The first general policy point articulated in the party platform from 2016 explicitly emphasizes rebuilding the economy and creating jobs, followed closely by the articulation of party tax principles. Under the section entitled "Rebuilding the Economy and Creating Jobs" it discusses providing opportunity to learn, work, and realize the prosperity freedom makes possible. This is a great thing, and if you've ever read Arthur Brooks' book entitled "The Conservative Heart" then you are likely aware of the truly compassionate underlying principles that this policy represents. However, as our domestic population increases, and jobs continue to be lost to technology and innovative production methods, the concept of driving opportunity for prosperity has become ever more elusive for middle America. While it was great in theory to believe that if we cut taxes for the rich they'd reinvest that capital into the markets through job creation, the data has shown us that is not what's happened. Those benefitting from the system haven't reinvested in economic development; they've hoarded those funds and oftentimes even moved them outside of the country. Why do I reference the platform? Because I want those of you that identify as conservatives, those that are already itching to call me a RINO, to realize that I am not pulling this from thin air. I am a Republican just like you, and because of that identity I want the party to live up to the principles that we articulated and approved as a unified group. Trickle-down is not doing that; trickle-down is working against us, and we've developed such a loyalty to it that we've become willing to illogically forsake our own fundamental beliefs to avoid publicly denouncing this approach and moving to something that can more effectively get us what we want. We are not the party of trickle-down economics; we are the party of economic opportunity and growth for all — because history shows us that creates strong families, strong communities, and a strong nation. So where do we go from here? The new kid on the block, Velocity Economics, is a term coined in 2018 by Chris Draper for an economic framework that expanded upon Milton Friedman's later work on the velocity of money. Expanding beyond Friedman's macroeconomic frame, velocity economics is a production-focused theory that seeks to maximize the rate of production, distribution, and consumption of goods and services by removing "decelerators" that inhibit individual achievement.
While supply-side economics suggests that if you protect/supply the rich with more resources and money the wealth will trickle down, velocity economics suggests that if you instead help individuals to succeed faster, the result will be sustainably equitable wealth accumulation. This modern alternative to economic policy thrives when applied to the economic portions of the Republican platform where supply-side policies are currently failing. The focus of velocity economics is on encouraging policies that allow the government to remove impediments to individual productivity, impediments which inhibit the accumulation and attainment of individual wealth and prosperity. As the Republican platform explicitly states, "Government cannot create prosperity, though government can limit or destroy it." Velocity economics provides a viable alternative for fostering a robust economy while simultaneously facilitating long-term economic development and growth. After having many conversations with Chris Draper, and with my friend Graham Klemme, I came to realize that velocity economics and the Republican Party's principles have the potential to be great friends. For Republicans specifically, it provides a means of escaping the cyclical and failing narrative of trickle-down economics, and of bridging the gap between our ideals as articulated in the platform and as enacted in policy. From the start I was confident that this was a concept both liberals and conservatives alike could latch onto, but ultimately people like to hear it from their own. For that reason, I'm leading the charge on opening up this conversation to moderates and conservatives throughout middle America and beyond — and on January 11th, we'll be publicly launching a nationwide collaborative policy organization called MiddleOut, which will focus on supporting policy and initiatives that foster American productivity and economic growth without leaving middle America behind. Our work will be bipartisan, and will seek to incorporate voices from each side of the aisle. But most importantly, it will drive economic development that is both sustainable and effective in providing every American with individual opportunity for long-term prosperity. As the party of fiscal conservatism, Republicans have naturally embedded ourselves in the political fabric of America as the guardians of economic growth and development. But for far too long we have denied the data-driven realities of trickle-down economics and continued to perpetuate policies that fail to protect American prosperity long term. It is time for us to break our loyalty to this ineffective economic approach, and transition to modern policies which will ensure commitment to our fundamental ideals and keep the economic opportunity which serves as a cornerstone of the American dream a reality for generations of Americans to come. Whether you agree or disagree, the best part about this grand democratic experiment we call America is that you have the right to engage in conversation and debate. If you want to continue this conversation with me and my colleagues, I encourage you to join us at www.MiddleOut.org where you can subscribe to our mailing list, stay up to date on the work we're doing, and get involved in the conversation yourself.
https://medium.com/@jvittorio/when-does-toeing-the-party-line-become-insanity-6519c14b40f9
['Jessica Vittorio']
2020-12-22 22:31:06.128000+00:00
['Policy', 'Republican Party', 'America', 'Economics', 'Politics']
How does Jest work inside?
Test function

Let's start with the basic structure of a test, the test method itself. I need to be able to write a test implementation, give it a title, and flag whether it went well or not. We wanted to have something like this:

```js
test('Passing test', () => {})
// ✓ Passing test
```

and:

```js
test('Failing test', () => { throw new Error('Error message') })
// ✕ Failing test
// Error message
```

So we can implement:

```js
function test(title, callback) {
  try {
    callback()
    console.log(`✓ ${title}`)
  } catch (error) {
    console.error(`✕ ${title}`)
    console.error(error)
  }
}
```

Expect

Now we need to be able to assert the values we want to test. For simplicity, we will just show the toBe function. We will test the following function:

```js
function multiply(a, b) {
  return a * b
}
```

And we want to test:

```js
test('Multiplying 3 by 4 is 12', () => {
  expect(multiply(3, 4)).toBe(12)
})
// ✓ Multiplying 3 by 4 is 12

test('Multiplying 3 by 4 is 12', () => {
  expect(multiply(3, 4)).toBe(13)
})
// ✕ Multiplying 3 by 4 is 12
// Expected: 13
// Received: 12
```

We could have:

```js
function expect(current) {
  return {
    toBe(expected) {
      if (current !== expected) {
        throw new Error(`
          Expected: ${expected}
          Received: ${current}
        `)
      }
    }
  }
}
```

Mock

When we want to avoid running a function that is called inside another one, we mock it. That means we override the original implementation. Jest has a function for this purpose called jest.fn (fn as in function). Let's see how it works:

```js
// random.js
function getRandom(min, max) {
  return Math.floor(Math.random() * (max - min) + min)
}

export { getRandom }
```

and:

```js
// cards.js
import { getRandom } from './random.js'

const getRandomCard = (cards) => {
  const randomCardIndex = getRandom(0, cards.length)
  return cards[randomCardIndex]
}

export { getRandomCard }
```

we want to test:

```js
// cards.test.js
import * as randomGenerator from './random.js'
import { getRandomCard } from './cards.js'

test('Returns 7♥', () => {
  const originalImplementation = randomGenerator.getRandom
  randomGenerator.getRandom = jest.fn(() => 2)

  const result = getRandomCard(['2♣', 'K♦️', '7♥', '3♠'])

  expect(result).toBe('7♥')
  expect(randomGenerator.getRandom).toHaveBeenCalledTimes(1)
  expect(randomGenerator.getRandom).toHaveBeenCalledWith(0, 4)

  // we keep the test idempotent
  randomGenerator.getRandom = originalImplementation
})
```

To make this possible, we need jest.fn:

```js
function fn(impl) {
  const mockFn = (...args) => {
    // record every call's arguments so assertions can inspect them
    mockFn.mock.calls.push(args)
    return impl(...args)
  }
  mockFn.mock = { calls: [] }
  return mockFn
}
```

and we implement:

```js
import assert from 'assert'

function expect(current) {
  return {
    toHaveBeenCalledTimes(nrTimesExpected) {
      if (current.mock.calls.length !== nrTimesExpected) {
        throw new Error(`
          Expected: ${nrTimesExpected}
          Called: ${current.mock.calls.length}
        `)
      }
    },
    toHaveBeenCalledWith(...params) {
      // simplified version: deepStrictEqual throws a descriptive error on mismatch
      assert.deepStrictEqual(current.mock.calls[0], params)
    }
  }
}
```

jest.spyOn

If we want to make sure that our tests are idempotent, we need to restore the mocked function to its original value. With the mock implementation described above, we have to keep the original value around and then reapply it at the end of the test. Let's see how jest.spyOn makes our lives easier.
```js
// cards.test.js
import * as randomGenerator from './random.js'
import { getRandomCard } from './cards.js'

test('Returns 7♥', () => {
  jest.spyOn(randomGenerator, 'getRandom')
  randomGenerator.getRandom.mockImplementation(() => 2)

  const result = getRandomCard(['2♣', 'K♦', '7♥', '3♠'])

  expect(result).toBe('7♥')
  expect(randomGenerator.getRandom).toHaveBeenCalledTimes(1)
  expect(randomGenerator.getRandom).toHaveBeenCalledWith(0, 4)

  // we restore the implementation and keep the test idempotent
  randomGenerator.getRandom.mockRestore()
})
```

And for this to be possible, we can think of:

```js
function fn(impl = () => {}) {
  const mockFn = (...args) => {
    mockFn.mock.calls.push(args)
    return impl(...args)
  }
  mockFn.mock = { calls: [] }
  // allow swapping the implementation after the mock has been created
  mockFn.mockImplementation = newImpl => (impl = newImpl)
  return mockFn
}
```

and

```js
function spyOn(obj, prop) {
  const originalValue = obj[prop]
  obj[prop] = fn()
  // mockRestore puts the original function back in place
  obj[prop].mockRestore = () => (obj[prop] = originalValue)
}
```

And that's it. Jest is not a mystery anymore. I hope you had as much fun with this article as I had while thinking about and researching it.
https://medium.com/dailyjs/how-does-jest-work-929d0de0fa03
['Flávio H. Freitas']
2020-03-13 15:45:22.470000+00:00
['Programming', 'JavaScript', 'Front End Development', 'Web Development', 'Testing']
Mind Your Nose
How smell training can change your brain in six weeks — and why it matters.

By Ann-Sophie Barwich

When it comes to training your brain, your sense of smell is possibly the last thing you'd think could strengthen your neural pathways. Learning a new language or reading more books (and fewer social media posts) — sure. But your nose? It deserves more credit than that, because the olfactory system is one of the most plastic systems in your brain. Neuroplasticity describes how the brain flexibly adapts to changes in the environment or when exposed to neural damage. Stimulating the brain strengthens existing neural structures and further adds fuel to the brain's capacity to remain adaptive, thereby keeping it young. And your smell system is particularly adept at repair and renewal. (Olfactory cells have recently been used in human transplant therapy to treat spinal cord injury, for example.) One reason for the olfactory system's adaptive responsiveness is that it undergoes adult neurogenesis. Humans grow new olfactory neurons every three to four weeks throughout their entire life, not just during child development. (These sensory neurons sit in the mucus-coated lining of your nose, where they pick up airborne chemicals and send activity signals straight to the core of the brain.) If it weren't for this ongoing regeneration of sensory cells in your nose, we would stop detecting smells after our first few colds. Neural plasticity weakens as we grow old — and so does our sense of smell. Olfactory performance decreases around the age of 70 as the regeneration of olfactory neurons slows down. Yet this process of regeneration never stops entirely. Training your nose helps slow down that decline and offers a great way to increase your brain's plasticity. That said, increasing your sensitivity to odors in the environment does not always sound desirable. Smell usually comes with negative connotations: that whiff of urine in the metro, that overpowering literal skunk, or that trail of body odor from the person walking in front of you. But paying more attention to the smells around you also has benefits, and not just for a greater enjoyment of food aromas and neighbors' gardens. Recent studies show that olfactory abilities correspond with differences in cortical areas involved in smell processing in the brain. Johannes Frasnelli, an olfactory scientist at the University of Quebec in Trois-Rivières, explained: "We did some studies where we saw that there is a link between the structure of certain brain regions — like the thickness of the cortex and the thickness of the gray matter layer in certain brain olfactory processing regions — and the ability to perceive." Frasnelli and his colleagues found that people with better perceptual capacities had a thicker cortex. When they looked at people who had lost their sense of smell, they also saw a reduction of cortical matter in areas involved in odor processing. That raises the question: could you change the structure of your brain simply by smelling things? In 2019, Frasnelli's group discovered that as little as six weeks of intense olfactory training results in significant structural changes in some regions of the brain (namely, the right inferior frontal gyrus, the bilateral fusiform gyrus, and the right entorhinal cortex). Participants were given three tasks with a cognitive component. The first task was a classification task: participants had to organize two simple odor mixtures by ordering each from lowest to highest concentration. The second was an identification task.
Participants were presented with a target odor blended with a citrus scent in a specific ratio (4%). Then they were given the same blend in different ratios and asked to order the samples according to quality (more citrusy or less?). Lastly, the detection task: was the learned target odor present in a range of 14 samples of different odor mixtures or not? This entire exercise was undertaken each day for 20 minutes during the six weeks. Responses were monitored and evaluated on speed and accuracy. Such intense olfactory training led to a general improvement in olfactory performance. Plus, the increase in olfactory skill was not restricted to the training exercises but also transferred to other olfactory abilities — abilities that had not been tested as part of the training. These perceptual tests included: the detection threshold of an odor, accuracy in odor discrimination (same or different?), cued odor identification (which of these four descriptors is correct?), and even free odor identification (identifying an odor without cues!). Increasing insight into what the nose knows, and how it communicates with the brain, has broader implications — even philosophical ones. Old (yet still prevalent) cookie-cutter views of the mind coax us to believe that our senses are passive — indifferently picking up signals in the world that are then processed by the brain. Perception, in such views, is a process separate from cognition. Highly plastic systems such as olfaction present us with a much more intriguing and interwoven picture of the mind: training your nose's performance (just like other cognitive capacities) fundamentally shapes what you perceive by rewiring the system. Your senses are far from being impartial transmitters; what you are able to perceive in the world ultimately hinges on the depth of your cognitive engagement with it. In other words, your mind does not emerge apathetically as a product of some remarkable, intricate molecular twists performed by the brain. The mind is enhanced by what you can train your brain to do. Just like strength is a result of muscle training, cognitive training of the senses is the bodybuilding of the brain.
https://medium.com/neodotlife/mind-your-nose-f0b097d533bb
[]
2020-10-10 20:17:37.132000+00:00
['Biotechnology', 'Neuroscience', 'Brain', 'Wellness', 'Science']
İnce Dar Yolda Geceleyin
https://medium.com/t%C3%BCrkiye/i%CC%87nce-dar-yolda-geceleyin-bc4013aa455d
['Eray Erkin']
2020-12-27 19:26:16.069000+00:00
['Türkçe', 'Erkindt', 'Şiir']
Build an Interactive Choropleth Map with Plotly and Dash
Build an Interactive Choropleth Map with Plotly and Dash Last week, I finished my final assignment in the IBM Data Science course, which was to find an ideal suburb for opening an Italian restaurant based on location data. During the process, I web-scraped the median property prices (i.e. House buy/rent and Unit buy/rent) for each suburb in Sydney and plotted each on its own choropleth map. Choropleth maps for different parameters However, I wondered whether it was possible to combine all these maps into one and select among them just by clicking a name in a dropdown menu. In addition, I wanted to add one more plot next to the map to show the top 10 suburbs with the highest median prices accordingly. These add-ons would make the map more informative and user-friendly. In this post, I will share my notes about how to create an interactive dashboard with a choropleth map and a bar plot using Plotly and Dash. I assume that you have prior experience with Plotly. Prerequisites Install plotly, dash, and pandas on the system. I created a virtual environment using conda to keep the system organised and to avoid messing up other packages in the system. I have introduced conda virtual environments in my previous post if you want to know more about conda envs. The following command will create a virtual environment that has all the required packages for plotly and dash: conda create -n <whatever name you want for this env> -c plotly plotly=4.4.1 -c conda-forge dash pandas To be able to draw a map in plotly, we also need a token from Mapbox, which provides various nice map styles and, most importantly, is free. In addition, two datasets were used in this dashboard, and they can be downloaded from my github (← You can also find the dash app code here). If you want to explore the dash app now, after finishing the above steps you just need to assign your Mapbox token to mapbox_accesstoken in the script and run it in the same directory as the two datasets. Once the following message pops up, open http://127.0.0.1:8050/ in your preferred browser and the dashboard will load there: $ python dash_project_medium.py Running on http://127.0.0.1:8050/ Debugger PIN: 880-084-162 * Serving Flask app "dash_project" (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: on As shown in the following figure, I have labelled the key functions used in the script for creating the corresponding elements in the dashboard. Dissecting the dashboard The general principle in building this dashboard via plotly and dash is to 1) arrange and combine different elements together on a defined canvas; 2) compile all elements in one container: fig=go.Figure(data=trace2 + trace1, layout=layout); and 3) pass this container to dcc.Graph, where dcc is the dash core components module, and dash will create an HTML-based web application for the dashboard. A key function of this dashboard is to show a specific parameter (i.e. House_buy, House_rent, Unit_buy or Unit_rent) in the two traces (choropleth map and bar plot) via the dropdown menu. My method is to create four layers with the argument visible=False for each trace; a minimal sketch of this setup follows below.
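To make that four-layer idea concrete, here is a minimal, self-contained sketch of the setup as I understand it (my own illustration, not the author's actual script, which lives on his github). The suburb names and prices are made-up placeholders, and the choropleth trace is omitted for brevity:

import dash
import dash_core_components as dcc  # pre-2.0 Dash imports, matching the plotly 4.x era of this post
import dash_html_components as html
import pandas as pd
import plotly.graph_objects as go

# Placeholder data standing in for the scraped Sydney median prices
df = pd.DataFrame({
    'suburb': ['Newtown', 'Glebe', 'Surry Hills'],
    'House_buy': [1.45, 1.60, 1.75],
    'House_rent': [0.65, 0.70, 0.80],
    'Unit_buy': [0.90, 1.00, 1.10],
    'Unit_rent': [0.50, 0.55, 0.60],
})
params = ['House_buy', 'House_rent', 'Unit_buy', 'Unit_rent']

# Four stacked bar-plot layers, one per parameter; only the first is
# visible at the start. xaxis='x2'/yaxis='y2' pins the bars to their own
# coordinate system, leaving the default one free for the map trace.
traces = [
    go.Bar(x=df[p], y=df['suburb'], orientation='h', name=p,
           visible=(i == 0), xaxis='x2', yaxis='y2')
    for i, p in enumerate(params)
]

fig = go.Figure(data=traces)

app = dash.Dash(__name__)
app.layout = html.Div([dcc.Graph(figure=fig)])

if __name__ == '__main__':
    app.run_server(debug=True)  # serves on http://127.0.0.1:8050/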
Then use the buttons feature of updatemenus to turn visible=True for a given parameter. Bar plot code Thus, there are four figure layers in each trace, and only one is visible at a time, depending on which button was clicked last. Drop down menu bar code Since maps are not plotted on a cartesian (x/y) coordinate system, which is the system the bar plot uses, I set up two coordinate systems in the dashboard, one for the map and one for the bar plot. As for the bar plot (trace2), its axes are assigned to xaxis='x2', yaxis='y2'. The map (trace1), by contrast, has its own features within Layout, which were assigned to the variable mapbox1; the numbers following mapbox and x/y are arbitrary. Having said this, you can assign as many coordinate systems as you want, just make sure you anchor the right system to its trace in the Layout. Choropleth map code Then, within the Layout settings, we adjust these two coordinate systems individually, such as the position of the traces in the dashboard via domain, the appearance of ticks on each axis via showticklabels, and the ascending order of the bars via autorange. Layout code After concatenating all the elements within fig=go.Figure, I assigned fig to dcc.Graph, wrapped everything up as a py file, and ran it. Boom, here comes my first interactive dashboard. Dash app implementation I should note that I only used a very basic Dash structure here; most of the code is still written in Plotly. One drawback of my method is that stacking all four layers onto the same trace may slow down the app, because all the data need to be loaded when the dash app is initialising. It would be more efficient to update the data in real time when a dropdown-click event happens. Hopefully, further learning of more advanced Dash code will find me a solution. Here are some resources for learning Dash: As always, I welcome feedback, constructive criticism, and hearing about your data science projects. I can be reached on LinkedIn.
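Continuing the sketch above (again my own hedged illustration, not the post's exact code), the dropdown buttons and the two coordinate systems might be wired together like this; the token string, domains, and map center are placeholder values:

buttons = [
    dict(label=p,
         method='update',
         # one boolean per layer: show only the trace whose name matches the button
         args=[{'visible': [p == q for q in params]}])
    for p in params
]

fig.update_layout(
    updatemenus=[dict(buttons=buttons, direction='down', x=0.02, y=1.08)],
    # the bar plot gets the right-hand part of the canvas on its own axes
    xaxis2=dict(domain=[0.65, 1.0], showticklabels=True),
    yaxis2=dict(anchor='x2', autorange='reversed'),  # flips the category order of the bars
    # the map keeps its own, non-cartesian coordinate system
    mapbox=dict(
        accesstoken='<your-mapbox-token>',  # placeholder: paste your real Mapbox token
        center=dict(lat=-33.87, lon=151.21),  # roughly Sydney
        zoom=9,
    ),
)

With method='update', each button rewrites the visible flags of all four layers in one go, which is the toggling behaviour described above.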
https://towardsdatascience.com/build-an-interactive-choropleth-map-with-plotly-and-dash-1de0de00dce0
[]
2020-01-09 01:57:17.671000+00:00
['Data Visualization', 'Dashboard', 'Dash', 'Plotly', 'Python3']
three reasons I’m not down with Upworthy and why they don’t care
First, if you care at all about digital content, how it works and how it’s shared, you need to study Upworthy and you need to try and understand what they do, how they do it, why, and why it works. To get a sense of all that, read this article from New York Magazine now. Did you read it? Good. Now here’s why Upworthy is not for me (and I’m not for them): I can’t stand the headlines, but mostly, I think, because they’ve been so often imitated by clones and spammers. This isn’t Upworthy’s fault, but those headlines have become a parody of themselves. This is referenced at the end of the above article, so I’m guessing this will be dealt with soon enough: Then he lets everyone in on his newest data discovery, which is that descriptive headlines — ones that tell you exactly what the content is — are starting to win out over Upworthy’s signature “curiosity gap” headlines, which tease you by withholding details. (“She Has a Horrifying Story to Tell. Except It Isn’t Actually True. Except It Actually Is True.”) How then, someone asks, have they been getting away with teasing headlines for so long? “Because people weren’t used to it,” says Mordecai. “Now everybody does it, and they do cartoon versions of ours.” (CNN, for instance, recently ended a tweet about a child-murder story with a ghoulish “the reason why will shock you.”) I’m too old and cynical for Upworthy (and I have a high opinion of myself) or, in other words, I’m not their target audience. Judging from the comments in this article, Upworthy is for hopeful, positive Millennials who live in the midwest — or something. I’m a jaded, cynical GenXer who lives in a major metropolitan area. Also, Upworthy seems to hate the media elites with the strange bizarro-world passion of a Fox News-watching Tea Partier. I read The New York Times daily. I don’t think Upworthy’s content elevates the discourse. This is simultaneously my toughest criticism to prove, and Upworthy’s toughest to refute. In other words, it’s subjective. The article states that Upworthy’s “mission is to ‘draw massive amounts of attention to the topics that really matter.’” That’s all well and good, and I can’t argue with the premise that the majority (but not all) of what Upworthy aggregates matters. However, what does “drawing massive amounts of attention” do for an issue if you’re not actually adding to the discourse around that issue? Raising awareness for awareness’ sake (even if it’s “massive amounts of awareness”) seems about as compelling to me as bumper stickers and awareness ribbons. They’re clever and touching when you first see them, but after a while they start to lose meaning. To me, this is the real danger for Upworthy. That, sooner rather than later, their message is going to be glossed over as we all move on to the next bright and shiny Internet thing that makes us feel all warm inside. All of that said, you have to admire Upworthy. They’re truly fantastic at what they do and they have real, genuine and completely uncynical zeal for doing it. Even with those horrible click-baity headlines that have spread like a plague across my Facebook newsfeed (I really, really hate those headlines), they — on balance — make the Internet a better place.
https://medium.com/david-connell/three-reasons-im-not-down-with-upworthy-and-why-they-don-t-care-633388c2e54e
['David Connell']
2016-08-20 13:06:13.101000+00:00
['Upworthy Headlines', 'Criticism', 'Upworthy Profile', 'New York Magazine', 'Technology']
Going Flat Out
JOURNAL Elves, suits, talent and old movies People are said to have talents, by which we mean something you are gifted with at birth and which you did not earn. Whereas a skill is something you hone over time, through hard work and determination. Aptitude might be considered the possibility that your talent, if honed, could become a skill. A talent is not always all that impressive. Sometimes it’s more like a party trick, like being able to whistle and hum at the same time, the type of thing you might do at a party if you’ve had too much to drink, and this party was in a rec-room in the 1950s. “I don’t hold with the theory that everyone is just using a little bit of his gray matter. I think we’re all going flat out.” — Calvin Trillin We are used to thinking of talents as things people perform, in a show or on a stage. A stupid human trick reserved for high school auditoriums and beauty pageants. We don’t often think of these things as being serious, or terribly worthwhile, and almost never worthy of a career. There are those who go on to a career in music, comedy, juggling or dance, but not many of us. For most of us, it ends right there, on that sad little stage, to the sound of quiet applause. If The Suit Fits I don’t own a suit. I used to own, and even wear, suits to work. Not every day, but on special occasions when a few colleagues and I would attend business meetings where I was called upon to present my ideas to a boardroom of people. I would get dressed in my business costume and perform my act. Even though I owned suits, really nice suits I might add, I never wore a tie. Even though I might have been wearing a $1500 Giorgio Armani suit, I was still wearing a $20 tee shirt underneath it. They were all black or charcoal grey. Crazy Like A Fox A central theme of a lot of comedy is that men are all big dumb animals, Neanderthal in nature, practically incapable of feeding and clothing ourselves, and entirely reliant on women to help us navigate life. It’s a funny trope, largely because it’s not true and it enables us to deflect attention away from what we’re really doing, which is running most of civilized society. Women like to imagine us as perennially helpless and yet somehow also defiantly oppressive. We don’t know how to do anything correctly, and yet somehow maintain control, presumably through sheer, brute strength. As if all life is just one big king-of-the-mountain contest and we’ve just been palming everyone in the face, pushing them back down the hill. It’s hard to reconcile this idea that men are dumb but maintain control merely through our ability to bench press twice our weight. Men are stupid but invented the bulk of the technology we use today, and wrote much of the literature, music and scholarly work. Fine, it’s true we largely kept women from doing anything for centuries, didn’t educate them or let them hold jobs of any importance, and we continue to pay them less, but none of that is because men are stupid. Cruel perhaps, but not dumb. Teach Your Children Well The big problem I have with the elf on your shelf is that it is a blatant attempt to teach children to be good, when I think the central message of Christianity is, and therefore the message of Christmas should be, to be kind. Be kind. That’s the message of love that Christianity proclaims it instills, and not the idea of being beholden to the Law.
The Old Testament and the Judaic tradition of living under the law were supposedly altered permanently with the birth and death of Jesus Christ, the Messiah that the Jews had been promised and had spent thousands of years waiting for. Forget the fact that they’re still waiting, or that Christians never really believed they’d been relieved of their duties under the law. Neither camp believes what they’ve been told. The Elf on the Shelf is a uniquely ironic perversion of the Christmas story, from the birth of a savior to a spy sent by his secular imposter. The fact that so many Christians feel that their faith tradition is in danger of being diluted by liberalism completely ignores the fact that they are entirely complicit in the commercialization of their own salvation story. The miracle of Jesus was the idea that you were no longer subject to the law but saved by Grace. Unconditionally loved as you were, and despite the fact that you’d done nothing to deserve it. You couldn’t earn it, you just had to accept it. We humans find this extremely difficult, which is pretty ironic. The Elf teaches kids to be good, not that they are loved. It teaches them to follow the rules, not to practice kindness, because the reward is for the individual. Santa doesn’t bring gifts for kids who are kind, but for kids who are good. And so when they grow up, they learn that if you make the rules, you will always be on the side of good, and kindness is for suckers. Writing I think of writing as a kind of magic. A bag of charms I’ve been developing, that one day I’ll be able to use to cast real spells, and not just the backyard shows for the neighbor kids. Old Movies As a young man in college, my movie knowledge was actually really limited because we had not been allowed to see a lot of the big movies of the day. Definitely no R-rated movies. I remember my sister got into trouble for seeing Grease at a drive-in. Grease. Consequently, I had this idea that all old movies were happy-go-lucky saccharine tales with snappy dialogue and cheesy romance. I knew that the modern movies from the 70s were different, but I still hadn’t seen many because we didn’t have cable and I couldn’t exactly rent Scarface and watch it in the living room. I started trying to catch up and watched a lot of classics such as Lawrence of Arabia and Twelve Angry Men and realized I’d missed a lot. Then I watched Gone With The Wind and hated it because I was expecting a happy ending, something along the lines of a Shirley Temple movie. Casablanca was also a bummer. I learned that a lot of those movies were pretty dark and twisted and not just Bible school movies. I still haven’t seen Citizen Kane.
https://medium.com/@davidtoddmccarty/going-flat-out-34edd1afae19
['David Todd Mccarty']
2020-12-29 18:38:19.707000+00:00
['Essay', 'Writing', 'Journal']
How Trump’s Dysfunctional Days in the White House are Helping Me Heal
Photo by Andre Hunter on Unsplash Donald Trump’s antics don’t bother me. Like. At all. That doesn’t imply approval of any sort. It just means that, to me, antics and drama and yelling and bullying are completely normal. Finding a way to live “normally” amongst chaos and lies definitely comes in handy. It is one of the many survival skills that served me well growing up. It still does. It’s a product of being born to a toxic mother married to a broken, enabling father. Enabling grandparents, aunts, uncles, the like. Surrounded by multiple generations of enablers in a constant and enduring state of denial. So, yeah — that’s pretty much how my sister and I grew up. Living with and terrorized by a version of Donald Trump. Sounds interesting, right? But to be clear, what Donald Trump does and says, on a day-to-day basis, is not actually that damaging. It’s just noise. Now the enablers that surround him? That is where the real damage is done. Donald Trump would not be so darn noisy and disruptive if it weren’t for his enablers. A drum can only beat for so long without others to continue the cadence. Add in some cymbals and now there is quite a din. And a tambourine! Now we’re talking. The same is true for the dysfunctional narcissistic family. The initial wound may be inflicted by the toxic parent, but the real and lasting damage, the deep, deep wounds, are opened and reopened by the enablers. Intentionally or unintentionally, enablers give their blessing for the toxic person to continue inflicting harm without consequence. The enablers might even bring along the cow bell. Most recently I was struck by how similar the dynamics of Donald Trump and his enablers are to those of my own family. So it isn’t shocking that “important” people in our country cast their morals and ethics aside and do whatever it takes to keep Trump from exploding. Exploding on them, I mean. So do family members of a toxic person. The enablers, as a whole, don’t want any part of that. Trump can explode on anyone without concern, as long as he is not exploding on them. So could my mother. But in my family, my mother exploded on her children. Unchecked. But who cares what happens to anyone else, especially those without the power to defend themselves? It’s abhorrent. On the other hand, Trump and his enablers are teachers. For those unfamiliar with the characteristics and consequences of narcissistic family abuse, witnessing the crazy-making behavior over and over again in the news coverage should get you up to speed rather quickly. Donald Trump and his enablers really do a terrific job bringing the dysfunctional family script to life. The dynamics also point out that people behave similarly whether they hold public office or are family members not brave enough to stand up to the toxic person. Denial runs rampant here, and as witnessed every day in life as we know it, the enablers prioritize the emotional status of the toxic person above and beyond the physical and mental health of the toxic person’s target. They enable the toxic person and avoid doing the right thing. The hard thing. Enablers refuse to take a side. But… they must. “We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented.” — Elie Wiesel There are people that actually do take sides. And action. I just don’t know any. So it can be comforting to bear witness to Trump and his enablers acting out this cycle over, and over, and over again without fail. Because it promotes awareness.
It then becomes clear how the toxic person bullies everyone into submission. The enablers empower the toxic person. Everything and everyone revolve around the unstable and unpredictable moods of the toxic person. This is not safe. Life is dangerous. We need to talk about how narcissistic abuse happens in families. And in the public eye. It is pervasive. It destroys the spirit. When children are victims, it is incredibly damaging. People need to learn about narcissistic abuse and how it affects EVERYONE in a family. We need people to learn to stand up and stop the cycle of abuse. Although it sounds ridiculous, I’m not ready for the Trump show to end. Right now I can depend on a nightly — at least week nightly — opportunity to see what it looks like when people challenge the bully. Challenge the enablers. We need to help the toxic person, not empower them. We need to educate the enablers, not let them off the hook. So if you need a refresher, just watch the Trump election news coverage. It’s all there. Pick your favorite channel. Please also watch someone stand up for what is right. Watch Nicolle Wallace have America’s back. She has all our backs. No matter what.
https://medium.com/@elizabethrustemeyerdrake/how-trumps-dysfunctional-days-in-the-white-house-are-helping-me-heal-2a0019305237
['Elizabeth Rustemeyer Drake']
2020-11-25 20:10:56.981000+00:00
['Family History', 'Mental Illness', 'Alcoholism', 'Narcissistic Abuse', 'Narcissism']
Entropy Tinted Glasses
Entropy Tinted Glasses Photo by Elias Schupmann on Unsplash
my eyes are the reverse
of rose tinted
seeing only flaws
expecting the worst
even from those who love me best
what happened to me growing up
to tilt my axis of esteem
taint my worldview this way
I don’t know
but when I hear family
I think enemy
and fear
https://medium.com/being-known/entropy-tinted-glasses-f2b9ad3b7f10
['Doug Ecks']
2020-12-24 19:31:45.169000+00:00
['PTSD', 'Trauma', 'Self Examination', 'Poetry', 'Family']
Why is Data Science Losing Its Charm?
Opinion Why is Data Science Losing Its Charm? Data science was once the most loved career option, but the trends are changing. Earlier, every computer science student wanted to pursue a career in the data science field. The field also attracted students from many other educational backgrounds. While the hype around data science still exists, the job profile isn’t readily available to all. The past two decades have been a revolution for the data science community. The developments in the past two decades are phenomenal and have taken everyone by storm. The applications of data science have spread across all industries and have greatly increased the demand for data scientists. But the trends are changing nowadays. The demand is no longer the same as before. Even where there is demand for data scientists, people either lack the skill set or the experience. I have tried to list all the potential reasons I can think of when I see the community losing its charm. 1. People are not able to start their careers in this field. Once out of university, people want to kick-start their careers as data scientists, but the job offerings require a minimum of 2–3 years of experience in most cases. People are not able to start directly as data scientists and have to begin their careers in different profiles. Companies are not ready to invest their time in new incoming talent; instead, they want people with an excellent skill set and experience in the field. While almost all the tech companies have their own data science departments, the others, which do not have one, need a person with a lot of experience in this field to start one. The only way I can think of that could help is doing internships while they study, to gain experience and meet the demands of the companies. 2. People aren’t aware of the difference between Data Analyst, Business Analyst and Data Scientist Another major reason is that data science enthusiasts nowadays do not know the difference between the different job profiles available in the field. All of them want the ‘Data Scientist’ title without knowing what the actual work of a data scientist is. They mistakenly consider Data Analyst, Business Analyst and Data Scientist to be similar profiles. Without knowing what they want to work on, they apply to roles they aren’t a good fit for and end up empty-handed. 3. People find Data Science too easy People start directly with learning algorithms and ways of tweaking the data, but what they do not consider is the math behind the algorithms. With average programming knowledge and knowledge of machine learning algorithms, they think they are ready to face real-world problems. People usually ignore statistics, the actual hard work, just because they do not find it interesting. Data science is one field where development isn’t stagnant. Natural language processing has seen some massive developments in the past 2–3 years. One has to keep up to date with the state-of-the-art models. People also find data science easy because they haven’t worked on real-life data. In all the years they have spent learning, they have worked on structured data or pre-processed data that was made available for learning. In the real world, on the other hand, almost 99% of the data is unstructured. Data scientists need to spend most of their time pre-processing the data so that they can extract something meaningful from it. 4. AutoML is making the road to landing a job even tougher.
When tech giants Google and Microsoft launched AutoML, it shook aspiring data scientists. Companies’ interest and curiosity in AutoML grew, while data scientists feared losing their jobs. AutoML automates the process of applying machine learning to datasets. AutoML can preprocess and transform the data; it can cover the complete pipeline from working on raw data to deploying machine learning algorithms. AutoMLs are good at building models, but when it comes to preprocessing, they cannot outshine humans. The major work of the data scientist lies in pre-processing the data. It is clear that, as of now, AutoMLs cannot replace human data scientists. Still, the fact that AutoMLs reduce costs cannot be overlooked. The average annual salary of data scientists in the US is around $120k, whereas the annual cost incurred by Google and Microsoft AutoMLs is somewhere around $4k to $40k. That said, the effectiveness of data scientists at pre-processing data cannot be denied, because real-world data is highly unstructured and requires a lot of pre-processing. There is so much to learn, and no one is willing to do the hard work. It is difficult to start with the basics and excel in this field. It takes a lot of time, and people need to be patient. There is a lot of scope in this field, but the lack of people with the actual skills needed is snatching the title of most promising job away from data science, and people are walking away from it.
https://towardsdatascience.com/why-is-data-science-losing-its-charm-3f7780b443f5
['Harshit Ahuja']
2020-08-24 03:48:30.462000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Jobs', 'Data Scientist']
Sprawlball: A Visual Tour of the New Era of the NBA — Kirk Goldsberry
Sprawlball: A Visual Tour of the New Era of the NBA — Kirk Goldsberry A compelling narrative of the NBA’s recent past, troubling present, and possible future Early in my college years, I read Michael Lewis’ Moneyball. I was never the same. And I mean that literally. There are only a few books that changed my intellectual life, and Moneyball was one of them. My love of baseball combined with my tendency toward analytics made me seek out other books to look at baseball in a different way (Baseball Between the Numbers and Jonah Keri’s The Extra 2%). That analytic approach soon extended to other arenas such as crime and even my readings in politics and history (s/o Bill James’ great Popular Crime, Nate Silver’s FiveThirtyEight blog-then-website, and John Fea’s Was America Founded as a Christian Nation?). The common thread through all of this is a tendency to disregard previous narratives and look at the facts of a case. It became my passion across all disciplines. As my love of basketball began to grow, I kept looking for something to read that filled this Moneyball space for the NBA, but there was nothing. For years, there was a dearth of book-level analysis of the “Morey-ball” era of the NBA (so called because Houston Rockets General Manager Daryl Morey has taken analytics much farther than anyone else in his team-building and philosophy). I know the literature wasn’t there, because I was looking for it. Michael Lewis almost wrote the book himself, but that became the terrific first chapter of The Undoing Project, where Lewis examines Morey and his impact on basketball. (The Undoing Project as a whole is about Daniel Kahneman and Amos Tversky, the psychologists who uncovered a lot of the principles that Lewis was analyzing in Moneyball. This book is amazing and could not be more in my wheelhouse.) Well, now there is a Moneyball for basketball. Kirk Goldsberry’s Sprawlball is a portrait of the NBA in this new era, and it is just terrific. By saying it is Moneyball for basketball, I don’t mean that it has the deep investigative reporting and style that characterizes Michael Lewis. And I’m glad that’s not what Goldsberry was going for. I also don’t mean that Sprawlball is an unflinching praise song for the analytical era and what it is doing to the game, as Moneyball was for baseball. What I do mean is that Sprawlball is a worthy successor to all the books (not just Moneyball) in the analytic genre which Michael Lewis and Bill James led. What is Sprawlball about? In the words of Goldsberry (a necessary Twitter follow): “It’s about this s***, man.” It’s about the disappearance of the midrange shot for the exact reason you see in that image: it’s not efficient. It’s about how 3-pointers have become more and more prevalent and big guys have suffered. And it’s about how to fix it. But that’s getting ahead of myself, because before his extended take on rule changes in the NBA, Goldsberry spends almost 200 pages in a wonderful examination of the current game: its beauty and excitement but also its repetitiveness and predictability. He does this both in graphic form (his trademark shot charts and other graphic representations are essential to this book, as well as the illustrations by Aaron Dana) and in narrative form.
The narrative takes shape with each chapter highlighting a player at a different position (a subtle buck against the trend of position-less basketball), starting at point guard with Steph Curry and his transformation of the game, then analyzing the impacts of James Harden and LeBron James. Goldsberry then turns to how this new game has impacted big men by examining the evolution of Kevin Love (and the domination of the “stretch 4”) and then the dueling philosophies of “centers” Anthony Davis and Draymond Green. The best image of the entire book, and not just because I’m a biased Spurs fan. (Credit to Aaron Dana) It is only after this deep discussion of the game that Goldsberry dives into possible fixes. Only after he has made a compelling case that, although basketball is still beautiful and fun and compelling, we’re not even as far down this analytical road as the numbers say we should be. That means that if you like the status quo, you should be a proponent of changes to the rules as well. Because there will be an even more drastic move towards the 3-pointer at the expense of everything else for years to come. Goldsberry makes that clear. And while I won’t give away any of his possible fixes, they are highly compelling. Goldsberry is true to the spirit of the analytical revolution while also being critical of where it leads when sports don’t evolve with the analytics. Baseball is just now realizing this, as its on-field product has suffered since the analytical revolution. Basketball may find itself in an even worse position without changes to its court structure and/or rules. Goldsberry illustrates (literally and figuratively) this concept wonderfully, and that’s why his book is a necessary buy for any NBA nerd or even non-nerdy fans out there.
https://medium.com/park-recommendations/sprawlball-a-visual-tour-of-the-new-era-of-the-nba-kirk-goldsberry-80e083931989
['Jason Park']
2019-06-06 10:17:31.512000+00:00
['Sports', 'Basketball', 'Book Review', 'NBA', 'Books']
Keyframe Confusion
Today I continued the adventure of working on The Great Fleece, the Stealth & Cinematography game. I placed the actors (a step called blocking) and, through the course of completing the tutorials, reviewed the director’s notes on the shots we’re supposed to get. I set up two CineMachine virtual cameras, each positioned to capture one of the director’s desired angles. View from Over-The-Shoulder virtual camera. I then created the new animations in the timeline: an Over-The-Shoulder shot and what’s called a mid shot, which is a shot from the waist up. I manipulated the two shots in the timeline window so that they transitioned from the first to the second rather seamlessly, and I was really impressed with Unity’s ability to “meld” the scenes together by simply adjusting a slider. Just by overlapping the scenes, Unity was able to smoothly pan from the OTS shot camera position to the mid shot position. Mind-blowing! What’s more, instead of just having a static camera rolling and capturing the character’s animation, you’re able to move the cameras and pan around the scene in whatever way you see fit. View from Mid shot virtual camera. I captured the first desired animation by panning the OTS camera up and over the shoulder of the main character. The second shot (so far) is just static footage of the main character from another angle. And then here’s where I encountered the problems. This new panning animation (like all animations) can be manipulated in the Animation window of the editor and is stored as multiple keyframes for various positions at various times. The fatal mistake I made was to accidentally delete my very first keyframe and then UNDO/REDO so many times, starting down different paths, that I am so far unable to restore that very first starting keyframe. I’m going to be conducting some research on keyframes to see if I can recreate this initial keyframe on my own, and if that doesn’t work, I’ll be starting this section over. Stay tuned for the final verdict.
https://medium.com/@kristintreder/keyframe-confusion-33373366ee6e
['Kristin Treder']
2020-12-24 21:12:14.975000+00:00
['Unity', 'Unity3d', 'Learning To Code', 'Cinematography']
Why is SHM so important?
Why is SHM so important? Structural health monitoring has the potential to completely change the way we manage the built environment. Encompassing everything from schools and hospitals to office buildings and tunnels, the built environment needs to be properly maintained to ensure safety and reduce costs. Structural health monitoring provides a way to achieve both objectives by harnessing the Internet of Things. It involves the addition of dynamic wireless sensors within structures, enabling engineers to monitor aspects of their health in real time. In theory, this provides an unprecedented degree of awareness for city authorities and private companies, allowing them to allocate resources and prevent damage before it results in financial and human costs. So how is SHM being implemented in real-world scenarios, and is it really changing the way we run cities? Using Structural Health Monitoring to Ensure Healthy Bridges River crossings are inherently vulnerable structures. As recent incidents like the Genoa bridge collapse have shown, weaknesses within these structures can cause concrete to crack, leading over time to catastrophic consequences. However, structural health monitoring offers a viable solution. For instance, Sandia National Laboratories and Structural Monitoring Systems partnered in the US Midwest to implant eight separate IoT sensors in the metalwork of a bridge identified as at risk. These sensors are absolutely tiny and take the form of Teflon vacuum pumps. If a crack appears in the metal underneath them, it prevents the sensor from establishing a vacuum, triggering the alarm. This informs engineers that the structure of the bridge may be at risk well before cracks become visible to the naked eye. Moreover, the sensors are placed in locations 100 feet above the road deck — the kind of place that engineers often struggle to access. With lithium-ion battery-powered sensors in place, they can obtain even better analytical results without needing to ascend to assess the bridge’s metalwork. This is just one among many examples of how bridge maintenance is being enhanced by structural health monitoring. Many more projects are operating or under construction, such as the Queensferry Crossing in Scotland, which will include more than 2,000 wireless sensors. From pylons and foundation stability to how elevated decks perform under high traffic loads or high winds, IoT sensing is set to transform how we manage bridge safety, but it also has many other potential applications. Making Smart Cities Safer As Internet of Things technology develops and becomes mainstream, cities all across the world are becoming “smarter”. Sensors are being deployed to monitor crime, traffic, economic factors, and water and energy consumption. But they are also being used in sophisticated building monitoring systems. Portland offers a great example of how this works in practice. The Oregon city has embraced smart technology and is working with AT&T to implement potentially revolutionary new monitoring tools. This includes an array of LTE (Long-Term Evolution) sensors which use wireless tech to monitor the structural integrity of key buildings. These sensors should allow city engineers to measure the tilt of high-rise buildings, and to detect cracks as they develop. Thanks to trigger alerts, they can also set up events to allocate resources when maintenance is urgent.
Portland’s experiment is at an early stage, but their initiative is being emulated by forward-thinking urban governments and companies across the world. The reasons are easy to understand. Aside from promoting safer cities and preventing accidents, smart sensors make it possible to manage precious public funds more efficiently. Instead of routine maintenance which may not be required, work can be targeted at vulnerabilities as they occur. And there may also be productivity improvements for city engineers. Find a Structural Monitoring Solution for your Needs At NEXT Industries, we are skilled in unlocking the potential of the internet of things. If you are involved in the construction or management of critical structures and want to investigate the benefits of structural health monitoring, we will be happy to discuss what this technology can deliver. Just visit the application pages of our website and get in touch to discover what smart structural management can do for you.
https://medium.com/@info_77459/why-shm-is-so-important-9a04fdf70b24
['Massimiliano Bellino']
2019-03-27 12:20:10.688000+00:00
['IoT', 'Shm', 'Monitoring', 'Smart Cities', 'Bridge']
The Critics Of #MeToo And The Due Process Fallacy
For many of the victims who posted their experiences as part of #MeToo, their options were internet justice or no justice at all. The most persistent criticism of the #MeToo movement is that advocates have abandoned due process in favor of trial by the faceless internet mob. Critics accuse the women leading the movement of pursuing “vigilante justice” or worse, a witch-hunt. These critiques have dogged #MeToo from the beginning, and now that the backlash to the movement has reached a crescendo, we’re about to hear a whole lot more. But don’t listen. Social media is exactly the right place for #MeToo to play out. In fact, it’s the only place it ever could. The frequent invocation of due process ignores just how inadequate the American legal system is for protecting women against sexual violence and harassment. It is precisely because the courts of law and other traditional avenues of recourse have failed women that they’ve turned to the internet and the court of public opinion. Due process sounds great in theory. Zephyr Teachout, former Democratic candidate for the U.S. House of Representatives in New York, defined it as “a fair, full investigation, with a chance for the accused to respond” in her recent New York Times op-ed on this topic. It’s hard to argue with that. The concept of due process is a fundamental pillar of the American justice system and one that we pride ourselves on. The problem with #MeToo—according to its detractors—is that women have bypassed the courts, where due process rights apply, and gone directly to the public to seek out justice. The public, in turn, has rushed to judgment. Critics argue that justice can only be served by submitting these claims through the formal legal systems that guarantee basic fairness to the accused. We know from experience, however, that the systems currently in place to deal with complaints of sexual harassment and assault have systematically failed victims and have allowed far too many perpetrators to continue their abuse unchecked. This is true of the nation’s criminal and civil courts, forced private arbitrations, HR department investigations, and campus tribunals. There’s no great mystery as to why. We have shorthand for these kinds of impossible-to-prove claims: “he said-she said.” The phrase refers to the fact that all too often the only evidence in sexual harassment or assault cases is the victim’s word against the abuser’s denial. The incident of alleged abuse almost always takes place behind closed doors, so there are no other witnesses. With so little to go on, these claims almost never result in a successful verdict. And while no database tracks the outcomes of employment discrimination cases nationwide, a review of a random sampling of cases by Laura Beth Nielsen, a professor at the American Bar Foundation and Northwestern University, revealed that only 2% of plaintiffs win their cases. Even when there are eyewitnesses, much of the mistreatment women are complaining about falls short of the legal definition of sexual harassment. There is a big gap between what the public believes is appropriate workplace behavior and what is considered egregious enough to warrant discipline, dismissal, or legal sanction under our existing guidelines and laws. For example, did you know that your supervisor grabbing your butt at work is not enough, on its own, to sustain a claim under Title VII, the federal law that prohibits workplace sexual harassment?
The Equal Employment Opportunity Commission (EEOC) defines sexual harassment as “unwelcome sexual advances” that “unreasonably interfere with an individual’s work performance,” or that create a hostile atmosphere at work. Under Meritor Savings Bank v. Vinson, the Supreme Court held that such conduct must be “sufficiently severe or pervasive” to “create an abusive working environment.” As recently as 2014, a federal court dismissed the claim of an employee whose boss grabbed her butt twice in one day in front of co-workers because, according to the judge (a woman, no less), it was neither severe nor pervasive enough to offend the average woman. Laws protecting women from sexual misconduct are much narrower than the commentators who want to redirect all these claims into the courts seem to realize. Annika Hernroth-Rothstein argues in National Review that “[i]f sexual harassment is a crime, it should be fought not with hashtags but with the full force of the law” in a piece titled, “#Metoo and Trial by Mob.” Sexual harassment is not, in fact, a crime. Title VII imposes only civil liability — i.e. money damages — on employers in cases of workplace misconduct. Further, only employers with 15 or more employees are covered. Employees of small businesses have no federal protection. The same goes for freelancers employed as independent contractors and unpaid interns. Some state and local laws are more generous, but these are few and far between. Sexual harassment claims against anyone but employers and, under Title IX, federally funded schools are not covered at all. Even if your claim is covered and meets the legal definition of harassment, there are still multiple barriers to seeking recourse through the courts. First, going through the formal legal system costs money. There are court fees and lawyers to pay, in addition to the time off work required. Second, sexual harassment claims are subject to statutes of limitations — meaning that victims cannot bring these claims after a certain amount of time has passed. In many cases, these time limits are very short. The federal statute of limitations under Title VII, for example, is only 180 days, or roughly six months. The New York State limit is three years. Many of the claims of sexual harassment—and worse—that are coming out now as part of #MeToo are many years, and in some cases, decades old. Victims of sexual harassment often have more pressing needs in the immediate aftermath of the experience than filing a lawsuit, including dealing with the resultant trauma and, all too frequently, job loss. For these men and women, there is nowhere else to go but the internet to air the grievances that have long been buried. The calls for due process are often tied to calls for reform of the existing laws. Reforms can take years to pass, and even when they do, they almost always apply prospectively to new claims, not retroactively. Thus, for many of the victims who posted their experiences as part of #MeToo, their options were internet justice or no justice at all. Which would you have had them choose? Social media has no barriers to entry. It is free and open to all. The only thing women need is an internet connection and the guts to come forward. Unlike the federal courts, which are bound by the strictures of a nearly 50-year-old law, the public has shown great willingness to consider the whole wide range of women’s stories that run the gamut from rape to a squeeze on the waist during a photo op.
Even better, social media has allowed for a dialogue among diverse voices about what kind of behavior is acceptable and desirable in the society we want to live in, rather than just what is legal or illegal. The recent engagement around Babe’s account of a young woman’s date with Aziz Ansari is the perfect example. That article engendered some of the most thought-provoking discussions on today’s sexual politics despite the general consensus that the behavior described didn’t break any laws. One of the unique advantages of social media that makes it particularly well suited to this movement is the incredible power of hashtags to connect women with similar stories. The men who have been brought down by the #MeToo movement have not been felled by individual women tweeting out isolated claims. In each case, consequences have been visited upon abusers based on the strength of a large number of women coming forward with often nearly identical allegations that show a pattern of misbehavior. Such is the power of #MeToo that it can aggregate the stories of women who have never met and who are separated by decades. Hashtags allow for the revolutionary possibility that sexual harassment will no longer be characterized by “he said-she said” allegations, but, as illustrated poignantly in a recent New York Times ad, by “he said-she said, she said, she said” cases, ad infinitum. (Though, of course, even one “she said” should not be dismissed.) For all its utility, the role social media played in the #MeToo movement has also been overstated. The stories that brought down industry giants like Harvey Weinstein, Louis C.K., Mark Halperin, and others did not originate on social media platforms, but rather in the pages of the nation’s finest newspapers. The allegations were thoroughly vetted by investigative journalists bound by a code of ethics that provides its own kind of due process. Journalistic ethics require corroborating sources before going to print with a story containing serious allegations such as sexual harassment. Furthermore, journalists always seek comment from the accused, giving them an opportunity to speak out on their own behalf. Critics’ insistence on due process presupposes an answer to a still-open question: What is “the point” of #MeToo? The courts are best at meting out punishment for violations already committed. What if #MeToo isn’t about punishment, or, more to the point, what if it’s about more than punishment? What if it’s about changing the system prospectively, not seeking redress for the past? What if it’s about prevention? The author of the Shitty Media Men list wrote that her goal was to warn others about men in her industry so they could protect themselves. What if #MeToo is about catharsis and about having a long overdue conversation where we all get to have a say? What if there are a multitude of points, and very few of them are well served by the courts? The reflexive outcry about the need for due process from #MeToo critics is not well considered. It’s time we stop telling women where, when, and to whom they can tell their own stories. If #MeToo is about anything, it’s about the end of the era of women and other victims suffering in silence.
https://medium.com/the-establishment/the-critics-of-metoo-and-the-due-process-fallacy-92870c87c0cd
['Becky Hayes']
2018-02-16 16:00:58.236000+00:00
['Sexual Assault', 'Metoo', 'Society Politics', 'Justices', 'Twitter']
RSG DESIROUS TO HELP YOUNG PEOPLE GROW THEIR BUSINESSES INTO LARGE CORPORATIONS
RSG DESIROUS TO HELP YOUNG PEOPLE GROW THEIR BUSINESSES INTO LARGE CORPORATIONS The Rivers State Government says it is desirous of helping young people develop their businesses into large corporations. The State Deputy Governor, Dr. Mrs. Ipalibo Harry Banigo, stated this during the Open Day Session of the Start-up Port Harcourt Week, at Woji Road, GRA, on Saturday, 31st August 2019. According to the Deputy Governor, the Government has provided monthly revolving loans for young entrepreneurs and small business start-ups, and she urged more young people to key into the program in order to grow their businesses. “We know that the future of our City, State and Nation lies with young people like you, and soon we will be retiring for good and we expect you to take over and do much better,” she stressed. Dr. Banigo expressed regrets about the negative publicity targeted at the State, which according to her is mostly false and politically motivated, because there are lots of good things coming out of Rivers State. She urged the organizers of the Start-up Port Harcourt Week to hold conversations on how to shape the future of our great city, Port Harcourt, and how to unlock its potential beyond oil. The Deputy Governor said the Rivers State Government’s ICT Department has Tech Creek, an innovation hub where young people can come together to brainstorm and explore the digital space in order to take advantage of the digital economy. She said young people should disabuse their minds of the erroneous belief that there is no future if they do not secure a job immediately after graduation, noting that with their brains there are lots of things they can do without waiting for hand-outs from people. Also speaking, the convener of the Start-up Port Harcourt Week, Bereni Fiberesima, said Start-Up Port Harcourt is an annual event that brings together the tech start-up ecosystem, academics and policymakers to connect, inspire, showcase, explore, launch products and participate in conversations that will shape the city’s future and unlock the potential of the digital and creative economy. According to him, young people across the world are coming up with solutions that have turned into successful corporations, and we believe we also have the potential to turn our ideas into successful corporations. Owupele Benebo Head of Press Unit, Deputy Governor’s Office Port Harcourt. Saturday, August 31st, 2019
https://medium.com/@thenewriversstate2015/rsg-desirous-to-help-young-people-grow-their-businesses-into-large-corporations-d48d5a36f3b0
['Rivers State']
2019-09-01 19:26:31.233000+00:00
['Port Harcourt', 'Entrepreneurship', 'Start Ups', 'Start Up Port Harcourt', 'Startups Africa']
AYS Daily Digest 07/01/20— Council of Europe Says Danish Ellebaek Center “Unsuitable for Humans”
AYS Daily Digest 07/01/20 — Council of Europe Says Danish Ellebaek Center “Unsuitable for Humans” Ellebaek Prison. Photo credit: Refugees.dk Feature — Danish Centers for Asylum Seekers Condemned by Council of Europe The Council of Europe’s Anti-Torture Committee published a report from its April 2019 visit to Danish prisons. Some of its harshest words were for the two detention centers for asylum seekers, Ellebaek and Nykobing Falster. Hans Wolff, the leader of the delegation, did not mince words and threatened Denmark with action in the European Court of Human Rights if it ignores the committee’s recommendations. The committee visits many countries, and one can imagine in which countries we would normally find critical conditions, but I must say when it comes to migrant detention centers, Denmark is — very surprisingly — one of the worst… Either Denmark must make some very fast and serious changes on all the areas we have mentioned in the report. Or they must close down Ellebaek and move the detainees to a place with better facilities. Conditions in Ellebaek The chief complaint of the committee was with the center’s material conditions, which they called “unacceptable.” The facilities are prison-like, even though the people there have committed no crimes and are mostly rejected asylum seekers awaiting deportation. They should have living conditions “approaching normality” since they are only in administrative detention. Conditions in Ellebaek. Photographer: Ole Jakobsen, TV2 Many rooms and common areas desperately need renovation and have peeling paint, graffiti, or exposed electrical wiring. Some rooms for men in Ellebaek lacked even basic furniture, while others had furniture that was damaged. One of the biggest problems for people detained in both centers was lack of freedom. Female residents of Ellebaek receive only 30 minutes of time outdoors per day, when more is needed to maintain physical and mental health. The CPT recommends at least two hours of outdoor access per day — four times what they currently receive. People are also denied access to their cell phones, despite the fact that people in similar facilities across Europe have the right to a mobile phone, which is vital to keeping in touch with family and friends. Having a mobile phone is punishable by 15 days in solitary confinement. One of the most shocking details is that people on suicide watch are forced to remain in observation rooms completely naked because the centers do not have rip-proof clothing. Many of these problems are caused by a lack of staff. Some guards blame the lack of outdoor access on a shortage of staff to provide supervision, especially female staff. People being detained often have to translate for other detainees, since professional interpreters are not available, leading to misunderstandings and conflict. The existing staff are not trained to work with people on the move. The report also asked the Danish government to consider detaining fewer people in response to overcrowding in holding facilities, instead of scrambling to open new centers. Political Responses The Socialist People’s Party, the Social Liberals, and the Red-Green Alliance called on the ruling Social Democratic government to improve conditions very quickly, as expressed by spokesperson Soren Sondergaard. We cannot accept that Denmark violates our international human rights obligations. It is absolutely crucial that something is done so that no such criticism will be raised again.
However, this was not a unanimous sentiment among Danish politicians and people. Some say that the people being detained brought these conditions on themselves by failing their asylum applications and not returning to their countries voluntarily. Others doubt the report’s findings. The Danish People’s Party’s spokesman Peter Skaarup claims the committee was exaggerating. However, the report’s findings were confirmed by a former prison officer at Ellebaek. He agreed that the buildings are in much worse shape than other prisons in Denmark and that the center is understaffed, especially in the women’s section. The extensive, 85-page report and the independent confirmation make the unacceptable conditions very clear. Whether the Danish government will take action is another story, considering the hostile environment towards asylum seekers that the government is intentionally creating.
https://medium.com/are-you-syrious/ays-daily-digest-07-01-2019-council-of-europe-says-danish-ellebaek-unsuitable-for-humans-9a9d8d723bb7
['Are You Syrious']
2020-01-08 09:31:27.875000+00:00
['Denmark', 'Digest', 'Migrants', 'Human Rights', 'Refugees']
We’re All Being Judged
https://medium.com/illumination/were-all-being-judged-f23055695a5
['Terry Mansfield']
2020-12-19 23:38:33.378000+00:00
['Homeless', 'Compassion', 'Judgement', 'Poetry', 'Poem']
Practicing and Experiencing “The Power of Now”: A Restless Christmas Eve. 24/12/2020, Day Seven. Today is a day very much worth recording, because today’s practice failed! I ran into my first real difficulty since starting the practice: in that moment, I had no way to maintain awareness. Outline of the incident: I took the initiative and asked him about our relationship; he replied with a single sentence and then “disappeared.” Every time he suddenly stops replying, I feel mentally tense and anxious, and then I forget that I am supposed to stay aware. …
https://medium.com/@tsechienoh/%E5%BD%93%E4%B8%8B%E7%9A%84%E5%8A%9B%E9%87%8F-%E5%AE%9E%E8%B7%B5%E4%B8%8E%E4%BD%93%E9%AA%8C-a22c610f6bd9
['句点']
2020-12-25 11:13:32.534000+00:00
['心灵觉醒', 'Spiritual Growth', 'The Power Of Now', 'Spirituality', '当下的力量']
Southwest
Dark and frigid before dawn,
Glazed sidewalks shimmer
Like jagged glass under the streetlights.
Her bags are packed and loaded into the trunk.
Soon we’re off and rolling –
I-87 southbound,
Hands laced together in my lap.
Forty minutes later
We arrive at our points of departure.
Flight 557 now boarding!
Hot exhalations coil in the air
Above us like smoke –
Inappropriate goodbyes.

Thanks for reading.
https://medium.com/illumination-curated/southwest-4c870d6c5f0d
['Mark Alden']
2020-11-24 14:55:58.128000+00:00
['Illumination Curated', 'Poetry On Medium', 'Poems And Stories', 'Poems On Medium', 'Poetry']
You Are Marmite And So Am I
Janine Coombes on stage at #2020Sorted. Photo by the author. How do you feel about people not liking you? I’ve never felt particularly likeable. I’m geeky. I’m not easy to get to know. And I spend far too much time in my own head. One of the remarkable things I’ve discovered in the last couple of years is how many other wonderfully geeky people there are out there, who also thought they didn’t fit in. One of the hardest parts of running my own business is standing up in front of people and saying: “Look at me! I have something for you.” Whether it’s posting on social media or talking to people face to face, putting myself forward feels awkward. What if people don’t like what I’m saying? I saw Janine Coombes speak at #2020Sorted yesterday. She is the genius behind The Secret Marketing Show which brightens up many a LinkedIn feed. As you would expect, she gave a very funny and insightful talk about building an audience through friendship. But the section of her talk I took away (apart from LinkedIn being like standing in the kitchen at parties) was this: “We’re told to turn up on social media as ourselves. Be authentic. Be Marmite. But when they say ‘be Marmite’ they’re not saying be something else to be Marmite, they’re saying you’re already Marmite. Some people like you, some people don’t like you.” Photo by Chris Lawton on Unsplash This point was emphasised by Rob and Kennedy later on. Kennedy told a story about how he had sworn in an email and someone had replied saying they were offended. Kennedy wrote back and explained that: “I’m really sorry that you were offended but I’m going to be honest, if you don’t like the use of profanity here, you’re gonna really hate it in my content and in my products. If we don’t get along, if you don’t like the way that I use slang about emails, like if we ever met in real life, you’re not going to like me either.” I’m embracing my Marmite status (not literally, that would be sticky). I’m going to be showing my face more, talking more about how I see things, and not worrying about who is switching off.
https://itsyourturnblog.com/you-are-marmite-and-so-am-i-66a7d934949d
['Rachel Extance']
2019-11-18 03:02:15.460000+00:00
['Life Lessons', 'Personal Development', 'Authenticity', 'Marketing', 'Entrepreneurship']
Tackling Burnout Among Tech Workers
Stress is a prevailing issue for us as humans. In adult life, work is often its most common trigger. It can take the form of an overwhelming amount of work, deadlines, or even an unfriendly workplace. If someone has no energy left to handle the endless stress they face, burnout is inevitable.

The pressure is undoubtedly higher in the fast-paced, highly competitive environment of the tech industry, because tech workers are often challenged to build something that has never existed before. There is no specific handbook to follow, only briefs and ideas to turn into tangible digital products. Building those extraordinary things takes more than extra energy, so mental and physical drain is inevitable. However, burnout and stress aren’t uncontrollable. This article will discuss the signs of burnout and how to overcome it. It is best suited to individuals working in the tech industry, and to those at the managerial level who want to better handle a team of different personalities and backgrounds.

What is burnout?

Burnout is a problem related to life management. Specifically, it is a mental or physical breakdown due to stress, most commonly caused by overwork, and experienced as a state of vital exhaustion. It is a prolonged response to continuous situational stress on the job. In May 2019, the World Health Organization added “workplace burnout” to the 11th revision of its International Classification of Diseases (ICD-11). It is classified as an occupational phenomenon, not a medical condition. According to the ICD, burnout is a syndrome conceptualized as resulting from chronic workplace stress that has not been successfully managed. Furthermore, the WHO defines burnout as feelings of energy depletion or exhaustion, which can increase mental distance or feelings of negativism or cynicism related to one’s job, and eventually reduce professional efficacy. Burnout therefore generally comes down to three main elements:

Exhaustion. An intrinsic element: an individual feels pressured by their tasks and eventually says, “I can’t take it anymore.”

Cynicism. A negative response to the job one is doing; a feeling of disengagement. It can come from intrinsic factors, particularly “impostor syndrome,” or extrinsic factors such as a “socially toxic work environment.”

Professional inefficacy. A result of negative self-evaluation, when someone feels that their job offers “no future,” or thinks that the job has taken away their freedom and feels the “erosion of soul.”

What are the practical triggers of burnout?

Inadequate Compensation. According to a Kronos survey, inadequate compensation is the biggest trigger (41%) among those who have experienced burnout in the workplace. It is a classic dilemma of any job. The solution becomes clear when the employee is willing to communicate their needs to the employer, rather than agonizing over the salary alone.

Unimaginable Workload and Overtime Work Culture. These two factors are the second most prominent trigger of burnout (still according to Kronos).
They are controllable if the employer is willing to be fair and proactive about the matter.

Significant Pressures. It is natural to feel pressured, especially when you have a particular duty or responsibility to take care of. Nevertheless, it is worth noting that the stress must not stop you from staying committed and reliable as a professional.

Poor Management. This is one of the most prominent external factors that trigger burnout among employees. The manager plays a vital role in looking after the team: not only its work, but also its wellbeing. For instance, a project manager who does not plan timelines carefully will pile unreasonable workloads onto the team as the deadline approaches, which inevitably leads to burnout.

Disconnection from the Company’s Mission. Nobody starts a new job feeling burned out. Apart from the triggers mentioned above, burnout can occur when a job that used to feel important, meaningful, and fascinating becomes unpleasant, unfulfilling, and meaningless. At the same time, you might be worried about your future in the company you’ve been loyal to.

How do you tackle burnout when you think you have it?

Be Mindful of your Thoughts and Wellbeing. Mindfulness is a magic word if you truly understand its meaning. Being mindful means being able to appreciate your surroundings from multiple perspectives. Try not to feed your anguish; concentrate on positive things instead. It is also essential to be mindful of your own body: eat healthily and sleep well (at least 7 hours a day). Declutter the negative thoughts to clear your mind.

Stay Motivated. Stress is sometimes just a distraction from your goal, and it is easier to take back control when you’re highly motivated. So whenever you feel “disconnected,” remember your long-term plan and why you chose that particular position in your career. Strong motivation helps you focus on doing your work.

Set Boundaries. When you’re feeling overwhelmed with work, it’s okay to ‘unplug’ for a moment. Unplugging does not have to mean asking for a day off and going on vacation. That can be the ultimate solution for some people, yet a day out can also leave more time for overthinking if you don’t manage it well, so make sure you know your purpose when you ask for time off. Unplugging can also happen in small ways: set aside a particular time during your break for simple therapy, such as clearing spam from your email, spending five minutes singing your favorite song, or greeting a friend over chat. Another, more moderate solution is to uninstall your mobile email app so you aren’t distracted during your quality time at home. Just don’t forget to leave your contact details in case the boss needs to call you about an emergency.

Know your Coping Mechanisms. Eventually, everyone has their own coping mechanisms to wash away the bad vibes. It can be anything from interacting with nature, listening to your favorite band, or eating good food, to just crying yourself to sleep, all to clear your mind before being able to ‘function’ normally again.

Apply a Scale of Priority. Put your to-do tasks into the graphic below and see which ones should be prioritized over the rest.

Why should employers mind this phenomenon?

Businesses strive to increase job satisfaction in the workforce because it lowers employee turnover, enhances engagement, and significantly improves productivity.
The burnout phenomenon among workers can hinder those goals if it is not well taken care of. Apart from the financial reasons, the humanitarian mission of maintaining employees’ wellbeing matters because employees are an integral part of the organization. If your employees are the fruit garden you planted, their wellbeing should be looked after by giving them enough space, water, and sunlight to grow and bear sweet fruit.

What should you do as an Employer to Prevent Burnout in your Team?

Understand your Team Inside Out. The better you know your team, the more easily you can spot behavior drifting toward unproductivity. Arranging social events is a reliable way to build friendlier relationships among workers; it allows them to express themselves and feel welcome.

Provide Autonomy. Autonomy lets workers control their work and stay flexible about the method, as long as the final goal is met. Giving your team members the freedom to choose their tasks and schedules reduces the chance of burnout and, at the same time, boosts their confidence.

Walk the Talk. The tech industry is known as the leading industry in today’s world, and it comes with grandiose claims about the work startups are doing to transform the world and improve society’s life. There is an exceptional sense of value someone feels when they are part of this ‘movement,’ which can also strengthen their business acumen. Unfortunately, along the way, the values they believe in might not meet their expectations; when they realize their work makes no significant impact, they become demotivated. Leaders should take special note: live up to those high expectations by ensuring you can walk the talk, and make sure each individual shares the vision of where they and the team are heading.

Show Appreciation and Give Rewards. When you appreciate employees for a task they have executed well, they feel more valued in the company and know that their hard work has paid off. Appreciation helps lift the pressure they have been handling and lowers burnout levels. It can be a memorable finish to one project and a fresh start for the next.

Build a Diverse Team. Diversity has been shown to foster a friendly working environment. It improves the input and problem-solving ability of a team, since it brings together people from different backgrounds with different perspectives.

A Matter of the Person or the Job?

The debate over whether burnout is a people problem or a job problem is still going strong. There is a common saying that people who experience burnout are weak because they lack psychological resilience. Others say that burnout is just another form of complaining and of not being grateful for one’s opportunity. Ultimately, burnout is a real phenomenon in the workplace. Reflecting on the definitions above, burnout is both a personal problem and a phenomenon triggered by workplace circumstances.

Conclusion

Tech companies, startups, fast growth, and high-profile digital products are arenas with typically high expectations, tight deadlines, and high chances of burnout. Hence, it is crucial for leaders in this industry to look after their employees’ wellbeing. Remember, any problem you face is not yours alone but also a matter for the organization. You can always communicate the problem rather than keeping it to yourself. Have any thoughts? Drop your comments below. ✨ Looking for the right partner for your digital transformation? Let’s talk or visit Glovory.com now!
Contributor: Rachmadita Kusumastiti
https://medium.com/@glovory/tackling-burnout-on-tech-workers-8279d46b3d37
[]
2020-12-16 04:38:27.131000+00:00
['Work Life Balance', 'Tips', 'Workplace', 'Workplace Culture', 'Human Resources']
What is Model Complexity? Compare Linear Regression to Decision Trees to Random Forests
A machine learning model is a system that learns the relationship between the input (independent) features and the target (dependent) feature of a dataset so that it can make useful predictions in the future. To test the effectiveness of the model, a completely new, similar dataset is introduced that contains only the input features, and the model predicts the target variable based on the insights it gained during training. An appropriate performance metric is then used to compare the predicted values against the actual target values.

In this article, we are going to test the effectiveness of 3 popular models that vary in complexity:

Linear regression
Decision Tree
Random Forest

We are then going to check their performance using two metrics: mean squared error and mean absolute error.

To accomplish this we need a dataset, and we are going to create a brand new one ourselves. To understand why I created my own dataset, let us first understand what machine learning is. Machine learning is the process of identifying patterns in data. You can think of a pattern as the ‘way that things work’. Machine learning does this by understanding the relationship between the features (X) and the label (y). Some patterns are simple: high temperatures leading to increased ice cream sales is a simple linear relationship. Others are complex, such as the details of a house and how they affect its price, and this is where machine learning shines. In my case, I created a dataset where the relationship between the X and y features is a cosine curve.

Image by author

Step 1: Dataset simulation

In general, everything has a certain way in which it works, and a great model should reveal this pattern. In our case, the ‘true state’ is the cosine curve y = cos(x). We will then add some noise to simulate ‘real state’ values, since real-world situations are never perfect.

The first step is to initialize our x values. I used np.linspace(start, end, length). We want our axis to start at 0, end at 2*pi (about 6.28), and have 100 evenly spaced steps.

x = np.linspace(0, 2*np.pi, 100)

To get the true cosine curve above, I used y = np.cos(x). However, we will add noise all around the cosine curve to create our simulated y values.

true_y = np.cos(x)

To create the noise, I used np.random.normal(mean, std, length). This creates normally distributed random values centered around the mean and spread according to the std (standard deviation). I used 0 for the mean and 0.5 for the std, so most of the noise values will fall between -0.5 and 0.5. I set a random seed of 567 so that the generated sample noise was reproducible; otherwise, new data would be generated every time the code was run.

np.random.seed(567)
noise = np.random.normal(0, 0.5, 100)

The last step was to add the true y mapping to the noise to create our target variable y.

y = np.cos(x) + noise

Now our values are ready, and we will create a dataframe to hold the x and y values. We will call this dataframe train, since this will be the training set to train our models on.

train = pd.DataFrame({'x':x, 'y':y})

Now we can plot both the cosine curve (the true relationship) and our simulated train data.

plt.scatter(train.x, train.y)
plt.plot(train.x, np.cos(train.x), color='k')
plt.xlabel('x')
plt.ylabel('y')
plt.show()

Image by author

Test set

We will also simulate a test set.
This will be important for testing the effectiveness of the models, by checking how well they can predict y values using unseen x values from the test set. We create the test set exactly the same way as the train set, except for the noise, which we will create using a different random seed of 765.

x = np.linspace(0, 2*np.pi, 100)
np.random.seed(765)
noise = np.random.normal(0, 0.5, 100)
y = np.cos(x) + noise
test = pd.DataFrame({'x':x, 'y':y})

Let us plot the train (blue dots) and test (red dots) sets together.

Plot of train and test simulated data by author

Step 2: Model creation

Simple Linear regression

This model fits a straight line through the data. The formula for a simple linear regression model is ŷ = β0 + β1x, where β0 is the y-intercept and β1 is the coefficient for x, or the slope of the line; ŷ is the predicted value. The intercept and the coefficient are learned from the data, so the model’s task is to find the best estimates of β0 and β1 that represent the true patterns in the data.

I used scikit-learn to implement the model. First we import the library, then initialize an instance of the model.

from sklearn.linear_model import LinearRegression
basic_linear_model = LinearRegression()

Next we separate our features (x) from the target variable (y) in the dataset. I used train.drop('y', axis=1); it is common practice to just drop (remove) the target feature and use the rest of the columns as the input features.

features = train.drop('y', axis=1)
target = train.y

The final step is to fit the model to the data. This is also called training the model, and it is where the best estimates of β0 and β1 are determined.

basic_linear_model.fit(features, target)

You can get β0 and β1 from the intercept_ and coef_ attributes of the linear regression model.

print(basic_linear_model.intercept_)
print(basic_linear_model.coef_)

### Results
0.09271652228649466
[-0.0169538]

To make predictions we use model.predict(features). The underlying function is β0 + β1x, so our predictions are represented by a straight line which we can plot together with our simulated data set.

y_preds = basic_linear_model.predict(features)
plt.scatter(train.x, train.y)
plt.plot(train.x, y_preds, 'k')
plt.show()

Image by author

As observed, the linear regression model assumes a linear relationship in the data, which is not a good representation of our data.

Polynomial Linear Regression — adding complexity

Unlike simple linear regression, polynomial models add curves to the fit by adding polynomial terms of x, for example x². Let us first create a second-order polynomial model by adding x² as another column to our data set. We will call the new column x2, and the formula now is ŷ = β0 + β1x + β2x². I created a copy of the data frame to preserve the original. To get x2, I used np.power(value, power).

df2 = train.copy()
df2['x2'] = np.power(df2['x'], 2)

Create a reusable function: To avoid a lot of repeated code, I created a function that initializes a model, separates the data set into features and target variables, fits the model, then plots its predictions. The function takes in a dataframe and a model, and returns the trained model.

def model_fitter(data, model):
    features = data.drop('y', axis=1)
    target = data.y
    model.fit(features, target)
    y_preds = model.predict(features)
    plt.scatter(data.x, data.y)
    plt.plot(data.x, y_preds, 'r--')
    return model

Let us now call the function and create our second-order polynomial model.

polynomial_second_order = model_fitter(df2, LinearRegression())

Second order polynomial model by author

Wow, look at that! It models the curve in the data very well. But we will know more when we make predictions on completely new, unseen data. I also created a more complex third-order polynomial model whose results turned out almost identical to the second-order model. As a general rule, if two models perform equally well, it is better to choose the less complex one, as it usually generalizes better to new data.
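The code for that third-order model isn’t shown in the article; here is a minimal sketch of how it could be built, following the same pattern as the second-order one (the df3 and x3 names are just illustrative):

# Add quadratic and cubic terms, then reuse the same helper as before.
df3 = train.copy()
df3['x2'] = np.power(df3['x'], 2)
df3['x3'] = np.power(df3['x'], 3)
polynomial_third_order = model_fitter(df3, LinearRegression())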
Decision Trees

Decision trees build a model by repeatedly splitting the data into smaller subsets, choosing splits on the input features that best separate the target values. This branching nature allows them to capture very complex relationships in the data. We will first create a decision tree without any parameters. This creates an unconstrained decision tree that keeps splitting into smaller and smaller subsets until the final leaf nodes contain only 1 value.

Image from www.datasciencecentral.com

Let us see this concept in code:

from sklearn.tree import DecisionTreeRegressor
decision_tree_unconstrained = model_fitter(train, DecisionTreeRegressor())

Unconstrained Decision Tree by author

The lines represent our predicted values, and clearly the model has completely learnt the train data, predicting every point perfectly. This is called overfitting, and such a model will not generalize well when given new data. The idea is for a model to ignore the noise and learn the actual signal, but this model learnt the noise as well.

Constrained decision tree by depth

We will add some constraints to our decision tree by limiting the number of levels the tree can go down. In our case, we will set a maximum depth of 3, meaning the tree will only be able to branch down 3 levels before arriving at a prediction.

decision_tree_by_depth = model_fitter(train, DecisionTreeRegressor(max_depth=3))

Decision Tree with max_depth=3 by author

This was a better performance. You can see the branching nature of the decision tree in the step-like predictions, unlike the smooth curve of the polynomial linear regression. Let us use another parameter to constrain the model.

Constrained decision tree by leaf

Here we will set the minimum number of samples each leaf must have before making a prediction to 5. This means that the smallest subset will have 5 values of x, preventing the scenario we had with the unconstrained decision tree, which had one sample per leaf node.

decision_tree_by_leaf = model_fitter(train, DecisionTreeRegressor(min_samples_leaf=5))

Decision Tree with min_samples_leaf = 5 by author

There are several parameters you can use with decision trees. The process of choosing the best model parameters is called hyperparameter optimization or tuning.
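Tuning is not shown in this article; as one hedged illustration of what it could look like, here is a sketch using scikit-learn's GridSearchCV (the grid values are arbitrary examples of mine, not taken from the article):

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Search a small, illustrative grid of decision tree parameters
# using 5-fold cross-validation on the training data.
param_grid = {'max_depth': [2, 3, 5, None],
              'min_samples_leaf': [1, 5, 10]}
search = GridSearchCV(DecisionTreeRegressor(), param_grid,
                      scoring='neg_mean_squared_error', cv=5)
search.fit(train.drop('y', axis=1), train.y)
print(search.best_params_)  # best combination found by the search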
Random Forests

You can think of a random forest as an ensemble of decision trees. Random forests train many decision trees in parallel, each on a random subset of the observations, and combine their predictions into a single prediction. This is very effective in preventing overfitting. Because each tree is trained on a random set of observations, we need to set a random_state when initializing the model to get reproducible results.

from sklearn.ensemble import RandomForestRegressor
random_forest_unconstrained = model_fitter(train, RandomForestRegressor(random_state=111))

Unconstrained Random Forest by author

The unconstrained random forest is still overfit, but not as much as the unconstrained decision tree. We can also constrain the random forest the same way we did the decision tree, with max_depth=3 and min_samples_leaf=5.

random_forest_by_depth = model_fitter(train, RandomForestRegressor(random_state=111, max_depth=3))

Random forest with max_depth=3 by author

random_forest_by_leaf = model_fitter(train, RandomForestRegressor(random_state=111, min_samples_leaf=5))

Random Forest with min_samples_leaf=5 by author

As observed with the constrained random forests above, the prediction curves are generally smoother than those of the decision trees, which appear as steps. This is because the forest’s trees are trained on different subsets of the observations, and averaging their predictions produces better-generalized results. We have only explored two parameters of random forests, max_depth and min_samples_leaf. Find the expansive list here.

Step 3: Testing model performance on the unseen test data set

We will evaluate our models using two commonly used metrics for regression machine learning: the mean squared error (MSE) and the mean absolute error (MAE). MSE takes the average of the squared errors; squaring scales larger errors up, so they are penalized more. MAE takes the average of the absolute errors. A lower MSE and MAE is preferred. MSE is generally a bit faster to calculate, while MAE is easier to interpret.

from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
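To make the difference between the two metrics concrete, here is a tiny worked example with made-up numbers (not from the article):

import numpy as np

actual = np.array([1.0, 2.0, 3.0])
predicted = np.array([1.5, 2.0, 5.0])
errors = predicted - actual        # [0.5, 0.0, 2.0]

mae = np.mean(np.abs(errors))      # (0.5 + 0.0 + 2.0) / 3 ≈ 0.833
mse = np.mean(errors ** 2)         # (0.25 + 0.0 + 4.0) / 3 ≈ 1.417

Notice how the single large error of 2.0 dominates the MSE far more than the MAE; that is the extra penalization the squaring provides.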
Let us now make predictions on our test data. I created a function to automate this. The function takes in a test dataset, the trained model, and the name to be printed. It separates the test set into features and target variable, makes predictions on the test set, calculates the MAE and MSE, and prints them.

def model_performance(data, model, name):
    test_features = data.drop('y', axis=1)
    test_target = data.y
    test_preds = model.predict(test_features)
    mae = mean_absolute_error(test_preds, test_target)
    mse = mean_squared_error(test_preds, test_target)
    print(name)
    print('MAE', np.round(mae, 3))
    print('MSE', np.round(mse, 3))

For the second-order polynomial linear model, we need to engineer the same features as for the train data set by adding another column to our test data that contains x² values. We will first create a copy of the test data, then add the x2 column.

test2 = test.copy()
test2['x2'] = np.power(test2.x, 2)

Now we will run the function for every model and return the mean squared errors and mean absolute errors.

model_performance(test, basic_linear_model, 'Basic linear regression')
model_performance(test2, polynomial_second_order, 'Second Order Polynomial Model')
model_performance(test, decision_tree_unconstrained, 'Unconstrained Decision Tree Model')
model_performance(test, decision_tree_by_depth, 'Decision Tree Constrained by max_depth = 3')
model_performance(test, decision_tree_by_leaf, 'Decision Tree Constrained by min_samples_leaf = 5')
model_performance(test, random_forest_unconstrained, 'Unconstrained Random Forest')
model_performance(test, random_forest_by_depth, 'Random Forest Constrained by max_depth = 3')
model_performance(test, random_forest_by_leaf, 'Random Forest Constrained by min_samples_leaf = 5')

Below are the results:

Basic linear regression
MAE 0.677
MSE 0.669

Second Order Polynomial Model
MAE 0.427
MSE 0.277

Unconstrained Decision Tree Model
MAE 0.523
MSE 0.434

Decision Tree Constrained by max_depth = 3
MAE 0.441
MSE 0.289

Decision Tree Constrained by min_samples_leaf = 5
MAE 0.427
MSE 0.287

Unconstrained Random Forest
MAE 0.47
MSE 0.333

Random Forest Constrained by max_depth = 3
MAE 0.424
MSE 0.28

Random Forest Constrained by min_samples_leaf = 5
MAE 0.416
MSE 0.276

We can see that on both metrics, the random forest constrained by leaf has the lowest error, making it the best-performing model. As expected, the simple linear regression was the worst performer, as the data clearly lacked a linear relationship. The second-order polynomial model, whose curve seemed to fit so well, came in a close second. This goes to show why the random forest is so widely used in applied machine learning.
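The article stops at a single train/test split. A possible next step, not covered here, would be to compare models with k-fold cross-validation instead; a rough sketch using scikit-learn's cross_val_score, assuming the train dataframe from earlier, might look like this:

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

# Evaluate the leaf-constrained random forest across 5 folds of the training data.
features = train.drop('y', axis=1)
target = train.y
scores = cross_val_score(
    RandomForestRegressor(random_state=111, min_samples_leaf=5),
    features, target, scoring='neg_mean_squared_error', cv=5)
print(-scores.mean())  # average MSE across the 5 folds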
https://towardsdatascience.com/what-is-model-complexity-compare-linear-regression-to-decision-trees-to-random-forests-7ec837b062a9
['Susan Maina']
2020-12-29 04:47:41.753000+00:00
['Machine Learning', 'Algorithms', 'Data Science', 'Data', 'Programming']
Design systems: History to present and future potential
Unless you live on the moon, you have probably heard the term “design system”! This revolutionary concept has rebalanced the design world! Years ago we couldn’t imagine anything more helpful than pattern libraries, until atomic design became a thing and then matured into the concept of design systems we know today, bringing new core values: consistency, efficiency, and scale, nothing less. So how did it all start, and where might it be going in the future?

Design systems history: from early art forms to actual systems

Generally speaking, the roots of the design field’s history date from the first human interest in logo branding and typography. The history of design systems is long and full of household names. Without digging deep into details, I would like to sum up the history of design systems in phases.

1/ The humble beginnings of the design concept

It began with the invention of movable type in the early 1400s and the rise of neutral typography, rare headings, and iconic images. The term “graphic design” was coined by William Addison Dwiggins in 1922 to describe his process of designing books as a combination of typesetting, illustration, and design. And it never stopped growing with the rise of the artistic movements of the 20th century.

2/ The industrial revolution and the attempts to mix art and technology

Some Italian artists thought it would be cool to merge the technology of the industrial revolution with art. So they started experimenting with typography, geometric forms, and color, which slowly evolved into a defined design process.

3/ The start of computed design and the challenges designers faced

With the emergence of computers in human life and the first steps toward digitalizing administrative and government work, design took on another definition: a system of relationships between all aspects of a problem to be solved, from shapes and sizes to proportions, tools, and so on. Good visual appeal was hard to achieve, and mistakes and lack of consistency became an insoluble problem.

4/ The invention of design guidelines and the appearance of the functional design approach

Guidelines and general aesthetics started being defined by Swiss designers with European modernism, in an attempt to create a rational method and a codified approach that helps create better designs without relying too heavily on the individual designer’s natural artistic talent. They categorized design problems and turned them into a series of simple constraints and rules. The move was followed by the use of the grid system to order page elements and the formulation of “functional design,” and design programs emerged that functioned on strict modular principles.

5/ The evolution of UI/UX design

With the establishment of the age of software, the first graphical user interfaces were developed through the work of Douglas Engelbart and Alan Kay, and they soon evolved alongside the natural evolution of human interaction with text and digital screens to become a very important and trendy field. This was soon followed, in the 1990s, by the development of UX (user experience) design, born of the need for usability and user satisfaction. Both concepts were challenging to sustain and required stricter design principles.

6/ Pattern libraries

Before design systems existed in UI/UX design, designers used pattern libraries to make their work more repeatable. Pattern libraries are collections of design elements that appear repeatedly on a site.
A library helps define what elements look like and how they are coded, or provides UI/UX kits. Yet it was still challenging to maintain consistency, and mistakes were often noticeable. Then came the framework boom around 2010, when a major shift happened in digital products: businesses started demanding the early mobile apps, which ended up with huge problems of non-responsive UI and inconsistent UX.

7/ Atomic design

A question started rising: why bother visually designing every state of every screen when you can think more efficiently and operate by assembling reusable UI/UX components? And so began the adoption of a new technical approach that makes design components operate as an atomic system. The concept is simple: atoms form molecules, and molecules form organisms that can flexibly adapt to required change!

8/ The first design system

Google’s Material Design was the first design system of note. It debuted in 2014, leveraging the best practices of design, building on the atomic approach, and keeping the utility of pattern libraries. The new method provided better material design, a unified visual language, and help with keeping brand consistency: a concept that would mark the present.

Where are we today?

Over the last 20 years, design systems never stopped evolving, until having one became a strategic move every business aims for. As a general definition, a design system is a collection of detailed standards for design and front-end code, made to be easy to use and reuse with complete efficiency, consistency, and scalability, allowing teams to build high-value digital products. The biggest leading firms invested in design systems simply because they ensure efficiency for everyone: like a bond of atoms creating molecules that combine into an organism, they guarantee consistency, optimization, efficiency, and scale. Many think that the standardization and optimization design systems insist on rob designers of the ability to explore and experiment, yet they undeniably give teams the power to scale work and serve hundreds of projects and users in a short time.

What’s next?

The future of design systems is surely bright. They are expected to evolve into more powerful systems that tackle different challenges and give more space to designer creativity. The number of users is rising day by day, so in the future we might see everyone using a design system, while those who are not lose all competitive advantage. Many new features will be added. Custom plugins are expected to keep design systems up to date automatically, so that teams can access elements or components quickly without having to worry about which element to use or where to pull it from. Design systems might become customizable or even personal, and why not purchasable through mobile apps or even smartwatches! Truly, we cannot limit expectations!

Related Articles:

Basics you need to know about Atomic Design System
Design systems to change designer vision!
Reasons that make the Design Systems inevitable
The impact of design systems on adopter organizations & users
Design System: beyond just UI kits
Common Myths Surrounding the Design Systems in 2020
Design system team Management
Advantages of Having a Design System in Your Plans
Tips to create the Perfect Dark Themes in Software Design
Tips to Create Blurred Backgrounds in Software UI Designs
https://medium.com/cohort-work/design-systems-history-to-present-and-future-potential-9a2529805afd
['Rania Mdimagh']
2020-12-07 06:44:50.642000+00:00
['Design Thinking', 'Design', 'History', 'Technology', 'Design Systems']
Navigating a Triaspora
Down the Rabbit Hole

Vancouver, photo by me

Recently, I’ve shared less about my personal life online, so I thank you for taking the time to read this. I’ve been in a reflective period of my life for the past year, especially due to the pandemic, and I’ve been asking myself a lot of questions. They’re also questions I can’t really ask the people around me for advice on. I came into this whole self-identity and ethnicity conflict when I moved to Vancouver in 2019 to begin my Masters of Public Health program. Having lived in Edmonton my entire life, I really needed a fresh perspective and decided to move there despite knowing few people in the city. Time flew by during the first semester of my Masters program, and interestingly, I became close friends with an Ethiopian girl and a white girl. I say interestingly because (as you know, if you know me) all of my friends are East Asian (EA), and this has been my reality for the entirety of my life. I never questioned this until I moved away. After forming these new friendships in Vancouver, and seeing my Ethiopian friend’s passion for her culture, language, and food, I realized I couldn’t say the same about myself and my own culture, which led me down this rabbit hole of questioning my identity. These friendships were also a refreshing change. Because all of my current friendships are so healthy and fulfilling, I never really had the reason or introspection to reflect on myself and ask these questions. It took a combination of moving to Vancouver and (perhaps) maturing for me to realize and question the following aspects of myself:

I have been completely steeped in various aspects of EA culture since my childhood. Why is it that when I hang out with my friends we only go out to eat Japanese, Korean, or Chinese food? Why do I also only cook these dishes and learn these recipes?

A good majority of my friends are East Asian, and looking back I often felt the need to subconsciously hide my ethnicity and assimilate to engage in these friendships. Thoughts went racing through my mind: do my friends even know I’m Sri Lankan? Even today, I am genuinely curious about this.

None of my friends have been curious about my culture. As an example: why have none of my friends ever even asked to try Sri Lankan food? Why are none of the food options raised by the group when we go out to eat ever even South Asian, at the least?

I discuss food a lot throughout this because I think that food IS culture. Food, for me, is the easiest gateway to experiencing someone’s culture. I mean, how do people pass down their culture other than language and other forms of media? Reflecting on this, I realized that another area where EA culture plays a big role in my identity is travel and language. I note how much praise and surprise I get from my East Asian friends and their parents when I understand parts of their language, have visited their countries, and engaged in their culture. I often feel that I’m not in a diaspora (trapped between two worlds: my parents’ culture and Western culture), as is common with many BIPOC first generations, but that I’m experiencing something more like a “tri-aspora,” where I’m trapped between Western culture, different forms of East Asian culture, and Sri Lankan culture.
Expanding on this idea: many POC first generations find themselves unable to fully relate to either Western culture or their parents’ culture, whereas for me, because of this third element of EA culture, I was also outed by other Sri Lankan people in the community for “being too [East] Asian,” which isolated me from fully forming some connections earlier in life as well. One huge factor that stood out among all my reflections was that there was little reciprocity of cultural exchange in my friendships. As genuinely good as these friendships are, why are my friends not curious about ME and MY culture? This is when I came to a more painful realization: at a maximum, 50% of the responsibility for my friends not experiencing Sri Lankan food and culture was on my friends, but most likely a larger share of the responsibility was on myself, and the self-shame that came with hiding my ethnicity. Point blank: although my friends never asked to experience any of my culture, I never invited them either, and this led me to my second stage of reflection: dealing with my own internalized racism.

Internalized Racism and Events Shaping Self-Perception

Downtown Edmonton, photo by me

How did I become like this? When did I start hating and becoming embarrassed of my culture, ethnicity, and identity? And ultimately, how did this lead to my lack of invitations for people to experience my culture? I had a phone call with one of my best friends in Toronto to clear my head and run through some of my thought process with him. One part of the conversation was especially memorable. He asked: “Are you proud to be a Sri Lankan person?” I honestly couldn’t say yes to that, but I said, “I’d like to be, but that’s not where I am right now.” I then countered with, “Are you proud to be a Hong Konger?”, to which he said yes. But then I asked him to think about why that was. Hong Kong is seen as an ideal travel destination, people love Dim Sum, and Mahjong is seen as cool. These are a few elements of Hong Kong culture that people are familiar with and able to enjoy. Even the recent event around the culturally appropriated Mahjong set developed by a group of white girls who wanted to “refresh” the game (yikes) is yet another indicator that East Asian ideas and cultural symbols like Mahjong are cool. But the point is that I have difficulty finding an equivalent aspect or artefact of my culture with such impact, and this ties into the idea of cultural capital that I’ve alluded to above. Cultural capital, an idea originally coined by sociologist Pierre Bourdieu, is defined as “symbols, ideas, tastes, and preferences that can be strategically used as resources in social action” (Oxford Reference, n.d.), as well as “familiarity with the legitimate culture within a society” (Cultural Learning Alliance, 2019, para. 2). Although this concept was originally used to describe aspects of high culture in Western society, it can similarly be used to explore the idea of domineering EA culture in media, food, and other cultural artefacts in the West. I could write an entire tangent about cultural capital but will keep it short for the sake of brevity. I think travel is the cherry on top by comparison: many people view East Asia (especially places like Japan, Korea, and China) as “cool” and “trendy,” whereas places in South Asia are more often seen as “dirty” and “dangerous.”
I’ve seen this firsthand as well, for example when viewing an EA person I follow on Instagram’s responses to an “ask me anything” thread, where their response to someone saying “Come to India!” was “I’d like to, but I heard it’s too dangerous.” Although it might be true that travelling anywhere in general comes with certain danger, their overall positive reception to other EA locations in that same thread reveals a sort of hierarchy in favorable travel destinations. As I don’t have the same pride in my racial identity, I reflected on my experiences below:

Memorable racial microaggressions and racialized life experiences

Grade 4: Being upset about the 2004 tsunami that hit Sri Lanka, and telling my Chinese friend about it in class, his response being “damn did all the dirt roads there get washed away?”

Elementary-High School: Not wanting any traditional food for lunch because I didn’t want to bring things that might be off-putting to people.

Junior High: Korean friends instilling a hierarchy in me that I didn’t question until the end of high school: that Korean people were the best race, followed by Japanese, Chinese being the worst, and any brown people way below any of these.

Junior High-High School: I was the darkest person in my friend circle, so I got dubbed the “black guy” or played this offensive caricature of a “Jamaican guy”, which led to me saying the N-word, other friends calling me the N-word, and living my life as this “funny” caricature version of myself, which perpetuated anti-black racism and other harmful tropes. As an example, for my 18th birthday, some friends bought me a green shirt with a Jamaican flag, and a pink headband with watermelons on it, further casting me into this “role” and perpetuating harmful stereotypes.

Junior High-High School: Hating my curly hair, straightening it, and getting outed and ridiculed for doing that by EA friends in high school.

2016: Getting racially profiled while getting a ride to my cousin’s place, on a vacation to New York; getting pulled over, arrested, and spending 5 hours locked up in a cell in the New Jersey State Police Department (this is a long one so ask me in person if you want!)
Throughout my life: Generalization. People seeing all brown people as being the same, all brown food being the same, and people continually identifying me as an Indian person. I wouldn’t assume that all EA people are Chinese, so I’m not sure why so many people identify all brown people as Indian.

*Lately I’ve been super conscious about my use of racial identifiers as descriptors, especially as they’ve been misused so often by the media as racial microaggressions (i.e. “black male suspected of robbery on 24th street” instead of “22-year-old male suspected of robbery on 24th street”), but here they’re used purposely to illustrate inherent colorism.

Colorism in the Community

I think part of the experiences I’ve outlined above are due to colorism in the Asian community, and I’ll illustrate that through my personal experiences in Singapore and in virtual communities. The textbook definition of colorism is as follows: “prejudice or discrimination especially within a racial or ethnic group favoring people with lighter skin over those with darker skin” (Merriam-Webster, 2020). Colorism is an interesting phenomenon because it applies not only between different ethnic groups, but within them. A few common examples of colorism are the increasing popularity of skin-lightening cosmetic products in Asia, and the general fear of getting “too dark”. Skin-lightening cosmetics in the Asia-Pacific region make up 7.5 billion dollars of the industry, with China producing 40% of total sales, followed by Japan at 21% and Korea at 18% (CNN Health, 2017). A UC Irvine Law Review article by Trina Jones (2013) outlines the concepts of colorism in both Asian and Asian-American communities well: “For example, a Cambodian-Chinese man stated, “In the Cambodian community, [dark skin is] associated with less intelligence, laziness, working manually and lower class, and unattractiveness. Business people are the lighter-skinned ones, more intelligent, more ethical and morally superior. . . . [People] want to look whiter because it’s associated with wealth and status [in Cambodia].”48 Similarly, a Taiwanese man reported, “Light skin is the standard of beauty in Taiwan. . . . Wealthy people tend to be light-skinned, while darker people are associated more with low socioeconomic status. . . . ” (p. 1116)

One of my personal, anecdotal experiences with intra-racial colorism was my realization that lighter skin is prized and prioritized among Sri Lankan people as well. For example, I was shocked when my skin was lightened during post-production in a family portrait taken by a Sri Lankan photographer. Moving past colorism that exists within ethnic/racial groups, there is also inherent colorism that exists between racial groups. As Jones (2013) writes: “Awareness of this additional use of skin color is important if, as sociologist Eduardo Bonilla-Silva argues, severe gaps in income, educational attainment, and professional status are emerging between what Bonilla-Silva calls honorary White Asians (Japanese, Koreans, Filipinos, and Chinese) and those who are more likely to be darker and among the Collective Black (Vietnamese, Cambodian, Hmong, and Laotians)…” (p. 1120).
In expressing similar sentiments, I would also include the South Asian community in Jones’s “Collective Black” group as well.

From Singapore

It was 2018 when I was looking into Commonwealth countries where I could take part in a Queen Elizabeth Scholarship Program. I decided on Singapore, as I managed to find an interesting topic surrounding zooplankton and aquatic ecosystems, which I was always interested in. The government website also emphasized “racial harmony,” with English being the major spoken language, so I felt comfortable choosing Singapore to travel to. Singapore was the first time in my life where I experienced both racism and colorism together. I quickly learned that there is prevalent structural racism that intersects with underlying colorism, with the majority Chinese population oppressing the darker-skinned minorities who live there: mainly the Malay and Indian people. This was my first time experiencing overt racism from someone who wasn’t white, which had been my typical context in a Canadian setting.

Me, with my treasured mobility knee-scooter & friend Valentin, photo by Cassandra Loh

Due to my infamous ankle strength (this is written ironically, my ankle strength is terrible), I managed to fracture my toe after my ankle rolled while at work, and this is when I got to know the real Singapore. I came to the realization that 95% of the darker-skinned folks I saw, whether Filipino, Indian, Bangladeshi, or other South and South East Asian (SA/SEA) workers, were involved in construction or manual labor jobs. A few friends I made in Singapore told me of their country’s huge migrant-worker problem: how thousands of people in SA/SEA are recruited under false premises to work and live a “better” life in Singapore. In reality, these migrant workers are lured into thousands of dollars’ worth of debt and are treated as almost sub-human: like machines or animals. Their passports are confiscated, and they are not allowed to leave. Because there are no minimum wage regulations in Singapore, they’re likely paid just a few dollars per hour in unsafe working conditions (Singapore Ministry of Manpower, 2020). Talent and wealth are further drained out of SA/SEA and funneled into the cheap labor that builds Singapore’s characteristic skyscrapers and high-density housing.

A photo of everyday Singapore, photo by me

From my experience there, I felt that the common notion of darker-colored people in Singapore working in construction or “low class jobs” shaped the underlying colorism in Singapore. Because of my fractured foot and temporary disability status, I believe that my experiences were a subtle blend of racism, colorism, and ableism, and I felt them all amplify the already unprivileged status of being ‘brown’ and darker-skinned in Singapore. One memorable instance I can think of was pressing the disability button on the bus for the ramp to be activated and the bus driver REFUSING to get out of their seat and pull out the ramp for me to board the bus. They proceeded to yell at me in Mandarin, and luckily my Mandarin-speaking Singaporean friend was there to defend me and yell back at the bus driver, who begrudgingly came and lowered the ramp so I could board the bus, swearing at me the entire time.
You might argue that I’m exaggerating being racially marginalized in Singapore, and that perhaps I’m taking different instances of discrimination (disability) and conflating them with the racial ‘inferiority complex’ that I built growing up in Canada. But these experiences showed me that this is not the case. After my foot had healed and I was no longer visibly disabled, I was trying to find a seat to eat my meal in a busy food court at one of Singapore’s bigger malls, and I got rejected from every single table that had a few empty seats for 20 minutes, until my food went cold and I found a McDonald’s nearby to eat my meal. I remember asking one lady, who was clearly done with her meal and was on her phone, for her seat, and she rejected me by motioning with her hands for me to leave. Right as I was walking away, I watched as she gave up her seat with no hesitation to another Chinese Singaporean person who asked her. My Singaporean friend, who was hired for an internship in Singapore, told me about small talk she had with her HR manager while getting her paperwork set up to begin her position. This HR manager was reviewing a few applicants who were going to work at a café in the entrance of the office, and said: “this applicant seems great, but I don’t want someone with skin this dark to be working at the front.” She said this during idle talk, to someone who had just been hired, because she assumed another Chinese Singaporean person would relate to this sentiment, but my friend was disgusted. She was also unable to say anything due to office power dynamics as a new hire. At this point, the colorism isn’t subtle or a microaggression; it’s plain and out in the open, and it came from someone who is in charge of “human resources.” I think nothing highlights Singapore’s social problems more than the brownface ad for E-PAY, a new hawker-stall payment system, released by MediaCorp in 2019, which had a Chinese actor wear dark makeup, a wig, and fake facial hair to dress up as an Indian person, a Malay woman wearing a Tudung, and a Malay office worker. Given the structural racism in Singapore and the current racial hierarchy between Chinese people and the Malay, Indian, and other minorities, this was highly unacceptable. I want to highlight again that this was paid for and approved by the Singaporean government. Having now done a government internship and knowing how many checks and balances come before something gets released, this was quite shocking. And it doesn’t stop there: two brown YouTubers who lived in Singapore made a parody skit mocking the ad and calling out Chinese people for perpetually erasing their culture, and in turn these YouTubers were sanctioned by the police for creating an “offensive video.” People were more angry that these YouTubers spoke out against the racist campaign than about the racist act itself… There is a definite hypocrisy in the legislation that details “racial harmony” in Singapore, and it’s clear that more work needs to be done to shift the perspective of the majority. It was disappointing, but eye-opening, for me to experience this first hand.

Subtle Asian Traits & Asian Creative Network

The Facebook group Subtle Asian Traits (SAT) was an interesting worldwide social phenomenon that spoke to the first-gen “Asian” experience, but it very quickly became a page for East Asians, and even more so catered to Chinese Asians.
The first time I was told about the page was around the end of 2019 by a friend who said it was a funny page but: “Not for YOUR type of Asian, though.” This is when I first came to some reflections about how brown people were generally excluded from the term Asian, and why/how that even came to be. Another friend even told me to join the page “subtle curry traits” instead which I thought was stupid and even more exclusionary. I eventually left the SAT page due to the amount of problematic & explicitly colorist comments, i.e. EA folks (including people I knew) making jokes about brown people, and further telling them they don’t belong on this page. There were several sub-groups created out of SAT including one that I’m still a part of called Asian Creative Network (ACN). ACN is essentially a networking space for Asian folks engaging in any type of creative work whether it be photography, animation, videography, or even their hand-made items, although it is again a very EA dominant space. I don’t have as many qualms with ACN, other than the quality and repetitiveness of the content that has generally fallen into some clichés like endless Bubble Tea/Boba affiliated merchandise, but I definitely do still feel a level of exclusion or a lack of being able to share in broad sentiments like “a win for Asians!” when yet another EA person is cast into a movie role for example. I remember once making a comment on a post in this group about the lack of South Asian representation, and someone mentioned Aziz Ansari and Russell Peters as proof of equal representation for brown people… I think if you have to mention Russell Peters for brown representation in 2020 there’s definitely an issue, and the last time Russell Peters was funny was probably in 2005. When thinking about creative work, I can easily think of many white, Black and EA artists that I look up to, but I really have to wrack my brain to think of any Brown ones, let alone someone Sri Lankan. When discussing this with one of my only other Sri Lankan friends, I found out that he’s in a similar boat. We both realized that part of the reason we can’t find more pride or solace in our own ethnicity is the lack of prominent figures or respectable people in the fields we’re in. This again relates to the idea of a lack of cultural capital in the SA community. We don’t really have anyone to look up to, and this is reflected in the culture that people consume around us. Where do we go from here? Bromo, Indonesia, photo by me So you might be asking: what’s the outcome of this entire reflection? Do I completely stop consuming EA culture & throw away all my friends? NO. I have to insert here that I feel truly blessed to have experienced all the things that I have, and possess the richness of cultural knowledge that I do. This reflection is just a start to my personal journey in reclaiming and combatting my internalized racism, and I hope that in the future I will be able to have work that builds a form of cultural capital for Sri Lankan people and makes them proud of their own ethnicity. I hope I can find that for myself first. I’ve dropped a lot of unnamed people in this article. I know it’s difficult but I want to reiterate to de-center yourself, and not take any of this personally- I hope you’re reading this until the end! There’s so much here that took me so long to unpack, and we’re all learning to be better, including myself. 
I'm going to leave off with something a friend sent recently:

"THE CURIOUS PARADOX OF CHANGE: SELF-ACCEPTANCE

So what should we do if we want to set a resolution? Well, the first thing to do is take some time to reflect inwards. Fox Weber advocates taking an honest inventory of who we are first, getting an accurate picture of ourselves so we can really comprehend the material. This means not only "tolerating our conflicts, exploring in a compassionate way the various parts of ourselves" but also becoming aware of our environments, to understand what circumstances might challenge our ability to stick with our goals. "When we accept ourselves", as well as our circumstances, "we understand what we are working with, and we feel less judgmental of ourselves." We see the good bits and the problematic patterns, and we can look at ourselves in a calmer, kinder, more considerate way. She cautions that it is likely we won't like everything we are, but we are willing to face ourselves, and that in itself is courageous and motivating. "We think that by not accepting ourselves, we will somehow push forward and be our best selves, but actually, this is a form of denial, and denial and intolerance actually block progress." In fact, if we refuse to accept where we are, we are starting from a premise of shame and intolerance. Instead, when we have a more realistic picture of ourselves and our circumstances, we lay the foundations from which we can truly develop and flourish. As the founding father of humanistic psychotherapy, Carl Rogers, famously noted: "the curious paradox is that when I accept myself just as I am, then I change."" (Prendergast, 2021, paras. 14–16)

Happy New Year, and here's to more self-reflection and being more accepting of ourselves in 2021!

Kasun
https://medium.com/@kasunblue/navigating-a-triaspora-619c17a3c4ad
['Kasun Medagedara']
2021-01-27 05:03:01.999000+00:00
['Diaspora', 'Ethnicity', 'Asia', 'Racism', 'Colorism']
Build location-aware apps in minutes: maximize return on your GIS investment with Fulcrum
With the mobile revolution, more and more users in field roles are discovering the power of location-based data. Whether it is used to trigger a workflow or process, or simply to help visualize information on a map, users, particularly those who work in the field, are increasingly demanding applications that use location-based data to make their jobs easier and more efficient. By integrating location-aware apps with your existing GIS tools, you can do things like:

· Track an inspector in the field right down to where she was standing when she took a picture
· See how the distance traveled by machine installers affects team productivity
· Scale existing paper-based or spreadsheet-based data collection processes to orders of magnitude more people, while getting better productivity and higher-quality data
· Use location data to kick off the next step in a process
· Empower field workers with maps that help them easily visualize an end-to-end process or task

The complexity of GIS tools, especially their ability to enable "location-aware" applications through integrations with application platforms (which often carry their own added complexity), often makes it difficult to keep up with demand for more apps that harness the power of GIS in the field. This limits the deployment of new applications that drive value across the organization and often requires ongoing manual integration of geolocation data between the application and the GIS system. The result is a less-satisfied workforce and a reduced return on GIS investments.

To meet the demand in the field and unlock new value from GIS investments, companies need to cut the time it takes to build and deploy mobile apps from days or months down to minutes. To reach this level of agility, GIS experts must tap into location-aware no-code application development platforms such as Fulcrum that provide a straightforward way to send high-quality data into your GIS platform, as well as get that information out to provide real-time location context to field workers. Fulcrum's unique set of native geospatial application development, data capture, workflow, and reporting capabilities allows customers to put location data into the hands of more people when and where they need it. And our no-code application environment makes it easier than ever to create new applications quickly to serve field users better. See how some of Fulcrum's customers in the public works, electric utilities, oil and gas, and telecom industries have achieved this kind of success.

Electric Utilities: Northpower

Northpower is one of the largest multi-utility contractors in New Zealand, performing 85,000 pillar inspections over a three-year cycle. Before implementing Fulcrum, the crews used paper-based maps to log and navigate to each pole, 10% of which would be hidden by vegetation overgrowth and go uninspected. Further, ad-hoc administrative support time was needed to upload and process their inspection results, which were exported individually from ArcGIS on devices to CSV files. Now, using fixed locations in Fulcrum, every pillar is found and inspected the first time around, eliminating double site visits. The platform has also increased the efficiency of their inspections, enabling them to complete more work with fewer people and improving their margins. Read the case study here.
Water & Wastewater: Brown & Caldwell Environmental engineering firm Brown & Caldwell was hired by DeKalb County, GA officials to locate and inspect the county’s 60,000 manholes as part of a sanitary collections system overhaul. Before adopting Fulcrum, the team had to gather historical data from record drawings, upload it into their existing GIS platform, and then export it into paper map books. The inspection crews would then try to locate the structures, take photos, and inspect them using pen and paper forms. Another team would follow with GPS equipment to gather precise location and elevation information. Once Brown & Caldwell implemented Fulcrum, inspection crews could locate the manholes using integrated GIS mapping on their mobile device, and photos captured were automatically connected to the corresponding assets. The team estimated that they needed almost 30% fewer hours to manage and QA their data once they started using Fulcrum, which has improved their delivery schedules and reduced costs. Read the case study here. Public Works & Planning: Fresno County When officials in Fresno County, CA, faced unprecedented wildfire damage, they needed to respond quickly. They were able to create a custom app in Fulcrum based on a FEMA damage-assessment worksheet in a matter of minutes, and soon they had dozens of volunteers using it in the field. The Fresno County folks integrated Fulcrum with their GIS program, Esri, so they could view the live data their volunteers were collecting in real time on the map. They were also able to export thousands of records of their existing assets from Esri into Fulcrum so volunteers could quickly get to each location and update the data or snap a photo, without having to create a new record or input GPS data. “We could do something like this with our existing GIS platform, but we just don’t have the luxury of time to deploy such an application during this emergency period,” said Public Works and Planning IT Manager Thanaphat “Pat” Srisukwatana. “But with Fulcrum, it’s ready to go out of the box.” Read the case study here. Oil & Gas: Premier Utility Services A global contractor specializing in energy infrastructure, Premier Utility Services implemented Fulcrum prior to a large gas meter inspection and leak detection project in upstate New York that involved surveying more than 3,300 miles of pipeline and half a million gas meters. Premier was already using CARTO as part of their internal GIS, and leveraged Fulcrum webhooks to push field data directly to CARTO. They also developed custom dashboard applications using the Carto.js API, which gave Premier and their clients the ability to easily view and analyze their data. Once an inspection had been scheduled by office staff, that information was instantly available to field staff on their mobile devices. Read the case study here. Telecom: Celerity Telecom contracting company Celerity is a premier provider of OSP engineering, aerial and underground construction, splicing and testing services, and next mile wireless networks. Their technicians collect massive amounts of data in the field. Celerity conducts route maintenance over long stretches (up to 200 kilometers at a time), which required employees to carry multiple books of maps on the road with them. But with Fulcrum, they were able to import their existing maps into their mobile devices so they could easily see the entire route and leave the cumbersome paper books behind. 
Not only did it improve their productivity in the field, it also saved them a considerable amount of time managing administrative tasks back in the office. Read the case study here. Interested in seeing how to send data collected in Fulcrum into Esri ArcGIS Online with just one click? Watch this webcast! This post originally appeared on the Fulcrum blog.
https://blog.spatialnetworks.com/build-location-aware-apps-in-minutes-maximize-return-on-your-gis-investment-with-fulcrum-7d1b68db3bc7
['Sam Puckett']
2021-03-08 18:38:32.325000+00:00
['Infrastructure', 'Utilities', 'Location Intelligence', 'No Code Development', 'GIS']
Compatibility between PostgreSQL and Oracle regarding Data Sampling and Desensitization
Background

Data sampling and desensitization are common testing requirements. For example, when using online services to create a test database, you cannot simply extract the entire database; sensitive data must be masked or encrypted first. Consider the following Oracle example.

SELECT COUNT(innerQuery.C1)
FROM (
  SELECT ? AS C1
  FROM RM_SALE_APPORTION SAMPLE BLOCK (?, ?) SEED (?) "RM_SALE_APPORTION"
) innerQuery

The general syntax is:

SAMPLE [ BLOCK ] (sample_percent) [ SEED (seed_value) ]

A variant of the SAMPLE clause is SAMPLE BLOCK, where each block of records has the same chance of being selected, 20% in our example. Since records are selected at the block level, this offers a performance improvement for large tables and should not adversely impact the randomness of the sample.

sample_clause: lets you instruct Oracle to select from a random sample of rows from the table, rather than from the entire table.
BLOCK: instructs Oracle to perform random block sampling instead of random row sampling.
sample_percent: a number specifying the percentage of the total row or block count to be included in the sample. The value must be in the range .000001 to (but not including) 100.

Restrictions on Sampling During Queries

You can specify SAMPLE only in a query that selects from a single table. Joins are not supported. However, you can achieve the same results by using a CREATE TABLE ... AS SELECT query to materialize a sample of an underlying table and then rewrite the original query to refer to the newly created table sample. If you wish, you can write additional queries to materialize samples for other tables. When you specify SAMPLE, Oracle automatically uses cost-based optimization. Rule-based optimization is not supported by this clause.

Caution: The use of statistically incorrect assumptions when using this feature can lead to incorrect or undesirable results.

PostgreSQL also provides a sampling function, as shown below.

TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ]

BERNOULLI: corresponds to Oracle SAMPLE(), row-based sampling.
SYSTEM: corresponds to Oracle SAMPLE BLOCK(), block-based sampling.
REPEATABLE: corresponds to Oracle SEED, the sampling seed.

Test Data

Consider the following as test data.
postgres=# create table test(id int primary key, username text, phonenum text, addr text, pwd text, crt_time timestamp);
CREATE TABLE
postgres=# insert into test select id, 'test_'||id, 13900000000+(random()*90000000)::int, '中国杭州xxxxxxxxxxxxxxxxxx'||random(), md5(random()::text), clock_timestamp() from generate_series(1,10000000) t(id);
INSERT 0 10000000
postgres=# select * from test limit 10;
 id | username | phonenum    | addr                                        | pwd                              | crt_time
----+----------+-------------+---------------------------------------------+----------------------------------+----------------------------
  1 | test_1   | 13950521974 | 中国杭州xxxxxxxxxxxxxxxxxx0.953363882377744 | 885723a5f4938808235c5debaab473ec | 2017-06-02 15:05:55.465132
  2 | test_2   | 13975998000 | 中国杭州xxxxxxxxxxxxxxxxxx0.91321265604347  | 7ea01dc02c0fbc965f38d1bf12b303eb | 2017-06-02 15:05:55.46534
  3 | test_3   | 13922255548 | 中国杭州xxxxxxxxxxxxxxxxxx0.846756176557392 | 7c2992bdc69312cbb3bb135dd2b98491 | 2017-06-02 15:05:55.46535
  4 | test_4   | 13985121895 | 中国杭州xxxxxxxxxxxxxxxxxx0.639280265197158 | 202e32f0f0e3fe669c00678f7acd2485 | 2017-06-02 15:05:55.465355
  5 | test_5   | 13982757650 | 中国杭州xxxxxxxxxxxxxxxxxx0.501174578908831 | b6a42fc1ebe9326ad81a81a5896a5c6c | 2017-06-02 15:05:55.465359
  6 | test_6   | 13903699864 | 中国杭州xxxxxxxxxxxxxxxxxx0.193029860965908 | f6bc06e5cda459d09141a2c93f317cf2 | 2017-06-02 15:05:55.465363
  7 | test_7   | 13929797532 | 中国杭州xxxxxxxxxxxxxxxxxx0.192601112183183 | 75c12a3f14c7ef3e558cef79d84a7e8e | 2017-06-02 15:05:55.465368
  8 | test_8   | 13961108182 | 中国杭州xxxxxxxxxxxxxxxxxx0.900682372972369 | 5df33d15cf7726f2fb57df3ed913b306 | 2017-06-02 15:05:55.465371
  9 | test_9   | 13978455210 | 中国杭州xxxxxxxxxxxxxxxxxx0.87795089604333  | cbe233f00cdd3c61c67415c1f8691846 | 2017-06-02 15:05:55.465375
 10 | test_10  | 13957044022 | 中国杭州xxxxxxxxxxxxxxxxxx0.410478914622217 | cdf2f98b0ff5a973efaca6a82625e283 | 2017-06-02 15:05:55.465379
(10 rows)

Sampling

For efficient sampling in versions earlier than 9.5, see Data Sampling in PostgreSQL. In version 9.5 and later, use the TABLESAMPLE syntax for sampling (note that the sampling filter is applied before the WHERE condition filter). The syntax is as follows; refer to this page for more details.

TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ]

sampling_method specifies the sampling method.
argument specifies the parameters, such as the sampling percentage.
REPEATABLE(seed) specifies the random sampling seed; if the seed is the same, multiple sampling requests return the same result. If REPEATABLE is omitted, a new seed value is used each time, producing different results.

Example 1) BERNOULLI (Percentage) Sampling

Scan the full table and return the sampling result according to the sampling percentage parameter.

postgres=# select * from test TABLESAMPLE bernoulli (1);
 id  | username | phonenum    | addr                                         | pwd                              | crt_time
-----+----------+-------------+----------------------------------------------+----------------------------------+----------------------------
 110 | test_110 | 13967004360 | 中国杭州xxxxxxxxxxxxxxxxxx0.417577873915434  | 437e5c29e12cbafa0563332909436d68 | 2017-06-02 15:05:55.46585
 128 | test_128 | 13901119801 | 中国杭州xxxxxxxxxxxxxxxxxx0.63212554808706   | 973dba4b35057d44997eb4744eea691b | 2017-06-02 15:05:55.465938
 251 | test_251 | 13916668924 | 中国杭州xxxxxxxxxxxxxxxxxx0.0558807463385165 | 71217eedce421bd0f475c0e4e6eb32a9 | 2017-06-02 15:05:55.466423
 252 | test_252 | 13981440056 | 中国杭州xxxxxxxxxxxxxxxxxx0.457073447294533  | 6649c37c0f0287637a4cb80d84b6bde0 | 2017-06-02 15:05:55.466426
 423 | test_423 | 13982447202 | 中国杭州xxxxxxxxxxxxxxxxxx0.816960731055588  | 11a8d6d1374cf7565877def6a147f544 | 2017-06-02 15:05:55.46717
......
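The REPEATABLE clause described above is easy to verify against this test table; the following is a minimal sketch (the seed value 42 is arbitrary). With the same seed, repeated sampling requests return the same rows, while omitting REPEATABLE produces a fresh sample on each run:

-- Same seed, same sample: both queries return identical results
-- (assuming the table has not changed in between)
select count(*) from test TABLESAMPLE bernoulli (1) REPEATABLE (42);
select count(*) from test TABLESAMPLE bernoulli (1) REPEATABLE (42);

-- No seed: a new random sample (and usually a slightly different count) each time
select count(*) from test TABLESAMPLE bernoulli (1);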
Example 2) SYSTEM (Percentage) Sampling Perform block-based sampling to return the sampling result according to the percentage of sampling parameters (all records in the sampled data block are returned). Therefore, the dispersion of SYSTEM is lower than that of BERNOULLI, but the efficiency is much higher. postgres=# select * from test TABLESAMPLE system (1); id | username | phonenum | addr | pwd | crt_time ---------+--------------+-------------+------------------------------------------------+----------------------------------+---------------------------- 6986 | test_6986 | 13921391589 | 中国杭州xxxxxxxxxxxxxxxxxx0.874497607816011 | e6a5d695aca17de0f6489d740750c758 | 2017-06-02 15:05:55.495697 6987 | test_6987 | 13954425190 | 中国杭州xxxxxxxxxxxxxxxxxx0.374216149561107 | 813fffbf1ee7157c459839987aa7f4b0 | 2017-06-02 15:05:55.495721 6988 | test_6988 | 13901878095 | 中国杭州xxxxxxxxxxxxxxxxxx0.624850326217711 | 5056caaad5e076f82b8caec9d02169f6 | 2017-06-02 15:05:55.495725 6989 | test_6989 | 13940504557 | 中国杭州xxxxxxxxxxxxxxxxxx0.705925882328302 | a5b4062086a3261740c82774616e64ee | 2017-06-02 15:05:55.495729 6990 | test_6990 | 13987358496 | 中国杭州xxxxxxxxxxxxxxxxxx0.981084300205112 | 6ba0b6c9d484e6fb90181dc86cb6598f | 2017-06-02 15:05:55.495734 6991 | test_6991 | 13948658183 | 中国杭州xxxxxxxxxxxxxxxxxx0.6592857837677 | 9a0eadd056eeb6e3c1e2b984777cdf6b | 2017-06-02 15:05:55.495738 6992 | test_6992 | 13934074866 | 中国杭州xxxxxxxxxxxxxxxxxx0.232706854119897 | 84f6649beac3b78a3a1afeb9c3aabccd | 2017-06-02 15:05:55.495741 ...... To customize a sampling method visit this website. Desensitization Many desensitization methods are available for the many different scenarios in which users must desensitize data. Common examples include: 1) Use asterisks (*) to hide the content in the middle of a string but keep the original length. 2) Use asterisks (*) to hide the content in the middle of a string but do not keep the original length. 3) Return the encrypted value. In all cases, the desensitization operation converts the original value to the target value. PostgreSQL allows using functions to implement such conversion. For different requirements, just write different conversion logic. For example, use asterisks (*) to hide the content in the middle of a string, with only the first two and the last one characters of the string displayed. select id, substring(username,1,2)||'******'||substring(username,length(username),1), substring(phonenum,1,2)||'******'||substring(phonenum, length(phonenum),1), substring(addr,1,2)||'******'||substring(addr, length(addr),1), substring(pwd,1,2)||'******'||substring(pwd, length(pwd),1), crt_time from test TABLESAMPLE bernoulli (1); id | ?column? | ?column? | ?column? | ?column? 
| crt_time
---------+-----------+-----------+-------------+-----------+----------------------------
      69 | te******9 | 13******5 | 中国******9 | c0******2 | 2017-06-02 15:32:26.261624
     297 | te******7 | 13******2 | 中国******1 | d9******6 | 2017-06-02 15:32:26.262558
     330 | te******0 | 13******5 | 中国******3 | bd******0 | 2017-06-02 15:32:26.262677
     335 | te******5 | 13******5 | 中国******6 | 08******f | 2017-06-02 15:32:26.262721
     416 | te******6 | 13******6 | 中国******2 | b3******d | 2017-06-02 15:32:26.26312
     460 | te******0 | 13******4 | 中国******8 | e5******f | 2017-06-02 15:32:26.26332
     479 | te******9 | 13******1 | 中国******1 | 1d******4 | 2017-06-02 15:32:26.263393
     485 | te******5 | 13******0 | 中国******3 | a3******8 | 2017-06-02 15:32:26.263418
     692 | te******2 | 13******9 | 中国******4 | 69******8 | 2017-06-02 15:32:26.264326
    1087 | te******7 | 13******9 | 中国******3 | 8e******5 | 2017-06-02 15:32:26.266091
    1088 | te******8 | 13******8 | 中国******7 | 37******e | 2017-06-02 15:32:26.266095
    1116 | te******6 | 13******8 | 中国******2 | 4c******3 | 2017-06-02 15:32:26.266235
    1210 | te******0 | 13******4 | 中国******8 | 49******c | 2017-06-02 15:32:26.266671
......

For a more complex conversion, write a PostgreSQL UDF to change the field values (a sketch appears at the end of this section). There are also many ways to extract the sampling results to other platforms, such as copying to stdout or using ETL tools. Consider the example below.

psql test -c "copy (select id, substring(username,1,2)||'******'||substring(username,length(username),1), substring(phonenum,1,2)||'******'||substring(phonenum, length(phonenum),1), substring(addr,1,2)||'******'||substring(addr, length(addr),1), substring(pwd,1,2)||'******'||substring(pwd, length(pwd),1), crt_time from test TABLESAMPLE bernoulli (1) ) to stdout" > ./sample_test.log

less sample_test.log
54   te******4 13******4 中国******3 52******b 2017-06-02 15:32:26.261451
58   te******8 13******6 中国******3 23******a 2017-06-02 15:32:26.261584
305  te******5 13******6 中国******9 c0******4 2017-06-02 15:32:26.262587
399  te******9 13******5 中国******4 71******7 2017-06-02 15:32:26.26298
421  te******1 13******0 中国******4 21******3 2017-06-02 15:32:26.263139
677  te******7 13******5 中国******5 e2******7 2017-06-02 15:32:26.264269
874  te******4 13******9 中国******2 a6******9 2017-06-02 15:32:26.265159
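As a sketch of such a UDF, the following wraps the same keep-first-two, keep-last-one masking rule used above into a reusable SQL function (mask_middle is a name chosen here purely for illustration):

create or replace function mask_middle(val text) returns text as $$
  -- keep the first two characters and the last one; hide the middle
  select substring(val, 1, 2) || '******' || substring(val, length(val), 1);
$$ language sql strict immutable;

-- usage: equivalent to the substring expressions in the earlier query
select id, mask_middle(username), mask_middle(phonenum), mask_middle(addr), mask_middle(pwd), crt_time
from test TABLESAMPLE bernoulli (1);

More involved rules (format-preserving masking, hashing, tokenization) follow the same pattern: one immutable function per rule, applied in the export query.

References Original Source: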
https://medium.com/@alibaba-cloud/compatibility-between-postgresql-and-oracle-data-sampling-and-desensitization-67cf7ceec7af
['Alibaba Cloud']
2020-06-10 03:08:52.268000+00:00
['Database', 'Postgresql', 'Alibabacloud', 'Oracle', 'Data Sampling']
Form best practices — the Do’s and Don’ts in form design
What does 'form' mean and where is it used?

In general, a form is a box with labels and fields, plus call-to-action buttons. Below are some examples of forms used in many parts of websites and applications:
· Login/registration
· Purchase order
· Ticket reservation
· Room booking
· Payment checkout
· Newsletter/subscription
· Consultation/audit request
· Donation
· Survey
· Custom form

So, how do you design a user-friendly form?

In general, form design hides a lot of hard and painful points on the user's side, so it is essential to design forms that are user-friendly. Let's start by discussing the two main rules of form design:
· First of all, the best practice for designing forms is to do user testing and never neglect a user's opinion, even if you disagree.
· User testing helps you measure the time of interaction with the form, detect pain points, and observe how users work with the interface, where they get stuck, and where they can't find a necessary element.
Performing these activities helps beginners avoid the biggest and most ignorant mistakes in the form world.
https://uxdesign.cc/form-best-practices-8e560e9f8bd0
['Victoria Verner']
2019-07-28 13:37:05.321000+00:00
['Product Design', 'Usability', 'Ts', 'UX', 'UI']
Canary Release Patterns for APIs, Applications, and Services with Ambassador
With an increasing number of organisations adopting the Ambassador API Gateway as their main ingress solution on Kubernetes, we are now starting to see a series of patterns emerge in relation to continuous delivery and the testing and releasing of new functionality. The canary releasing pattern is one of the most popular, and so in this article I will outline a series of related sub-patterns, implementation details, and things to watch out for with this approach.

Canary Prerequisites

The canary releasing pattern has been made popular by the cloud native vanguard, such as Netflix, Amazon, and Google, but obviously not every organisation is running with this much experience or as many customers. From my experience, canary releasing is a genuinely useful pattern, but there are several prerequisites:

Basic automated (continuous) delivery pipelines: canary releasing relies on deploying and running multiple versions of a service, which is challenging to do (and not much fun) without some form of automation. Implementing a delivery pipeline that does basic continuous integration and quality assurance on a service will also reduce the number of issues you see within your canary experiments.

Basic observability: in order to decide if a canary release is causing problems you will need to implement basic monitoring (observability) that provides you with data in relation to the golden signals (latency, traffic, errors, and saturation). For example, you don't want to continue migrating traffic to a canary service that is causing the CPU or I/O to spike above tolerable levels.

A codified definition of business success: in a similar fashion to basic observability, you will need to identify a series of metrics that indicate success in relation to a business hypothesis or goal. These are often referred to as key performance indicators (KPIs). For example, when rolling out new functionality you will typically have some kind of goal in mind, such as an increase in customer spending or a lowering of user complaints. Even if you aren't experimenting with improving a specific business goal (for example, you are simply releasing code with architectural improvements) you will still want to be able to monitor for negative impact on your metrics of success, and roll back the release if this occurs.

A sufficiently large amount of representative user requests: although the likes of Netflix receive millions of user requests each day, many of us are not building systems for this scale, and this can mean that it is difficult to push enough traffic into a canary release in order to fully validate it. In addition to a sufficiently large traffic volume, you will also have to ensure that you test your canary with diverse and/or representative types of user requests in order to exercise all of the execution paths. For example, if you are canary testing a feature that each customer only uses at most once a year, then you will need a very large number of users in order to generate a statistically significant success rate.

I'm sure there might be a bit of nuance or disagreement with these prerequisites, and so please do reach out to me if you have any comments.

Canary Releasing Patterns

I've written about the fundamentals of canary releasing in several other blog posts, but I often get asked specific questions via the Datawire OSS Slack about how to implement canary releases with Ambassador.
The general approach I have seen taken by organisations follows the path from manual canaries, to semi-automated, to fully-automated canaries (often by way of shadow canarying). So, assuming you have the above-mentioned prerequisites covered, let's examine the typical evolution of canary releasing with Ambassador in more detail.

Step 1: Manual Canary

This approach to canary releasing is typically the easiest, and often the best place to begin your journey. The key mindset change when adopting canaries is to separate the deployment and release of new functionality within a service. You may also want to separate your deployment (build or delivery time) and release (runtime) configuration files and scripts into separate version control repositories, as this leads to a clearer separation of concerns and potentially a cleaner VCS history.

Assuming that you use Kubernetes as your deployment platform of choice, the implementation is as simple as deploying a new "canary" service alongside your existing service, and initially routing no traffic to this. At this point in the canary process, you have not "released" the canary, and you will have only modified your deployment configuration. It's also worth mentioning that you don't necessarily need to perform a rolling update of the service via Kubernetes Deployments at this stage, as this is a completely new service being deployed. You will, however, need to name and label this service appropriately, for example, appending a version number or UUID to the service name to make it unique.

At this point in the process you add a new Mapping for the canary service into your release configuration Kubernetes Ambassador YAML files, and also add an appropriate traffic shifting "weight" property that will route the related percentage of traffic away from the original service to the canary (note that the prefix target URL path is identical for the canary and existing service):

---
apiVersion: ambassador/v1
kind: Mapping
name: shopping-basket-canary
prefix: /basket/
service: shopping-basket-v0023
weight: 10
---
apiVersion: ambassador/v1
kind: Mapping
name: shopping-basket
prefix: /basket/
service: shopping-basket-v0010

You then apply the updated YAML to your cluster via kubectl, or ideally trigger this via a delivery pipeline for your release configuration. The emerging best practice for this style of automating all aspects of configuring a cluster and using a version control repository as the single source of truth is being referred to as "GitOps".

As the specified percentage of user traffic is routed through the canary, you then begin watching your service dashboards and observing top-line metrics (e.g. the number of requests being routed to the canary, request handling latency, 5XX errors, resource usage, etc.) and also related business KPIs (e.g. is the new shopping basket improving checkout conversion?). If the metrics look favourable (including the amount and type of requests being sent to the canary), then you increase the traffic shifting weight in the canary service Mapping, and iterate on this process. If the metrics show a negative impact, then you can change the traffic shifting weight to 0 and investigate the issue. You can then either begin the experiment again, or potentially delete the canary service with the goal of deploying an improved version later.
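To make the "deploy the canary alongside the existing service" step concrete, here is a minimal sketch of what the canary's Kubernetes resources might look like. The labels, port, and image name are assumptions for illustration; the only firm requirement is that the Service name matches the service referenced in the canary Mapping above:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-basket-v0023
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shopping-basket
      version: v0023
  template:
    metadata:
      labels:
        app: shopping-basket
        version: v0023
    spec:
      containers:
      - name: shopping-basket
        image: registry.example.com/shopping-basket:v0023  # assumed image name
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: shopping-basket-v0023   # matches the Mapping's "service" value
spec:
  selector:
    app: shopping-basket
    version: v0023
  ports:
  - port: 80
    targetPort: 8080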
It's worth mentioning that you will need to account for the time lag between incrementing a shift in traffic and being able to observe the effects in the metrics. For example, if you only scrape metrics from your services every 30 seconds, then there could potentially be a 30-second window where your new release is causing an issue and you don't know about it unless you have more fine-grained alerts. You will also typically want to keep an eye on other (supposedly unrelated) alerts that you have configured, as it's not unknown for a subtle change in a canary service to cause an unintended issue in an upstream or downstream service.

Once you are routing 100% of the traffic to the canary service and are happy with the result, you can delete the original service in Kubernetes. You can then change the name of the Mapping, removing the word "canary" and switching this to the appropriate "production" name. (The alternative is to use a "blue/green" style naming convention, which doesn't require the Mapping name to be changed upon a successful release):

---
apiVersion: ambassador/v1
kind: Mapping
name: shopping-basket
prefix: /basket/
service: shopping-basket-v0023

You will also need to make sure that you synchronise all of these config update tidying actions with your VCS and the state of the cluster before you begin more experiments or releases. The canary release cycle then begins again with the testing of the next version of the service.

Step 2: Semi-automated Canary

The next evolutionary step with canary releasing is to automate the shifting of traffic to a new canary candidate from 0 to 100 percent of traffic. This can be implemented at a crude level simply with bash scripts, or ideally triggered via updated config files which are then actioned via a continuous delivery pipeline (for example, using Jenkins pipeline-as-code scripting), and can be as simple as incrementing the percentage of traffic shifted within the Ambassador Mapping at a specified time period, e.g. every 60 seconds. You will still have to manually monitor associated top-line and business metrics, and abort the release if you identify anything negative. Some organisations also incorporate a simple automated canary release rollback into the script. You can implement something similar, providing that you can programmatically access and test the new functionality or associated metrics. I've included a very rough script example below, which provides a (very raw) template as to how this can be implemented:

for i in $(seq 10 10 100); do
  cat <<EOF | kubectl apply -f -
---
apiVersion: ambassador/v1
kind: Mapping
name: shopping-basket-canary
prefix: /basket/
service: shopping-basket-v0023
weight: ${i}
EOF
  # trigger deployment of the new config
  git commit -m "Increment canary of service X to ${i}"
  git push
  sleep 60
  # potential curl check against the canary endpoint
done

When automating the rollout of a canary you will need to ensure that enough user requests have been shifted to the canary release. This can be a particular issue for time-based traffic-shifting increments, where an unexpected reduction in traffic can result in a false negative. For example, if your user traffic fluctuates greatly at off-peak hours, then releasing a canary at that time could mean that it does not see a representative load even with a high percentage of traffic being shifted.
Step 3: The Fully Automated "Robo" Canary

The ultimate goal that many organisations are aiming for is the fully-automated canary release, where developers simply flag an updated service as ready for deployment, define associated operational and KPI metric thresholds, and then rely on the deployment pipeline to autonomously manage the entire canary release/rollback lifecycle. This is what the Netflix team have talked about with Spinnaker and Kayenta, and also what the Weaveworks team are working on with Flagger.

While the fundamentals of this approach are not much different from the semi-automated canary mentioned above, there is a significant amount of extra sophistication required. This is typically focused around automated metrics collection and analysis, and "outlier detection". Humans are typically very good at eyeballing a series of dashboards and deciding whether a new canary is performing positively or negatively. However, outside of the obvious cases, such as a service crashing or a massive increase in latency, it can be challenging for a computer to determine whether a canary rollout should be continued or halted. Another often underestimated problem is identifying the traffic and data thresholds that are sufficient to make a decision. During a canary release, every tick of the clock provides incrementally more information about the canary. At some point, though, enough data has been gathered and a decision should be made. How exactly do you identify that point?

In the real world, we've encountered very few organizations who have successfully implemented fully automated canary testing, and I see several barriers to widespread adoption. The first is the lack of general adoption of sophisticated traffic shaping systems such as Ambassador, although the prevalence of this issue is rapidly decreasing as organizations adopt more cloud-native infrastructure. The second is the actual metrics analysis technology, which tends to be domain-specific, and is typically a challenging engineering problem to implement successfully. The final barrier we see is cultural: getting organizations to accept the notion of testing in production, and trusting automated systems to get it right.

Alternative Patterns

Not every organisation follows the evolution described above. For example, some opt to begin their journey with the semi-automated canary. Others encounter different challenges, and may instead adopt some of these alternative patterns.

Not Comfortable Testing In Production? Use a Shadow Canary

Several of the organisations we have been working with are not yet fully comfortable with testing with production traffic, or can't do this for regulatory or compliance reasons. An alternative pattern that still provides some of the benefits is the "shadow canary". With this approach you can either deploy your canary service into production or within a staging environment, but you shadow or "mirror" a percentage of production traffic to the canary. Although you observe the canary, you do not return any of the results to the end user, and all of the related side effects are enacted on non-production systems or are mocked or implemented as no-ops. With this pattern 100% of the production user requests are served by the original service running in production at all times. This often satisfies concerns around testing in production, but in some cases you may also have to anonymize data within user requests before it can be sent to a non-production environment.
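As a minimal sketch of the shadow pattern with Ambassador (reusing the hypothetical shopping-basket service names from earlier), a Mapping can be marked with the shadow attribute so that a copy of matching traffic is mirrored to the canary, while the original service continues to serve every production response:

---
apiVersion: ambassador/v1
kind: Mapping
name: shopping-basket-shadow
prefix: /basket/
service: shopping-basket-v0023
shadow: true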
Ambassador Pro supports the addition of custom "filters" to the request handling process, which can be used to implement data cleansing and transformation. Upon a successful shadow canary release test, you can then deploy and release the newly canaried service using your existing processes.

Advanced Shadowing: Tap Compare

You can implement a shadow canary in a manual, semi-automated, or fully-automated manner. The fully-automated approach enables the use of additional techniques such as "tap compare" testing, where you verify that the response of a new canary candidate does not deviate inappropriately from the existing version (often by routing to an additional secondary instance of the existing service in order to identify the "non-deterministic noise" in the response, such as UUIDs and dates).

Diagram showing how Diffy operates (taken from Twitter Engineering Blog)

This is an approach that was popularised by Twitter and their Diffy tool, and there are also additional implementations such as Diferencia.

Limited by Traffic Volume? Use Synthetic Canarying

It is not uncommon for organisations to have important services that receive either a small number of user requests or a small number of a specific type of request, and therefore canary releasing isn't an effective testing strategy. An alternative pattern that helps in this situation is the "synthetic canary", which relies on automatically generating synthetic user requests, or "synthetic transactions", that are sent to the canary. This pattern is often combined with the "shadow canary" pattern discussed above. You can deploy a canary release into production or a staging environment, and only allow access to it from requests with a specific header set or with specific authentication/authorization (i.e. no production traffic will be sent to the canary). You can then run a series of tests, synthetic transactions, or a load test that targets the canary release, and observe the results. This can be a manual process, or you can semi- or fully-automate it.

Conclusion

This article has presented a whistle-stop tour of how I see organisations implementing the canary releasing of functionality with the Ambassador API gateway. There is no single "right way" to begin the journey of testing functionality with real user traffic, but there are definitely several gotchas to watch out for, such as the need for the prerequisites mentioned. The Datawire team are working on related solutions with a number of Ambassador users, so please get in touch if you want to share any experiences or comments on implementing automated canary releasing. As usual, you can also ask any questions you may have via Twitter (@getambassadorio), Slack, or raise issues via GitHub.
https://blog.getambassador.io/canary-release-patterns-for-apis-applications-and-services-with-ambassador-51174b3fab4e
['Daniel Bryant']
2019-04-04 15:24:35.946000+00:00
['Cloud Native', 'Continuous Delivery', 'Api Gateway', 'Canary Release', 'Kubernetes']
Hopeful for the Knicks
Yes, it's preseason. Yes, it's the Cavs. But for the first time since Phil Jackson was still on the Lakers payroll, I'm legitimately hopeful for the Knicks' future. Because for the first time since Mark Jackson was a Knick, we may finally have our long-term solution at PG.

The drafting of Immanuel Quickley, a 6'3" shooting guard, raised a lot of eyebrows. But playing him at point has been a revelation. The ball moves in a way not seen since Linsanity was hitting MSG, and the improvement in the games of Knox and Barrett over the offseason really emerges. A legitimate point guard (even a league-average one) would allow Knox to focus on just getting into the right spots to make good shots. The same would allow RJ to focus on getting his shot off, with an emphasis on an improved shot form. When RJ was drafted, I felt that he and Knox could become the Knicks' version of Tatum/Brown, and with a PG to shepherd them, this scenario becomes more realistic.

The lineup of Quickley, Barrett, Knox, Toppin, and MitchRob was dynamite, not to mention young (Toppin is the oldest). All of them are also on their rookie contracts (something the Knicks have been known to screw up, à la Porzingis). Assuming Thibs can coach the kids up, this year is going to be fun to watch and I'm actually… hopeful.

Do I think they make the playoffs? Doubtful. But a run would be great. And even if they don't, they would be in a lottery that no longer rewards the worst team. Cade Cunningham could easily slot into this lineup, pushing either Knox or Quick to the bench. Or Mitch sits down and they go small.

Go Knicks!
https://medium.com/@johnyuan1/hopeful-for-the-knicks-ddbee9a08fc0
['John Yuan']
2020-12-21 21:13:57.873000+00:00
['Immanual Quickley', 'Cade Cunningham', 'NBA', 'New York', 'Knicks']
Laravel Firebase Push Notification
This tutorial gives a basic example of sending Firebase push notifications from Laravel. We will look at how to send a Firebase push notification using Laravel; just follow the steps below. You can also use the same approach in Laravel 6, Laravel 7, and Laravel 8.

Firebase provides a realtime database and backend as a service. The service gives application developers an API that allows application data to be synchronized across clients and stored in Firebase's cloud. Firebase push notifications are free and open, and you can easily set them up using your Google account. Here we will first collect the device tokens of logged-in users, and then send them a web push notification. So let's follow the steps below to add push notifications to a Laravel application.

Step 1: Create a Firebase Project and App

In the first step, go to the Firebase Console and create a project, then create a web app within that project (shown in a screenshot in the original post). After giving it a name and clicking next, you will get the Firebase SDK configuration. Save all of that data, since we will use it later in our application.

Step 2: Install Laravel

Now, install a fresh Laravel application through Composer or the Laravel CLI (the project name example-app is a placeholder):

composer create-project laravel/laravel example-app
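The remaining steps build towards actually sending the notification. As a hedged preview, the sketch below pushes a message through the legacy FCM HTTP endpoint using the Http client available in Laravel 7+ (on Laravel 6, Guzzle can be used directly). The services.fcm.server_key config entry, the controller name, and the device tokens are placeholders; the server key itself comes from the Firebase console under Project Settings > Cloud Messaging:

<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Http;

class PushNotificationController extends Controller
{
    // Send a notification to one or more device tokens collected
    // from logged-in users, via the legacy FCM HTTP endpoint.
    public function send(array $deviceTokens, string $title, string $body): bool
    {
        $response = Http::withHeaders([
            'Authorization' => 'key=' . config('services.fcm.server_key'),
            'Content-Type'  => 'application/json',
        ])->post('https://fcm.googleapis.com/fcm/send', [
            'registration_ids' => $deviceTokens,
            'notification'     => [
                'title' => $title,
                'body'  => $body,
            ],
        ]);

        return $response->successful();
    }
}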
https://medium.com/@indranandjha/laravel-firebase-push-notification-b513bc6695b4
['Indra Nand Jha']
2020-12-05 16:27:06.999000+00:00
['Laravel', 'Firebase']
AQARCHAIN TOKEN ECONOMICS
Tokenization is basically the process of representing physical as well as non-physical assets on the blockchain. The trend has already taken hold in traditional industries like real estate. So, why do we need real estate tokenization in the first place?

For example, suppose you own a house and suddenly need urgent money. What are you going to do? You need only $10,000, while your apartment has a value of $100,000. Is it possible for you to use your property to obtain the money? This is where the importance of asset tokenization comes into the picture! Let's discuss how tokenization works here, helping convert ownership rights in an asset into digital tokens.

Aqarchain Token Economics

Asset tokenization can help you convert your $100,000 apartment into divisible security tokens. Each token would carry one (1) share of the apartment; if the apartment is divided into 100 shares, each token represents $1,000 of value. If you purchase 10 tokens, you will get 10% ownership of the asset.

About Aqar Chain Tokens
Blockchain: Tezos
Token Type: FA1.2 | FA2.2 Token Protocol
Token Name: AQAR Token
Token Symbol: AQR
Token Supply: 100 million Tokens
Token Price: $0.45

Who are the primary token holders?
Team members, partners, advisors
Initial investors who participated in the private round and public round
Vendors and clients

Token Sale Stages
https://medium.com/aqarchain/aqarchain-token-economics-ba8183119966
['Smartchain Llc']
2021-06-08 12:01:25.978000+00:00
['Utility Tokens', 'Staking', 'Tokenization', 'Token Sale', 'Token Economics']
Fame and Former Successes May Be the Biggest Hindrance to Entrepreneurial Progress
The explosion that broke the camel’s back The best thing about flying across the country, leaving my old corporate life and colleagues behind, and embarking on my flight of (initially failed) startups from the solitude of my Marina del Rey studio apartment was the isolation. Pursuing entrepreneurship in a vacuum might sound like a horrible idea, but some people — I’d venture to guess many — work best on their own, without the scrutiny of a watchful boss, judgmental colleagues, or a pressure-enhancing audience. Unfortunately, some startups don’t allow that secrecy, isolation, and anonymity — and for one of my clients, the transition to a public launch became her downfall. This client wasn’t your average product-hawking entrepreneur; she did have a plan for a line of future products, but initially, she was selling services in the entertainment space. We identified what we thought was her ideal customer audience, strategized a launch plan, and she began building out the service side of her business online. For the first few weeks, her business was doing well, but it was far from exploding. Then, something unexplainable happened, and seemingly overnight her business blew up. Well, it wasn’t just her business that blew up — it was the services she’d provided, the audience of invested fans, followers, and customers, and her very own name. This person definitely didn’t pursue business for the attention; she merely came up with a unique service to tap into an underserved market that she had unusually direct access to, so she decided to give it a try. If anything, she was building it in spite of the potential attention. When I say her audience blew up, I’m talking she went from tens to hundreds to thousands to tens of thousands of people patronizing her services in a matter of months — with zero advertising and barely any marketing at all. In fact, the services largely sold themselves, spreading through word of mouth or SEO algorithms perhaps, plus a few unsolicited features in relevant publications. Financially speaking, things were going very well for her — and given her lack of marketing expenses, her profit margin was through the roof. That’s when she experienced something new and equally unexpected: a negative review. Statistically speaking, it would have been nearly impossible for her to grow this fast without accumulating a few less-than-stellar reviews. However, the handful of negative reviews sprinkled among the hundreds of positive ones weren’t the biggest problem. The much graver problem at hand was the size of the audience itself — and the heightened stakes and elevated expectations to which my client felt she had to live up. She contemplated shutting the entire business down — saying she was “closed for orders” until the frenzy went away. Just a tip — from entrepreneur to entrepreneur — shutting down your entire business in the heat of its explosive growth is typically not a winning move. Especially if you may have been algorithmically blessed and awarded free, zero-cost customers flocking to your offerings. As far as I could see, this was her time to ride this wave all the way to the top. It wasn’t as if she hadn’t proven her concept; she had — many times over. That said, service businesses may suffer from a component that few entrepreneurs talk about: performance anxiety. 
With the increased pressure of a large audience potentially watching her company’s every move — in terms of services added, customer testimonials, and new product launches — she felt paralyzed in her very own business. She doubted that she could ever again provide the same high-quality services she’d successfully provided to tens of thousands of people over the past few months. She’d expanded her team a bit — to keep up with the accelerated demand — and the added pressure of supporting her new employees weighed on her further. This client is a perfect example of an entrepreneur traumatized by her very own public success — or “fame” on a small scale. Thinking back to my earlier failed ventures, it probably isn't a shocker that I’d rather have failed quietly in isolation than on a public stage with an audience to watch, jeer, and “boo” with my every misstep. It wasn’t until years later that I felt confident enough to come clean about my startup failures — which ironically enough, became one of my biggest claims to fame. Even Harry Potter acknowledges the curse of fame My two guilty content consumption pleasures range from firsthand prison exposés to interviews with former child stars. Last night, it was child stars with none other than Harry Potter’s Daniel Radcliffe. I watched the interview at double speed (the same way I watch everything to maximize the bang for my limited time’s buck), but I paused and rewatched his answer to one question. Does Daniel Radcliffe like being famous? When Daniel Radcliffe was everyone’s favorite little wizard boy, the stakes were fairly low. JK Rowling had already proven the concept, and the success of the movies likely came down to the post-production editing and the marketing circuit. Post-Potter, Daniel started branching out into more mature and artistically complex content. That new, unfamiliar content didn’t sit too well with his former teenage bookworm audience. They captured Daniel in their minds as Harry, and any deviation from that felt like a betrayal, a failure, and a mistake on Dan’s part. That’s why it’s no surprise Daniel answered “Anonymity is a really cool thing. It allows you to do a lot and make mistakes…and not feel like [they’re] all very, very high stakes all the time.”
https://entrepreneurshandbook.co/fame-and-former-successes-may-be-the-biggest-hindrance-to-entrepreneurial-progress-db3687391526
['Rachel Greenberg']
2021-09-10 14:06:09.150000+00:00
['Business', 'Success', 'Social Media', 'Entrepreneurship', 'Startup']
Enterprise: How to tap the true value of 80% unstructured data?
Modern enterprise applications in the digital economy generate a variety of data, including operating system logs, network and security device logs, and business access and transaction data. These data are very large in volume and come in many different formats.

Big data analysis concept

Big data analysis refers to the analysis of large-scale data. Big data can be summarized by the 5 Vs: large data volume (Volume), fast speed (Velocity), multiple types (Variety), value (Value), and authenticity (Veracity). Big data is the most popular IT industry vocabulary nowadays, and the commercial uses built around it, such as data warehousing, data security, data analysis, and data mining, have gradually become the focus of profit sought after by industry professionals. With the advent of the era of big data, big data analysis has also emerged.

Challenges facing big data analysis

IDC data shows that structured data currently accounts for 20% of total enterprise data volume, while 80% is unstructured. How to mine the true value of that 80% of the data, including its hidden risks and commercial value, has become the main goal of big data analysis.

1. Data islands: There is a lot of information in logs, network traffic, and business process data, but they are isolated from each other, and these logs need to be effectively correlated.
2. A wide variety of log formats: Different systems and devices have different log formats. Especially when dealing with unstructured and semi-structured data, quickly normalizing the various types of logs is the primary problem for big data analysis.
3. Department independence: Security, business, and operations teams are independent of each other, and their concerns and perspectives differ. It is difficult for a single data analysis system to meet the needs of the multiple operations roles found in many enterprises.
4. Retrospective analysis and evidence collection: Security incidents occur frequently; to learn from them, teams need to conduct retrospective analysis and collect evidence effectively.
5. Fast iteration: Once an analysis system is developed, it is difficult to adjust data sources, analysis dimensions, report presentation, and so on to adapt to new requirements.
6. Regulatory compliance: Cybersecurity law and security standards impose requirements for log retention and security situational awareness.

"Faced with massive amounts of data, whoever can better process and analyze the data can truly seize the opportunity in the era of big data." This is almost the consensus of everyone in the industry. The analysis of massive data has become a very important and urgent need for enterprises and governments.

Holographic data perspective solution

The holographic data perspective system launched by Pangeo Technology is an analysis platform for multi-source, heterogeneous, massive data. It can collect, organize, archive, and store machine data in any format, and provides data analysis, search, reporting, and visualization capabilities.
For logs from various security devices, network traffic, personnel behavior data, host and application logs, and business system data, the platform applies intelligent analysis methods such as correlation analysis, user behavior analysis, and machine learning to provide business security threat monitoring, application and network security attack monitoring, and operations security monitoring and analysis functions.

Main functions of holographic data perspective
1. Security threat analysis: Analyze and present the current attack situation of the system, including malicious accounts and IP addresses controlled by attackers, the systems under attack, and so on;
2. Business threat perception: Perceive, analyze, and present behaviors that create account and data leakage risks, such as credential stuffing and crawlers, as well as business fraud such as fake-order brushing and flash-sale abuse;
3. Application security audit: Audit abnormal account operations and access behaviors;
4. Operations monitoring support: Provide support for operations work such as application resource monitoring, application access analysis, and operational analysis;
5. Customizable security visualization: Large-screen displays can be provided for real-time monitoring in the monitoring center.

Core advantages of holographic data perspective
1. Full-view monitoring
2. Unstructured index
3. Search engine query
4. Drag-and-drop dashboards
5. Excellent support platform
6. Built-in mathematical statistics and machine learning models
7. Enterprise-level architecture

Conclusion

Big data is everywhere, and data analysis has become crucial in this era of big data; it is the key to realizing the commercial value of big data. Only good data analysis can grasp the lifeblood of industry development in advance and seize the initiative in the market, thereby creating greater value.
https://medium.com/dataprophet/enterprise-how-to-tap-the-true-value-of-80-unstructured-data-3f029497677
['Sajjad Hussain']
2020-10-19 07:45:42.988000+00:00
['Technology', 'Data Science', 'Data Visualization', 'Big Data', 'Data']
Wine Buyer
Competitive Salary, Full Time, must be based in or willing to relocate to Edinburgh (or nearby). About Beer52 We’re the world’s most popular craft beer club, curating a delicious selection of beers for our members from a new country each month. Delivered along with our magazine, Ferment, we have helped more than a million people to explore the exciting world of craft beer. Now, we’re branching into wine and launching a disruptive new concept with an ambitious plan for scale. With a team of over 70, based mainly at our Edinburgh HQ, we are one of the UK’s fastest-growing and most exciting startups. We’re all passionate about craft alcohol and it’s our mission to build the most exciting drinks clubs for our members. To achieve this, we have built a super talented and inclusive team, a great place to work and industry-leading technology and marketing that have secured our position as №1 at what we do. About the Role Beer52 is the largest independent buyer of craft beer in the UK, with longstanding and intimate relationships with several hundred leading brewers from across the globe. Collecting millions of reviews from our customers, our buying team have industry-leading insights into new trends and work closely with suppliers to remain at the cutting edge of new releases by the most sought-after brands. We plan to use a similar model to achieve this level of success in the world of wine. We are looking for an experienced buyer to oversee an annual budget in excess of £5m. To deliver competitive value for money to our customers, you will have keen negotiation skills and a talent for seeking out interesting producers. Our model seeks to uncover exciting wines from undiscovered regions, buying directly from wineries (bulk and bottled stock) to offer exceptional wines in eye-catching labels at an attractive price point (£10/75cl). You will be adept at communicating this unique proposition to new suppliers and partners. Each month will be focused on a different theme or country, making a final selection of 6 wines from a pool of 60 or more samples from suppliers that you seek out. You must have a high level of understanding of wine-making, quality control, bulk wine processing and shipping and handling into our bottling partner’s facility. Launching 100+ new wines every year, your attention to detail and ability to track costs across stock, duty and freight, will ensure you hit commercial targets. You will also be responsible for the on-pack messaging and coordinating with our content team to produce an exciting magazine every month, bringing the stories of the wineries and regions to life. You will work closely with our founders to implement their ambitious plans to make our wine club’s monthly themes and selection into an exciting package for our members. Collaborating with suppliers to uncover lesser-known varietals and techniques will help us to offer a unique experience. About You You should be very organised, enjoy solving problems, with a strong attention to detail. You must have a natural ability to negotiate and a track record of fostering close supplier relationships. We are looking for a clear communicator and someone who is not afraid of rolling up their sleeves, forming a structured plan and helping small suppliers to stay on track with deadlines. You must have prior experience of wine buying in a fast-paced commercial environment and be skilled at thinking independently to solve problems as they arise. 
You must have a good understanding of the wine business and a strong sense of taste in terms of liquid quality and packaging design. We are a high-growth company in a competitive market — you will be comfortable with this pressure to offer the most exciting wines at a competitive price point. You will plan carefully to protect our margins by avoiding overstock, making sure that your communication with our forecasting team is constant. As a business, we have high ambitions for growth. Buying is at the core of what we do and this offers an excellent opportunity to develop your career. Key Requirements: ● 5+ years’ experience as a retail buyer ● Hold a relevant professional wine qualification (e.g. WSET4 or above) ● Understanding of wine styles, production and marketing and ability to assess quality of product ● Ability to negotiate and make good commercial decisions ● Creative ability to write tasting notes, wine descriptions and marketing materials ● General understanding of cost accounting ● Ability to manage a plan with many moving parts ● Understanding of wine quality management and bulk wine buying, shipping and handling ● Strong planning, communication and presentation skills ● Proficient in Excel ● Self-motivated ● Flexible and adaptable ● Ability to work to tight deadlines Benefits ● Competitive salary ● 28 days’ paid holiday per year, paid sick leave, paid bereavement leave, NEST company pension (3% company contribution) ● Travel to wineries, wine fairs and other worldwide events ● Complimentary 12-pack monthly beer and wine club membership ● Retail discount scheme ● Regular team events and in-house ‘Beer School’ How to Apply Please send your cover letter and CV to [email protected] — we look forward to hearing from you!
https://medium.com/@beer52/wine-buyer-54f1cfbb41c0
['Careers Board']
2021-09-02 10:42:01.767000+00:00
['Jobs', 'Wine', 'Edinburgh', 'Careers', 'Scotland']
Integrated marketing and advertising media specialist JS Adways (傑思愛德威) praises the efficiency gains from the enterprise messaging tool JANDI
https://medium.com/jandi-blog-tw/js-adways-marketing-jandi-ef41d168ffc9
['曹凱閔', 'Km Tsao']
2019-07-12 08:41:37.692000+00:00
['Marketing', 'Management', 'Project Management', '企業協作工具', 'Jandi User']
Know Thyselves
I have been reading about the development of human psychology lately. Initially, I started to read this stuff because I wanted to provide some context around the “why” of my actions and behavior. In knowing the historical milieu from which human decision-making came out, I may be better capable of orienting myself to make better choices in my own life. I also wanted to know the role of my emotions and rationality as it applies to my own decision-making. The Evolution of Consciousness by Robert Ornstein explores the evolutionary roots of the mind, how it is that the human mind came to be what it is, and where it is taking us. Central to his thesis is the notion that humans now have the unique capacity to direct biological evolution through the use of the mind. I had a pretty visceral taste of this in real time the other day. I was crossing a road at night, and an oncoming vehicle turned onto the road I was crossing and didn’t see me. I leaped out of the way, pivoting onto the curb. I was an arm’s length away from getting run down. As I was crossing the street, I knew in my periphery that a vehicle was coming down, but I had already pre-decided he was going to see me and give me the right-of-way. I didn’t consider the fact that it was night, and I was wearing dark clothes. When I realized he wasn’t stopping, my whole physiology kicked up into flight mode, causing me to leap for dear life. As he was driving off, he admonished me to start wearing bright clothes. This pissed me off. Moments ago, my life was endangered at the hands of his carelessness, and all he had to show for it was some cheeky advice. Shocked and disoriented, I plopped down onto someone’s lawn, lying there for a while. When I got home later, I read a similar account in the aforementioned book, demonstrating how one can move across various “selves” in a matter of moments, which happens all the time but is easily demonstrated in the midst of a crisis. Upon reflection, I could detect at least four different selves that elapsed within me in a matter of minutes. This is a rather dramatic example, but it drove home the notion of how decentralized the self is, as opposed to the self as a consistent totality. He goes on to explain how these different selves emerged evolutionarily out of a need to adapt to an environment that has long since faded, drawing on the notion of the divided mind. Being conscious of these selves as they operate in a world they are no longer “fitted” to in an evolutionary sense gives us a chance to direct them, instead of being directed by them. This is, I take it, what the author is conveying when he puts forward the idea of conscious evolution. I’m also reading Steven Hayes’s “A Liberated Mind”. I was introduced to Steven on an episode of the Stoa. He was a developer of ACT (Acceptance and Commitment Therapy), a psychotherapy technique that builds on the idea that pivoting our attention to difficult experiences, instead of avoiding them, will help us grow into what is worth caring for, allowing those difficult experiences to be held within the container of awareness. Central to his work is the entrainment of situational awareness. This might be another way of speaking to the need to develop and train consciousness. I arrived at a similar notion to Steven’s recently. I decided that moving forward in my life would be best done by acting out what is most worth caring for. For me, what is worth caring for is wisdom, which I describe as the springboard affording us to make the most responsible choices.
From that center-point, everything else worth caring for, like love, emerges and is nurtured. To be able to embody wisdom means to know what is worth caring about, and to be a witness to one’s own experience. From observation, one can move into decision and action. It turns out Steven’s ACT model also uses the notion of the observer as the point of departure. Since I’ve been meditating and getting to experience my own observer, my own ability to pivot to what matters has not been inconsequential. This pivot has taken me to the realm of relationship. As I wrote in an earlier piece, my desire has been to expand the bandwidth of my caring in depth and in scope. I’m looking forward to how Steven’s accumulated wisdom will guide me through my orientation to act on what matters most.
https://medium.com/@kfolz/know-thyselves-aa0340d1a03e
['Kevin Folz']
2020-12-22 20:31:04.192000+00:00
['Evolution', 'Conciousness', 'Self', 'Act', 'Mind']
Audience Fracked
“Through adorable animated characters, kids can watch videos that are appropriate for a young audience. Swiping right or left shows a new Vine, and you can tap the screen to hear quirky sounds.”
https://medium.com/the-awl/audience-fracked-1169ddc6de6b
['John Herrman']
2016-05-14 01:15:34.772000+00:00
['Vine', 'The Internet', 'Content']
Save $26 on this extremely smart power strip
In order to succeed, we must first believe that we can (Nikos Kazantzakis)
https://medium.com/@Robert33993669/save-26-on-this-extremely-smart-power-strip-cdffcb5b628b
[]
2020-12-20 22:39:10.267000+00:00
['Surveillance', 'Cutting', 'Mobile Accessories', 'Home Tech']
The Paradox of Partial Information
“Most decisions should probably be made with somewhere around 70 percent of the information you wish you had.” – Jeff Bezos, in Amazon’s 2016 Annual Shareholder Letter [1] Certainty can be elusive, and too often, the act of waiting for complete information means that an opportunity passes by. A framework for decision-making can be helpful, as well as input from friends or advisors who share their own experiences, but ultimately many decisions require making a judgment call with partial information. The Price of Potential Investors face these decisions repeatedly. Opting to wait for more information in favor of more certainty — whether the passage and detail of a stimulus bill, the final results of an election, or waiting for a company to prove years of strong earnings — may mean missing the short-term price opportunity. This week stock market prices moved slightly lower following a lackluster jobs report and slowing economic data. [2] Some investors viewed this news as increasing the likelihood of a bipartisan stimulus bill, despite continued evidence of congressional infighting on several points. The key issues remain liability protection for businesses and support for state governments. Congress is scheduled to leave for its holiday break on December 18 — making the coming week a critical stretch for short-term market sentiment. Noteworthy this week were the two IPOs of Airbnb and DoorDash. In recent years, employees joined these companies expecting an eventual IPO, but they lacked clear information about timing or valuation as they considered the employment offer that included some stock compensation in lieu of a higher salary. In both cases, that faith was rewarded. This placed employees (and investors) at another decision point with partial information: the decision of when to sell their company stock. Both companies’ shares quickly jumped higher after their debuts [3], with DoorDash’s price increasing 85% on Wednesday, and Airbnb’s price moving up 112% on Thursday. Employees now face a decision of whether to sell some shares or hold, not knowing if the next move for the stock will be higher still or a reversal of fortunes. [4] Holding means retaining the potential, and possible tax advantages, while selling means locking in certainty — each decision offers its own specific value. The Price of Security Our clients face similar challenges of partial information in their personal lives. An example that has arisen this year is the choice of committing to a particular college or paying an upcoming tuition bill without knowing whether classes will be fully virtual and the physical campus closed. This key information is inherently unavailable when the decision needs to be made. This epitomizes the decisions we all make daily during this pandemic, altering our behaviors without perfect information about precautions that others are taking or where the invisible virus is lurking. We cannot stop all forward movement in our lives, but we can choose to pay a certain price of inconvenience for increased health and security. Another common example our clients face is the complicated decision of helping a partner or parent consider care needs. The decision to purchase long-term care insurance needs to be made when the person is healthy, with uncertainty about whether this protection will be needed. The decision to move into certain CCRCs (continuing care retirement communities) or eldercare communities similarly needs to be made before health begins to deteriorate materially.
The complexity of determining their likely pace of physical or mental decline is tricky to balance with the benefit of staying in a long-familiar environment or with family. Often, too, the priority at one point in time gives way to a new concern when a sudden health event occurs. One way to move forward with these decisions is to focus on what you are getting now. In the instance of insurance, what you are buying is certainty and security. The feeling of security that insurance is in place to ease the financial aspects of a health crisis, the ability for a college student to commit themselves to remote learning and staying on track with a cohort, or the security of knowing that a parent is in the right environment with the right care — all these bring a sense of resolution and security that has value and can be worth paying for. Yet knowing how way leads on to way We value diversification in portfolio management as well as in broader life choices. In a portfolio, we take risks in pursuit of growth with some assets, while emphasizing protection and stability with others. Expanding that lens may mean balancing a high-risk/high-growth career decision with a more stable portfolio allocation. It may mean paying for robust LTC and life insurance coverage even if you are healthy because you have seen the challenges of cognitive decline in your own family. The peace of mind you get in the present can be more meaningful than saving money on premiums. The balance is personal and needs to support your specific landscape of resources and concerns. Timing matters too. Deadlines create clarity, but our most momentous personal decisions often come without a deadline. And yet, a particular decision — whether to move to assisted living, whether to change careers — will look different in the present moment than the next unknown moment when you revisit it. Your sense of readiness for change will impact your timing, and whatever choice you make, you’ll be living with both its benefits and downsides as you move forward. Not making a decision at all is, in fact, a decision to remain on your current path. Robert Frost’s poem The Road Not Taken comes to mind, with his contemplation of two unknown paths that diverge in a wood. The very human dilemma of longing to “…travel both, And be one traveler…” exists in many daily life choices, when we have only partial information about what lies ahead. After choosing one path, Frost says, “Yet knowing how way leads on to way, I doubted if I should ever come back.” Our personal choices shape the landscape as we walk on, leaving some options behind but also bringing us to other choices we couldn’t have previously imagined. More articles at NorthBerkeleyWealth.com
https://medium.com/@northberkeleywealth/the-paradox-of-partial-information-551f135ca4c6
['North Berkeley Wealth Management']
2020-12-11 23:23:38.756000+00:00
['Financial Planning', 'Financial Advisor', 'Decision Making', 'Market Commentary', 'Investment Management']
Taking profits in Bitcoin? Here’s where to keep them safe.
Article provided by Silvertoken.com Bitcoin has experienced a heck of a run over the last month (and year). There’s no point trying to predict how long the run will last, and the only way to protect those profits is to take some off the table. Eventually the momentum in any rally stalls. The problem with Bitcoin is that much of the price action is driven by speculators. Once the current momentum stalls, many of those speculators will want to exit, and it’s entirely possible we will see a large correction. So, if you decide you’re going to take some profits off the table, where do you move that capital? You can move to fiat, another asset class, or another digital asset. Moving to fiat, stocks or another liquid asset class means exiting the crypto market. And moving to different digital assets also poses a problem. Cryptocurrencies are highly correlated — especially the liquid ones. And the correlation increases when prices fall. This isn’t just true for digital assets — it also happens with stocks, bonds, commodities and precious metals. In September when Bitcoin fell 40%, the correlation between Bitcoin and other leading cryptocurrencies jumped. The correlations between Bitcoin and Ethereum, and between Bitcoin and Litecoin, both peaked within a week of Bitcoin’s biggest decline in six months. So, if you want to lock in some of your Bitcoin profits, moving into another purely digital asset may not be a smart move. There is, however, another option — asset-backed cryptocurrencies. (Chart: Bitcoin hovering around all-time highs.) If a digital asset is backed by something liquid, fungible and tangible, it will track the price of that asset rather than the crypto market as a whole. The only asset markets this can really work for are gold and silver. While real estate is also a tangible asset, it’s illiquid and not fungible — no two pieces of land are identical. A token directly supported by a precious metal — whether it’s coins or bars — will track the metal’s price. The price of the token can only deviate by the transaction cost of buying or selling the token. If it deviates more than that, it offers an arbitrage opportunity to traders. There is therefore a real incentive for traders to keep the token price in line with the underlying market. Gold isn’t a bad store of value, but silver may outperform over the next few years. Many industries actually use silver, meaning the world’s silver supply is being consumed and silver will become scarcer over time. Also, with the rise of cryptocurrencies, gold is losing its status as the ultimate safe-haven asset. In that respect gold has more to lose than silver in the medium term. Silvertoken is a recently launched cryptocurrency supported by physical silver. The tokens are backed by real silver that is stored in a vault. A small transaction fee goes into buying more silver each time a token is used — so the amount of silver backing each token increases over time. Silvertoken is the ideal token to store value in if you decide to take profits from your Bitcoin. The tokens have a strong underpin and very low correlation to digital assets. And, if there is a correction in the price of Bitcoin, you can then move value back into Bitcoin, or another digital currency. Purchase silver on Silvertoken.com
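To see that arbitrage bound in numbers, the check a trader would run looks something like the C# sketch below. All of the prices and the fee here are hypothetical values chosen for illustration, not Silvertoken’s actual parameters.

using System;

class ArbitrageCheck
{
    static void Main()
    {
        // Hypothetical numbers, for illustration only.
        decimal tokenPrice = 16.10m;      // market price of one silver-backed token (USD)
        decimal backingValue = 15.50m;    // spot value of the silver backing one token (USD)
        decimal transactionCost = 0.25m;  // round-trip cost of buying and selling the token (USD)

        // The token should trade within +/- transactionCost of its backing value;
        // outside that band, buying the cheap side and selling the dear side is
        // profitable, which is the incentive that pulls the price back in line.
        decimal deviation = Math.Abs(tokenPrice - backingValue);
        if (deviation > transactionCost)
            Console.WriteLine($"Arbitrage opportunity: deviation {deviation} exceeds cost {transactionCost}");
        else
            Console.WriteLine("Token price is within the no-arbitrage band.");
    }
}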
https://medium.com/silvertoken/taking-profits-in-bitcoin-heres-where-to-keep-them-safe-8a5091339b60
[]
2017-11-21 18:04:47.617000+00:00
['ICO', 'Precious Metals', 'Ethereum', 'Silver', 'Litecoin']
The Short-Form Posts on Medium Are All the Rage
I’m becoming a big fan of the short and sweet story. Let’s make it pack a punch. I opened a short-form post. The only writing in it was five links to the writer’s other Medium posts. Come on, can’t we be a little more creative? Yes we can. Our old pal William Shakespeare wrote: “Brevity is the soul of wit.” Being brief doesn’t mean erecting a billboard listing links. I mean, you could tell us why frogs and toads look the same, but one lives in water, the other on land. Why the difference? There, put a link on the word ‘difference’ linking to your post on how you’re different from your family. Remember the card company’s slogan: ‘It’s the thought that counts.’ Let’s all give more thought to, and use some wit in, our short-form stories. You can do this since you’re a better writer than most others. Richard lives at @rich-53302
https://medium.com/rule-of-one/the-short-form-posts-on-medium-are-all-the-rage-924bf13b4225
['Richard Armstrong']
2020-12-19 10:22:43.177000+00:00
['Thinking', 'Self Improvement', 'Choices', 'Inspiration', 'Writing']
Only 5% of People Wash Their Hands Properly
Only 5% of People Wash Their Hands Properly Misconceptions about hand-washing are as rampant as the germs themselves Photo: Moyo Studio/E+/Getty Images When a highly contagious stomach virus blew through a Colorado school district in November, sickening 30% of students and 20% of staff at one high school, some of the victims felt fine one hour and were vomiting in public the next. It was a reminder of how quickly viruses can make a lot of lives miserable. Especially during winter months. And especially when people don’t wash their hands often and properly. Health experts agree that hand-washing is a vital defense against stomach-turning viruses, deadly bacteria, and other communicable germs including the flu, the common cold, and the particularly nasty norovirus thought to be the cause of the Colorado outbreak. But there are several misconceptions regarding what works and what doesn’t and research finds most people just don’t get it. The key, science shows, is to wash the old-fashioned way, with soap and water and lots of scrubbing bubbles. And do it often, because the germs are everywhere. A quarter-teaspoon of infected diarrhea can have 5 billion norovirus particles, but it only takes 20 of them — which can fit on the head of a pin — to infect you. Germs love to linger A simple sneeze can transport flu virus particles across an entire room, an MIT study found. When a person infected with norovirus vomits, which they’re prone to do, microscopic virus particles “can travel in the air for up to 25 feet,” according to the health department that dealt with the Colorado outbreak. But you don’t have to walk through a sneeze or an invisible cloud of vomit to become infected. According to the British National Health Service: Common cold viruses can lurk alive on a person’s hand for more than an hour, eager for a handshake and a new host. Flu viruses can remain viable up to 24 hours on hard surfaces like countertops. MRSA, a bacteria that causes intractable, sometimes deadly staph infections, can endure for days or weeks on a variety of surfaces, awaiting your touch. Viral outbreaks are more common in winter in part because viruses thrive in the cooler, drier temperatures. Also, “cold temperatures drive people indoors” and “it’s easier to catch bugs because everyone’s in closer proximity,” says Suzanne Salamon, MD, a geriatrician with Harvard-affiliated Beth Israel Deaconess Medical Center. How and when to wash The U.S. Centers for Disease Control and Prevention (CDC) recommends you wash your hands much more frequently than you probably do: before prepping or eating food and after using the bathroom, blowing your nose, touching garbage, touching an animal, changing diapers, or caring for someone who is sick (and those are just the highlights). Here’s the CDC’s how-to: Use regular soap and running water. Lather up between fingers and on the back of your hands and especially under your nails. Soap helps lift germs from skin, and microbes congregate under fingernails. Scrub for at least 20 seconds (sing “Happy Birthday” twice). The friction is key to dislodging germs. Rinse well with running water. Use a clean towel to dry, or air dry. Soap works because of some simple chemistry. All molecules have polarity, which determines which other molecules they can interact with. Polar molecules, like sugar, mix well with water. Nonpolar molecules, like oil, do not. 
Soap molecules are amphipathic, meaning they have polar and nonpolar properties, explains Gabriel Rangel, a PhD candidate in biological sciences at Harvard University. “This gives soap the ability to dissolve most types of molecules,” Rangel says, “making it easier to wash them off your hands.” Does the water need to be hot? Nope, unless the temperature simply feels good and therefore encourages you to wash up frequently and thoroughly. Otherwise, cold water is fine, and has the benefit of saving energy, researchers pointed out in a 2017 study in the Journal of Food Protection. The scientists put bacteria on the hands of 21 people several times over a six-month stretch, and had them wash with soap and water at temperatures of 60, 79, and 100 degrees Fahrenheit. The study said that “no significant difference in washing effectiveness was found at different temperatures.” “People need to feel comfortable when they are washing their hands but as far as effectiveness, this study shows us that the temperature of the water used didn’t matter,” said study leader Donald Schaffner, PhD, a food-science professor at Rutgers University. What doesn’t work Antibacterial products don’t kill viruses. In fact, in 2016 the Food & Drug Administration banned 19 of the most common antibacterial ingredients for over-the-counter soap products, because “we have no scientific evidence that they are any better than plain soap and water” and they “may do more harm than good” by fostering bacterial resistance that makes those types of germs stronger. An alcohol-based hand sanitizer with at least 60% alcohol can help when you don’t have access to soap and water. But these products — mainstays in hospitals and many homes — do not get rid of all germs, the CDC says, and they won’t work as well if your hands are filthy or greasy. A quarter-teaspoon of infected diarrhea can have 5 billion norovirus particles, but it only takes 20 of them — which can fit on the head of a pin — to infect you. In fact, alcohol-based hand sanitizers have just been exposed as less effective than thought. A recent study published in the journal mSphere found the influenza A virus (IAV) remains infectious in — here comes one of the grosser revelations of what scientists do — wet mucus from infected patients exposed to an alcohol-based disinfectant for up to four minutes. Secretions of mucus, the gooey stuff of snot, ramp up to fight infections. Mucus aims to disable and eject viruses and other invading microbes, but it can also serve as a protective gel. Its viscous nature slows the spread of a hand sanitizer, the researchers discovered. “The physical properties of mucus protect the virus from inactivation,” says study leader Ryohei Hirose, a physician and molecular gastroenterologist at Japan’s Kyoto Prefectural University of Medicine. “Until the mucus has completely dried, infectious IAV can remain on the hands and fingers, even after appropriate antiseptic hand rubbing.” Baby wipes aren’t made to remove germs, so wash your hands of that notion. Watch out for the other guy If nothing else convinces you to wash your hands often, consider this: Most other people fail miserably at the task, thereby potentially picking up and packing germs to who knows where. In a 2013 study, Michigan State University researchers trained college students to covertly observe hand-washing in the restrooms of bars, restaurants, and other public places.
15% of men and 7% of women didn’t wash at all, and among men who did wash, half failed to use soap, while 22% of women skipped the soap. The observations, detailed in the Journal of Environmental Health, found that only 5% of people washed their hands long enough, and with soap, to kill the germs that cause bad health.
https://elemental.medium.com/only-5-of-people-wash-their-hands-properly-a140aaa775e
['Robert Roy Britt']
2019-12-03 12:01:02.340000+00:00
['Body', 'Hygiene', 'Health', 'Norovirus', 'Flu']
2020–21 NBA Season Preview
TEAM RANKINGS Holds the Throne (Until Someone Takes It) Los Angeles Lakers Long-suffering Laker fans finally had their epic title drought ended when the team triumphed over the Miami Heat in six games. Just think, there were 5th graders in L.A. who, until last October, had never seen their favorite team win a championship. The Lakers came into last season with a strong argument for best superstar duo in a league that was suddenly replete with them, and as it turned out, the two of them were simply too good for their opposition. With Anthony Davis anchoring the paint and defensive guru Frank Vogel on the clipboard, LeBron recommitted to playing tough and active perimeter defense, and the team’s identity was formed. On the other side of the ball, the Lakers struggled at times during the bubble, but the LeBron-AD pick-and-roll as well as Davis shooting an absurd 55% from midrange was a healthy enough engine for points. Their role players stepped up as well, including the much-maligned Kentavious Caldwell-Pope, the genius playmaker Rajon Rondo, and fan-favorite Alex Caruso. With the Clippers and the Bucks cleared out of the way, the Lakers sailed fairly smoothly to the franchise’s 17th title. Bad general managers fail to construct competitive teams, good general managers fail to turn momentary victory into sustained success, and great general managers are at their most aggressive right after they’ve won. Rob Pelinka, as it turns out, is a great GM. He filled every team need going into this season, and improved on every outgoing role player. He exchanged Danny Green for Wes Matthews. He replaced JaVale McGee with Marc Gasol. He switched out Rajon Rondo for Dennis Schroder. And then, to cap it all off, Woj tweeted that he had stolen Montrezl Harrell from the Clippers, and the NBA world basically reacted like: This Lakers team was tested off the court in ways we won’t understand for a long time, but on the court you got the feeling that they were never in dire straits. This team is the favorite to win it all, but they will likely have to go through an ascendant Nuggets team or a Clippers team that’s seeking redemption after going out in the most embarrassing fashion just to get back to the Finals. They face steeper competition, hungrier competition. In order to go back-to-back, they’ll need consistent outside scoring and perimeter defense from Kyle Kuzma, KCP, and Wes Matthews. They’ll need strong guard play from Schroder and Caruso. Strong interior defense against big forwards and centers from Trez and Marc Gasol. They might even need Talen Horton-Tucker to take the next step. The team looks set to improve on offense while potentially taking a half-step back on defense. They’ll be versatile. They can once again play ultra-big lineups, with LeBron at point guard and AD & Gasol playing at the four and five, or small ball with Schroder and Caruso at the guard spots and Kuzma at power forward. As mentioned, potential shortfalls might show up on defense, if Gasol is not able to replace Dwight Howard’s tenacious energy in the paint, or on the creative side, if Kuzma’s development stalls and Schroder’s play is not up to par. Age comes for us all, and though he’s signed up to play Laker basketball for at least the next three years, LeBron James’s championship window is closing. Thankfully, his front office has made sure that the iron is red hot one more time. He’s playing with the best teammate he’s ever had, and on a roster that’s tailor-made for the game he’s crafted in the later stages of his career.
Will he strike? I wouldn’t bet against it. The Challengers 2. Brooklyn Nets Nets fans had to take a gap year before they got to watch the fruits of their success in the 2019 free agent market pay off, but the wait is over, and it’s time to get excited. A team that made the playoffs largely without either of their marquee signings will finally have them back, and if Kyrie Irving and Kevin Durant play the way they’ve looked in preseason, it’s not hard to imagine them hoisting the Larry O’Brien at the end of this season. Despite his success at creating a winning culture in a team that had done a whole lot of losing, head coach Kenny Atkinson was fired four days before the season was suspended in March. He’s been replaced by Steve Nash, one of the most predictable eventual head coaches of anyone who’s played in the league. Nash will likely need some time to get his sea legs under him, but he’ll be buoyed by the presence of Mike D’Antoni and Jacque Vaughn on the assistant coaches’ bench. Irving made some odd comments about the organizational structure when the hire was announced, including his view that the team “wouldn’t have a head coach,” and it’s indicative of just how much Nash’s success in this job will be predicated on handling and balancing two idiosyncratic personalities. The team is built around two of the best individual scorers in the game today, who are surrounded by a motley crew of quality centers, strong perimeter shooting, and some very athletic (but raw) wing talents. Many see GM Sean Marks pursuing a trade to convert long-term prospects in that latter camp like Timothe Luwawu-Cabarrot and Rodions Kurucs into a veteran third contributor. It’s hard to predict what kind of offense Steve Nash is going to run, but if the preseason is an early indication, it seems predicated on off-the-ball movement even when Kyrie or KD are breaking a defender down one-on-one as well as taking advantage of DeAndre Jordan’s strengths as a screener to create open catch-and-shoot opportunities. Ultimately, this team’s ceiling will be determined by KD and Kyrie’s level and whether their supporting cast can manage to be stars in their roles. Both Durant and Irving seem to have a chip on their shoulder: KD’s championships in Golden State didn’t earn him the adulation he was seeking, and Irving is clearly trying to expand his winning legacy beyond his time as second fiddle to LeBron. The talent is there, the motivation is there, all that is left is for them to learn to maximize their potential on the court together. If they can, they will be the team to beat in the East. 3. Los Angeles Clippers Perhaps no team is coming into the 2020–2021 season with more to prove than the Los Angeles Clippers. After their shocking second-round exit at the hands of the Nuggets, those murmurs about the Clippers’ flaws suddenly became gospel. Maybe the Clippers do need an elite playmaker. Maybe it is a problem that Kawhi isn’t a vocal leader. Maybe Doc simply isn’t the answer. All these questions and more were asked for the entire offseason, but only some got answered. The Clippers made the right choice by firing Doc Rivers, and we’ll see if they made the right choice with his replacement. Despite being at the helm of a team that many picked to win the Finals, Rivers came dangerously close to losing in the first round to Luka and the Mavs. One wonders if that series goes differently with a healthy Kristaps Porzingis. He then proceeded to blow a 3–1 lead to Nikola Jokic and the Nuggets.
While there was plenty of blame to go around, one of the more dumbfounding decisions was Rivers choosing to keep Montrezl Harrell on Jokic, who feasted on the mismatch for several games. Ty Lue will need to avoid obvious tactical mistakes like that to break the franchise’s conference finals curse. In one of the more savvy moves of the offseason, the Clippers picked up Serge Ibaka. While Montrezl Harrell left for the Lakers, Ibaka provides defense and spacing that Harrell could not offer. However, the Clippers still lack a truly capable playmaker, which can be a serious liability on offense. The duo of Paul George and Kawhi Leonard is an elite one, but neither has the playmaking ability of, say, LeBron James or Chris Paul. The Clippers also lack a vocal leader, as Kawhi Leonard famously keeps to himself. Perhaps Ty Lue can take on that role, but that is yet to be seen. The Clippers are one of the most talented teams in basketball, but they leave us asking a lot of questions that were being asked last season as well. 4. Milwaukee Bucks And now, the fun begins for Milwaukee. In an offseason filled with ups and downs, the Bucks ultimately won the offseason by signing Giannis to a five-year supermax extension. But before all that, Milwaukee was still making moves. They caught many in the league off guard when they traded Eric Bledsoe, George Hill, and picks for two-way star Jrue Holiday. The move undoubtedly made them better, but they parted ways with three first-round picks and two pick swaps to make it happen. The consensus seemed to be that while it was necessary if that is what Giannis wanted, it would basically ruin the franchise if Giannis ended up leaving. No pressure! Of course, the Bogdan Bogdanović sign-and-trade that never happened is also worth mentioning. Milwaukee’s trade for the talented guard was reported by Woj, until it became clear that the Bucks violated the league’s tampering protocols in the process. It is not entirely clear who reported what or what exactly happened, but the important information is that the trade fell apart and Bogdanović is very much not a Milwaukee Buck. However, this clearly did not faze Giannis as he signed his supermax extension on December 15th. At this point, Milwaukee could go undefeated in the regular season and people would not be convinced that they are the real deal. And this wouldn’t be irrational! For the second year in a row, Mike Budenholzer led Milwaukee to the Eastern Conference’s top seed, only to have a disappointing end due in no small part to Coach Bud’s inability to adapt. There is no shortage of elite coaching talent at the top of the East that will be able to exploit this should Budenholzer remain stubborn as ever. Milwaukee will remain competitive for the near future with Giannis secured, but questions remain about their ability to get to the next level. Always the Bridesmaid 5. Denver Nuggets Denver stunned everybody when they came back from consecutive 3–1 deficits to reach the Western Conference Finals. However, Denver has always been a great team. Led by passing savant Nikola Jokic and rising star Jamal Murray, the unique duo was just too much of a problem for the Jazz and the mighty Clippers to handle. Now that Denver is officially on the radar, the NBA will watch to see if they can duplicate last year’s success. Whether or not they can depends on a few factors. Losing Jerami Grant hurts, especially when it was reported that Denver offered him the exact same contract Detroit did.
Grant is a very capable wing defender who provided some offensive spark in the playoffs. Denver clearly valued him, given that they offered him $20 million annually for three years. They also lost a few role players to the free agency market, but were able to add JaMychal Green and Argentinian point guard Facundo Campazzo. Campazzo will surely see many of his flashy passes end up going viral and also provide Denver with playmaking off the bench. Despite all of this, Denver’s success, both in the regular season and postseason, will ultimately come down to three players: Nikola Jokic, Jamal Murray, and Michael Porter Jr. Jokic has notoriously slow starts to the season, and a repeat could doom Denver’s hopes of getting a high seed in the hyper-competitive West. However, if he hits the ground running, Denver could once again get a top-3 seed. Jamal Murray impressed the basketball world during his playoff run, but the question about him is the same as it’s always been: can he keep it up? That Jamal Murray has shown up in the past, but the Nuggets need that on a regular basis to remain in the top tier of the West. Finally, Michael Porter Jr. seems to be the trendy pick for MIP (we are guilty of following the trends), and for good reason. With the departure of key players and another year of development for the 22-year-old, many see MPJ getting a much bigger role in the offense. The Nuggets are counting on him to elevate his game to the next level, and the flashes he showed last season make that seem likely if he can stay healthy. 6. Miami Heat You have to respect this rebuild. Pat Riley, the consummate winner, overcame one of the toughest problems any front office can face: being the team that LeBron just left. The Cavs didn’t make a single playoff appearance until his return, and haven’t since. They earned a few #1 overall picks along the way, which netted them Kyrie Irving and the assets they needed to trade for Kevin Love. The Heat, on the other hand, rejected tanking, never finishing a season more than eight games under .500. They played the draft incredibly well, turning mid-first-round picks into Bam Adebayo and Tyler Herro and signing undrafted free agent Duncan Robinson to a Summer League contract. They put themselves in position to sign a max player in the 2019 feeding frenzy and landed Jimmy Butler, a perfect culture fit with the Heat. Of course, it was still a poetically harsh outcome for Riles, as he fell to the LeBron-led Lakers in the Finals, but it’s hard to look at the Heat as anything other than the post-Spurs gold standard for running a basketball organization. Unfortunately for them, they didn’t have many pathways to improve from their result last season, and it doesn’t look like they have. Jae Crowder became an incredibly valuable 3-and-D player for them in the playoffs, and Derrick Jones Jr. looked like he had the potential to be an X-factor going forward. They lost both, and while the Avery Bradley addition is a solid one, the net result is a team that will lean mainly on improved chemistry and big steps forward from their budget version of the Splash Bros™, whom I have decided to call The Seasoningless Siblings. This team will be good again, but with a new addition to the upper echelon in the East (not to mention a healthy Philadelphia team), they face an even tougher climb back to the top. 7. Philadelphia 76ers Much like the Houston Rockets, the history of the Sixers will now be marked by the notations “B.M.” and “A.M.”, as in Before Morey and After Morey.
The mercurial GM made his mark almost on the first day of the job, jettisoning Al Horford and Josh Richardson in order to add needed outside shooting to a roster that should once again contend in the East. Will Ben Simmons survive the overhaul with his GM’s favorite player available on the market? Only time will tell. 8. Boston Celtics It seems like Jayson Tatum’s ascent to superstardom isn’t a matter of if, but when. If he can truly break out this season, Boston might finally get over the ECF hump that’s gotten the best of them three out of the last four years. Their success will also rely on the team’s ability to stay healthy and the performance of recent acquisitions and draft picks. 9. Dallas Mavericks The sky’s the limit for Luka and the Mavs. The MVP candidate has had another year to develop both his own game and his chemistry with co-star Kristaps Porzingis and the rest of his returning teammates. If they can solve the late-game woes that plagued them last season, don’t be surprised if Dallas is a top-3 seed. Playoff Locks 10. Utah Jazz Utah locked down their two stars. Both Donovan Mitchell and Rudy Gobert agreed to massive extensions, keeping them both in Salt Lake City for the near future. Gobert is a generational paint defender, but he leaves much to be desired outside of that, making his contract a potential issue for the Jazz if they’re serious about winning a championship. 11. Portland Trail Blazers This year’s Blazers look a lot like last year’s Blazers, but then that’s been the story of the team under Neil Olshey. They brought back Enes Kanter to fill out their depth in the big spots, added Robert Covington to improve their paltry defense, and took flyers on Derrick Jones Jr. and Harry Giles. It’s likely not enough to make it past the second round in the West, but if the team (read: Nurkic) can stay healthy, they’ll be a tough out yet again. 12. Golden State Warriors The Warriors seem to have been cursed by the basketball gods for their four-year reign of terror. With Klay Thompson out for another season, it will take a Herculean effort from Stephen Curry, a renaissance from Draymond Green, and Andrew Wiggins, at long last, taking the leap to All-Star-level play to put this team in contention in the West. 13. Toronto Raptors It’s hard to see the silver lining to losing Serge Ibaka and Marc Gasol, two cornerstones of the Raptors’ 2019 title. Though Head Coach Nick Nurse has solidified himself as among the NBA’s elite, they’re currently relying on Pascal Siakam to be their franchise player, and so far, it looks like he’s not built to handle that load. 14. Phoenix Suns The darlings of the bubble got an upgrade at point guard. With the addition of Chris Paul, Phoenix could have three stars if Deandre Ayton takes the next step this season, which seems much more likely with CP3 running the offense. With no shortage of solid role players surrounding those three, Phoenix is looking to end their decade-long playoff drought. On the Bubble 15. New Orleans Pelicans If David Griffin is lucky, and his first NBA lottery as GM of the Pelicans would suggest that he is, this team will be contending very soon. Not this year, but soon. It all depends on two people: Zion Williamson and Brandon Ingram, both of whom have All-NBA potential. If they reach it, the sky, much as with Zion’s vertical leap, is the limit. 16. Washington Wizards Well, this’ll be fun! The Wizards were already fifth in pace in the league last season, and they just added Russell Westbrook into the mix.
Reunited with Scott Brooks and playing alongside yet another generational offensive talent, Westbrook will get to be his typical bull-in-a-china-shop self on a team lacking ball-handlers and spark plugs. 17. Houston Rockets It seems like James Harden’s departure is all but a guarantee, so Houston will have to start getting used to life after the Beard. On the bright side, John Wall has looked much more like the old John Wall than anybody expected and Christian Wood could have a breakout season next to the former Wizards point guard. 18. Indiana Pacers If you can tell me one thing about new Pacers head coach Nate Bjorkgren without Googling, you shouldn’t be reading this (in fact, you should probably be writing it). The former Raptors assistant will command a fairly deep roster that is lacking a true superstar talent, and might lose the closest thing he has to that in Victor Oladipo. If T.J. Warren can sustain his form from the bubble and Sabonis stays healthy, this team could be feisty, but that’s about it. 19. Memphis Grizzlies Getting to the playoffs will be difficult for Memphis given the West’s abundance of great teams. However, if Ja Morant can take The Leap™ and their other young players like Jaren Jackson Jr. and Brandon Clarke also continue to develop, Memphis could find themselves in the hunt. 20. Atlanta Hawks Atlanta’s success depends on how well their veteran additions fit next to Trae Young. They aren’t a lock for the playoffs, but if Trae continues to improve they could be a headache for a higher-seeded team that ends up playing them. 21. Orlando Magic The Magic have been the whipping boy of the East for the past two seasons, and it’s hard to see how their playoff fate is any different this year. With Jonathan Isaac and Nikola Vucevic locked up for the next three years, the next task on GM John Hammond’s list must be to find a second star to take this team to the next level. Outside Looking In 22. Sacramento Kings The Kings locked down franchise point guard De’Aaron Fox with a max extension, but the team is still in purgatory. They have some nice pieces in Buddy Hield and Marvin Bagley (if he can stay healthy), but they will probably be on the outside looking in until they can get a real co-star for Fox. It might end up being rookie Tyrese Haliburton, who many saw as one of the draft’s biggest steals. 23. Minnesota Timberwolves This team is built to win games 136–120. They have multiple creators and scorers, most notably their stars, Karl-Anthony Towns and D’Angelo Russell, but not a lot of players known for their defense. Along with first overall pick Anthony Edwards and sophomore Jarrett Culver, there is a plausible future for the Timberwolves project, but this year will be about progress, not pride. 24. San Antonio Spurs Oh, how the mighty have fallen. San Antonio will likely not be in playoff contention this year with their duo of DeMar DeRozan and LaMarcus Aldridge. Devin Vassell will be a great two-way rookie, but the Spurs are a long way from their big-3 era level of dominance. 25. Charlotte Hornets The perennially boring Hornets decided to make a highly uncharacteristic move with the second pick in the draft: choosing the most exciting player in the whole class. Then they made another by signing Gordon Hayward to a max deal. The team has a vein of raw young assets in Ball, Malik Monk, Devonte’ Graham, and Miles Bridges, but ultimately no real hopes even in a weaker Eastern Conference. Praying for Cade 26. Chicago Bulls The Bulls have a lot of interesting young pieces.
They almost certainly aren’t a threat for the playoffs, but the performance of players like Lauri Markkanen, Wendell Carter Jr., and Coby White will say a lot about where Chicago is heading with their young core. 27. Detroit Pistons I don’t know what direction Detroit is going in, and I’m not sure if they know either. If Blake Griffin can stay healthy, they might not be the worst team in the league this year, but that’s a big if. Rookie Killian Hayes could be fun to watch, but the Pistons’ front office did not give him much to work with. 28. Oklahoma City Thunder A year after blowing it up, the Thunder went ahead and took the debris from that previous blowing up, and blew that up as well. The roster is now clear of all veterans, and the keys have been handed to Oklahoma City’s young talent. They’ll get the chance to work through their growing pains on the way to NBA maturity as they wait for the front office to convert the genuinely laughable amount of draft capital they’ve acquired over the last two seasons into quality players. 29. Cleveland Cavaliers The Cavs will be pretty awful this year. However, Collin Sexton is already a talented scorer and could become a solid piece if he adds another dimension to his game. Darius Garland could also see a bigger role to potentially shine in, and rookie Isaac Okoro has flashed serious two-way potential in the preseason. 30. New York Knicks A year after the most disappointing offseason (relative to expectations) in NBA history, the Knicks kept a lower profile, opting for hires and signings that hewed towards stability and competence rather than glitz and glamour. They have some decent young talent (Barrett, Robinson, Toppin), but at MSG, the rebuild continues under Leon Rose, and they would probably be best served by another sub-30-win season.
https://medium.com/unculture/2020-21-nba-season-preview-5b874450a39c
['Unculture Staff']
2020-12-22 23:54:31.842000+00:00
['NBA', 'Nba Preview', 'Sports']
12 things I learned from Les Jackson before 2021. Well worth the read
For about a week I have been reading The Complete ASP.NET Core 3 API Tutorial: Hands-On Building, Testing, and Deploying by Les Jackson. My aim was to read a hundred pages a day, and I completed the mission almost successfully. I deliberately didn’t read the last chapter, as it isn’t the right time for me to learn that section yet, but I will cast my mind back to it at the right time. All in all, I liked the book. What I liked most about it was its language, which wasn’t very complex to understand. It explained almost all the acronyms and new terms, and it told you whether or not you needed to do further research on a topic. I was taking notes, but they weren’t arranged. I was going to arrange them, but then I thought I could also share some here. They may look chaotic, but maybe you will find something useful among them. Let’s start.
1. REST APIs are “a lightweight way to transfer textual representations of resources.”
2. A “Solution” is really nothing more than a container for one or more related projects; the projects in turn contain the code and other resources to do something useful. You would not put code directly into a Solution. A Solution isn’t that important in itself; it is useful when you want to “build” all the projects within it together.
3. Bad request: your browser sent a request that this server could not understand.
4. Just using an asynchronous controller action does nothing to improve the time the I/O operation (e.g., a database query) takes to complete. It does, however, improve the situation where we may run out of threads (due to blocking), which has positive implications for scaling. There are also some nice usability implications when applied to user interface design (have you ever used an application that “freezes” when performing a long-running operation?).
5. Once we have registered our service in the Service Container, whenever we request to make use of a given interface from somewhere else in our app, the DI system will serve up, or “inject,” the concrete class we’ve associated with the interface (aka the “dependency”) in its place.
6. The three service lifetimes:
• AddTransient: a service is created each time it is requested from the Service Container.
• AddScoped: a service is created once per client request (connection).
• AddSingleton: a service is created once and reused.
7. To give the DI system an entry point where it can perform the “injection of the dependency,” we create a class constructor for our Controller and provide ICommandAPIRepo as a required input parameter. We call this Constructor Dependency Injection (see the code sketch after point 8).
8. How does our app know which environment it’s in? Quite simply — we tell it! This is where “Environment Variables” come into play, specifically the ASPNETCORE_ENVIRONMENT variable. You can configure it inside launchSettings.json.
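To make points 5–7 concrete, here is a minimal C# sketch of registering a service with a lifetime and letting Constructor Dependency Injection hand it to a controller. ICommandAPIRepo is the interface named in the book, but the MockCommandAPIRepo implementation and the controller body are my own illustrative stand-ins. The sketch also targets the newer .NET 6+ minimal hosting model (with implicit usings); the book itself uses ASP.NET Core 3’s Startup class, where the same registrations would go in ConfigureServices.

using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Register the dependency with one of the three lifetimes from point 6.
// Scoped: one repository instance per HTTP request.
builder.Services.AddScoped<ICommandAPIRepo, MockCommandAPIRepo>();
// builder.Services.AddTransient<ICommandAPIRepo, MockCommandAPIRepo>(); // new instance every time it is requested
// builder.Services.AddSingleton<ICommandAPIRepo, MockCommandAPIRepo>(); // one instance for the app's lifetime

var app = builder.Build();
app.MapControllers();
app.Run();

// The interface the book registers; this mock implementation is a stand-in.
public interface ICommandAPIRepo
{
    IEnumerable<string> GetAllCommands();
}

public class MockCommandAPIRepo : ICommandAPIRepo
{
    public IEnumerable<string> GetAllCommands() => new[] { "dotnet run", "git status" };
}

// Constructor Dependency Injection (point 7): the controller declares the
// interface it needs, and the container injects the concrete class registered above.
[ApiController]
[Route("api/commands")]
public class CommandsController : ControllerBase
{
    private readonly ICommandAPIRepo _repository;

    public CommandsController(ICommandAPIRepo repository)
    {
        _repository = repository;
    }

    [HttpGet]
    public ActionResult<IEnumerable<string>> GetAllCommands() => Ok(_repository.GetAllCommands());
}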
9. So, when an application is launched (via dotnet run):
• launchSettings.json is read (if available).
• environmentVariables settings override system/OS-defined environment variables.
• The hosting environment is displayed.
10. An operation is idempotent when performing the same operation again gives the same result.
11. When naming your unit test methods, they should follow a construct similar to <method name>_<expected result>_<condition>. For example: GetCommandItem_Returns200OK_WhenSuppliedIDIsValid. (A sample test follows below.)
12. Authentication (the “Who”): verifies who you are; essentially, it checks that your identity is valid. Authorization (the “What”): grants the permissions/level of access that you have.
These are some of the terms and points I took notes of from the book.
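And as a sketch of the naming convention in point 11, here is what a unit test following that pattern could look like, reusing the controller and mock repository from the snippet above. The book uses xUnit for its tests, but this particular test body is my own assumption, not an example from the book.

using Microsoft.AspNetCore.Mvc;
using Xunit;

public class CommandsControllerTests
{
    // Name pattern: <method name>_<expected result>_<condition>
    [Fact]
    public void GetAllCommands_Returns200OK_WhenRepoHasData()
    {
        // Arrange: construct the dependency by hand; no DI container is needed in a unit test.
        var controller = new CommandsController(new MockCommandAPIRepo());

        // Act
        var result = controller.GetAllCommands();

        // Assert: Ok(...) wraps the payload in an HTTP 200 (OkObjectResult).
        Assert.IsType<OkObjectResult>(result.Result);
    }
}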
https://medium.com/@zarifarasullu/what-did-i-learn-from-les-jackson-well-worth-the-read-e3610e0ecade
['Zərifə Rəsullu']
2020-12-05 18:55:38.991000+00:00
['Net Core', 'Restful Api', 'Book Review', 'Tech Book Review', 'C Sharp Programming']
2020 Year Review
Play your cards This year was about redemption. It was also about resilience, focus, and letting go. I cannot say this year was very fun, especially given the current world climate; but I can say that it involved a lot of work. Career At the start of January, I began a job working for a health supply chain distributor as a copywriter — a full-time job I had been seeking for a while. It was a brilliant company to work for, the people were nice, the culture was good, and I enjoyed my work — and I felt like I was thriving. In the middle of the year, however, I got offered a scholarship at a London university to study a modern languages degree. When I got the offer, I planned to defer it to next year. I was happy in my job, I hadn’t been there long, why would I leave? However, after lots of reflection, it dawned on me that this opportunity to do the degree might not be guaranteed next year — perhaps the funding would be cut, for instance. Additionally, deferring would essentially mean that I’d be delaying my life by a year. Given the situation with COVID at the time, it seemed sensible for me to opt for stability and security instead of ripping off the band-aid. Perhaps staying in my job was the better choice. However, after a lot of going back and forth, over and over, I decided, or rather it was decided, that I ought to move on for the best outcome. After 10 months in my role, I started my degree. Opting to go down that path, in retrospect, was one of the best decisions of my life. I am now growing different sets of skills, feel more independent, and am also earning more than I was before. Not only that, but I later found out that had I deferred, the funding would have been cut significantly. My brain, for the longest time, was telling me to stay in the job and that I shouldn’t leave. In the end, I followed my heart, however cliché that sounds, and it’s worked out for the best. On the Theme of Redemption I mentioned that this year was about redemption. Here’s why: In late 2018, all my savings, which stood at £15,000 at the time, were lost or gambled away, depending on how you look at it, through a bad decision (or decisions) online. From that time till now, I’ve experienced a lot of inner turmoil. A lot of anguish, disappointment. How could I have been so foolish? If I were to play the role of a psychologist for a moment, I would say the following: “Samy, you knew that when you made the decisions you did, subconsciously, and even consciously to a degree, you risked losing all the money you had — but you did it anyway. You did it because you had a hole inside you that you felt would be filled from the outside: it’s the textbook gambler’s mindset. Not only that, but a heavy sense of loss was part of your karmic field. And because you hadn’t yet resolved that karma/those feelings, you held them within your aura, and thus attracted a similar experience (or experiences).” Wow — okay. I think I knew that somewhere deep down. Honestly, maybe in the grand scheme of things, this experience will be good for me. I now value money more than before, I realise its importance, and I am not nearly as unconscious with my spending and saving habits. That can only be a good thing. To have the rug pulled out from under you eventually forces you to get back up and bring a greater level of awareness to your decisions. The road to making all that money back has been tough — it’s almost been like a rite of passage. A lonesome journey towards redemption. Up until now, I refused to tell anyone about this until I’d resolved the issue and made all the money back.
I didn’t want to speak about the loss from a place of victimhood but instead from a place of resolve. When something significant is taken away from you, you go through a process of rediscovery. Diving into this underworld of loss was a shock for my system but one that’s made me stronger. Romance This was the first year in a long time where I barely did anything in this domain — I channelled much of my energy into the client I was working for and the master’s degree, which I’m now nearly halfway through. Sure, I flirted with a few dating apps, but not much more. I still want to spend more of my time and energy delving into creative work and the degree I’m taking. Besides that, I feel like I have lots of stuff to resolve in my psychology — I don’t feel like having a relationship at the moment would be best. Creative Writing Funnily enough, as soon as I stopped tracking the number of hours I was investing each week in this work (as an experiment) halfway through the year, I just stopped doing anything. This is a lesson for me — I need structure or some system of accountability with creative work; otherwise, it doesn’t get done. Smartphones I realise now that one of the reasons why my anxiety has increased over the last 3 years is down to smartphone usage. Although getting an e-ink phone in February 2020 was a step in the right direction — alongside uninstalling apps and using limits on social media — I’m afraid that the lure of the smartphone is just too irresistible, and no matter what I try I can’t help but use it more than I’d like to. I’ve tried to make it work, but like in any bad relationship, you have to know when to say goodbye. So late this month, I bought my old phone — the BlackBerry 9100 — the best non-smartphone I’ve ever had the privilege to own. It’s the smallest device in existence with a full text keyboard. I couldn’t see myself using a complete dumbphone due to the difficulty with writing texts. But this is the perfect fit. On the Theme of Saying Goodbye: Things I said goodbye to in 2020
— WhatsApp
— Facebook
— Deleted entire friends list (only using Messenger)
— Unnecessary meetups (COVID helped)
— Eating meat (eating fish only)
— Writing paper journals
— Wearing polyester clothes
— Wearing a watch religiously
— TV shows
— Audiobooks
— Multiple tabs (use the xTab extension with the limit set to two tabs)
— Google products (stopped using: Google search engine, Chrome browser, Google Photos, Google Keep, Google Docs, Google Maps)
— Meditating with a backrest
A Change in Philosophy I remember that I used to plan and write everything I wanted to achieve for the next year. Although this would give me some sense of motivation, it would also increase my anxiety. And as someone who is already a perfectionist, making elaborate, tightly focused goals with deadlines produces more problems than it solves for me. So what’s the alternative? I am now focused on simply bringing awareness to whatever it is I’m doing and ‘could do’, with a focus on developing better habits. Admittedly, this is something that’s hard to do given the lure of the goal-driven mindset — which forms the fabric of our Western culture — but I believe it’s something I’m getting better at. This has been a short year review and it’s been a real pleasure writing it. Thank you for reading.
https://medium.com/@samyfelice/2020-year-review-ee3a5a2ded04
['Samy Felice']
2020-12-29 13:03:49.571000+00:00
['Year In Review']
The Importance of Reducing Post-Consumption Waste.
Kuburat Kadiri — Director of Programmes, ICCDI Africa, at the Propak West Africa 2019 International Exhibition and Conference
Our Director of Programmes spoke at the just-concluded Propak West Africa 2019 International Exhibition and Conference, which took place in Lagos at the Landmark Event Centre for its seventh edition from 17–19 September. It is the largest packaging, food processing, plastics, labelling and print exhibition in West Africa. The central theme for this year was Environmental Sustainability: Being Responsible. During this conference she shared her views with the delegates, on a panel with other great speakers, on “The Importance of Reducing Post-Consumption Waste.”
Waste — we can’t live without producing it, and likewise can’t live with it after generating it. Municipal solid waste can be classed as one of the banes of human existence. Due to our increasing human population and lifespan, our overall waste will continually increase despite efforts to reuse and recycle. It is imperative that we reduce our waste generation, because reused and recycled items will still be disposed of at some point. Although some waste is biodegradable, other waste is toxic, non-degradable and hazardous to the environment we rely on for our existence. Below, in brief, are some reasons why it is important that we reduce post-consumption waste.
1. We are interdependent with the environment, and without our environment there will be no economy or money to be made.
2. Waste, whether incinerated, openly burned or undergoing anaerobic or aerobic degradation, will generate greenhouse gases (for example, methane, carbon dioxide, etc.) which contribute towards global warming.
3. Global warming leads to extreme weather conditions. For example, warming leads to melting of the ice caps, which results in rising sea levels; hence, flooding that causes erosion of arable land, loss of livestock, and contamination of drinking water, among others.
4. Wrongful disposal of waste (e.g. plastic, batteries) which has the potential to be toxic or harmful to life on earth can lead to land toxicity, death of marine life and bioaccumulation of toxins in marine edibles. Consequently, this will lead to the prevalence of ill health in humans and other species when these are consumed.
While we await further advancement in human technology towards reducing waste to the barest minimum, thus reducing its negative impact on us and planet Earth, other measures can be adopted. One such measure is minimalism as a way of life, together with a cultural shift towards acceptance of alternative green solutions and environmentally friendly consumerism, through awareness and education on the effect of human activities on our changing environment. These have a very strong potential to drive market demand for greener goods and services, because businesses exist due to demand. What are you demanding?
Compiled by Kuburat Kadiri — Director of Programmes, ICCDI AFRICA
https://medium.com/climatewed/the-importance-of-reducing-post-consumption-waste-2f4bfb20f25d
['Iccdi Africa']
2019-09-20 00:05:50.328000+00:00
['Climate Change', 'Recycling', 'Women', 'Solid Waste Management', 'Environment']
7 tips for young designers
1. Don’t rush to your computer
A big mistake many junior UI designers make is to start a project by jumping on their computer right away, like starving zombies. Sketching layouts on a piece of paper or in a notebook gives you a bigger picture and helps you explore options before opening your favorite software. Hand drawing helps me think, whereas using software puts me in an “execution/production” mode.
2. Put your ego aside
As a creative, you know that your job is to design interfaces that are usable and delightful. You will probably be inspired by existing apps or websites, and you’ll be eager to add your personal “touch” to every project. That’s completely natural. Unfortunately, you’ll face many people who don’t share the same passion and sensitivity for aesthetics, so get ready to lose some battles, even if it’s hard to accept criticism of something that can be so subjective.
The Art of Compromise
The best advice I can give you is to refrain from thinking that each project is your moment. You’ll have yours. Be patient. Your job is all about compromise.
3. Take a step back
Time is running faster and faster in our industry, so many designers are under a lot of pressure. It’s subsequently easy to spend hours in Sketch or Photoshop without taking the time to just sit back and look at what you’re doing as if you were completely new to the project. What are you supposed to do on this layout? Do you understand the design? Does it feel easy to interact with? Is it breathing? Can you read the copy properly? It just takes 10 minutes to look at your work from a new perspective. The more you work on a composition, the harder it gets to see it from a client or user perspective. Don’t obsess over the details too quickly; look at the bigger picture first (proportion, balance, hierarchy…).
4. Avoid awkward situations
How many times a week do you feel surrounded by a horde of people looking at your screen while you’re sitting behind your desk? This is a poor habit that needs to change. Too often, the posture of the designer is similar to a kid being judged by adults. You’re not presenting your computer screen, you’re presenting your work and the thinking behind it. Ideally, try to find a high office desk to present your design to other team members. Everyone will be standing up and you’ll feel much more empowered. Keep in mind that the goal is to have everyone at the same level.
5. Get feedback from anyone around
As a designer, you’re probably working in an open space, whether it’s a company or a co-working place. In both cases, don’t hesitate to show your work to anyone around you. Art directors, UX/UI designers and any other employees nearby can offer you constructive feedback and different perspectives on how to improve it. Don’t think that you’ll be a disturbance. Designers usually feel grateful when you come to them for feedback or guidance. Chances are strong that they’ll do the same with you next time!
6. Find the rationale behind everything you’re crafting
It’s probably the most important tip. How many times will you hear a client say: “So, what’s the rationale behind this color choice?” The same goes for typography, buttons, photography… Don’t feel judged. Clients simply want to understand your thinking (and make sure they can explain it internally as well).
It’s a good exercise to start thinking about your design choices (e.g. “The typography is light to convey elegance and sophistication…”, “The photography is vibrant to bring energy to the page…”, “The line width is short to limit the number of characters and facilitate legibility”). If you don’t explain your choices, it will come down to a frustrating “I like it… I don’t like it” type of conversation.
7. Speak more, write less (Update)
New tools are great. They’re supposed to make us more productive and efficient. But are they? Let’s use the example of InVision. They did an amazing job at creating an integrated platform that allows designers to instantly publish their designs and lets reviewers (team, clients…) comment and interact with them. But there’s nothing more frustrating and time-consuming than reading endless comments and replies about micro-design decisions on InVision (or any other tool). If you believe there’s a little bit of a “digital” fight going on, please find a quiet place and start a real conversation ASAP (a meeting or a call). Written conversations can’t convey real-life expressions, emotions and attitudes. Plus, emoticons are biased: I can’t tell if a smiley at the end of a painful piece of feedback means that it’s supposed to be less painful… If you have a design argument, the benefit of hearing or seeing one person’s attitude and body language is way more insightful than typing loudly on your keyboard waiting for the next reply. My advice: use written communication for tasks, to-do lists and straightforward feedback (e.g. texts to update, mistakes…). Any other issue should be an opportunity to talk. We’re human after all.
https://medium.com/ideas-by-idean/6-tips-for-young-ui-designers-abec2cf4c64a
[]
2019-03-21 11:08:09.892000+00:00
['UX', 'Digital', 'Tips', 'UI', 'Design']
Secern
How do I waken this fire-haired beauty Sleep’s own radiance eternal heat Hibernate’s special quest Solo bed, the Queen’s ferns Dare-less and stupid I only stare Her body exists in the ethers Existing only to be touched Immortal fountains juice translucent under Her reverend mother’s porcelain spice Life flows strong in this one Carried through sweet piquant Dare I must disturb gossamers finial Second skins finely weaved spun Slowly as if earth itself holds no mercy Drawing close to sculpt perfection Perish he might for the sight of her Through sizzling air Alive with all he knew To get there
https://medium.com/grab-a-slice/secern-2e2c862160b
['Michael Stang']
2020-08-14 11:49:57.898000+00:00
['Storytelling', 'Quest', 'Poetry', 'Lovers', 'Union']
New sleepy baby at home
Unexpected stress… Congratulations! You may be pregnant, or maybe you just had a baby (and lucky you if you actually have time to read this)! There’s no feeling more incredible than when you’ve just had a baby. So why don’t you feel wonderful all the time? Well, even the best events in life have stress attached to them. Having a baby is exciting for everyone. You’ve been flooded with company practically from the moment of delivery. If you’re a first-time mother, hospitals don’t give you very much help or advice; they send you home with this new little creature with an array of demands that you have to try to interpret. And new babies don’t sleep much. At least not long enough to allow you to get some much-needed rest. Check out this amazing sleeping solution for new parents! Add to that the hormonal changes in your own body, and you have a formula that’s guaranteed to be stressful. Sometimes you think you’ll never get a full night’s sleep again. Until the baby settles into a routine, you probably won’t! To get through those first few weeks and months, here are a few tips to help you get at least a little more sleep. Check out this amazing sleeping solution for new parents! First of all, don’t try to be a supermom. When the baby goes down for a nap, take a small nap yourself. The laundry can wait and so can the dishes. You don’t need to have a perfect house. There will be time for all that; give yourself a break whenever you get the opportunity. If you have a good friend or relative to help out, by all means, take advantage of that for an afternoon. Grandma would probably jump at the chance to have the baby all to herself for a few hours! When you put the baby to bed for the night, take some time to decompress and relax so you have a better chance of falling asleep. Take a bath scented with lavender; put on some soft music and baby yourself a little. Sometimes it’s hard, even without a new baby, to fall asleep right away. There’s a lot to get used to! Check out this amazing sleeping solution for new parents!
Typical day and night
Bringing home a new baby brings with it an exhausting array of new responsibilities and challenges. Is there such a thing as a typical day and night for new parents? Probably not! Remember, the baby has just gone through an enormous change too, so part of the process when you first bring him home is his transition from the womb to the outside world. Keep the baby close to you; keep him wrapped and warm. If you’re breastfeeding, this will take some time for both of you to adjust to as well. If your new baby is formula-fed, he’ll need to feed every 3 to 5 hours. If you’re breastfeeding, he’ll need to feed more frequently. Sometimes you will feel that all you do all day and night is breastfeed. You will probably feel much more empathy with cows! And there will be a lot of diapers to change, especially until you get familiar with his schedule. His diaper will probably need to be changed shortly after feeding, about once an hour in the very beginning. Be sure to check frequently. Check out this amazing sleeping solution for new parents! Until the umbilical cord has fallen off, you’ll want to keep to sponge baths every few days, but you will want to wash the baby’s bottom every day. You can wipe the baby’s hands, face, neck, and bottom every day with a soft washcloth and warm water. When the baby’s ready for full baths, in a few weeks, every day is a good idea to prevent diaper rashes. But keep in mind that too much bathing can dry out his sensitive skin.
So see what works for your baby. Those little fingernails and toenails will grow quickly, and they’ll need to be trimmed regularly so the baby doesn’t scratch himself. The baby’s nails can be long, even at birth, and attached high on the nail bed. You’ll need to gently press the finger pad away from the nail and clip it with a baby nail clipper. You might want to do this when the baby’s sleeping to ensure that he doesn’t jerk those little fingers and toes away! Get used to being busy 24/7 during the first month, at least. You will be feeding and changing diapers around the clock, so get as much help as you can so you can have some peace too. Check out this amazing sleeping solution for new parents!
Breastfeeding
New parents want to give their babies the very best. When it comes to nutrition, the best first food for babies is breast milk. Experts recommend that babies be breastfed for six to 12 months. The only acceptable alternative to breast milk is infant formula. Solid foods can be introduced when the baby is 4 to 6 months old, but a baby should drink breast milk or formula, not cow’s milk, for a full year. Cow’s milk contains a different type of protein than breast milk. This is good for calves, but human infants can have difficulty digesting it. Bottle-fed infants tend to be fatter than breast-fed infants, but not necessarily healthier. Human milk contains at least 100 ingredients not found in formula. No babies are allergic to their mother’s milk, although they may have a reaction to something the mother eats. If she eliminates it from her diet, the problem resolves itself. Check out this amazing breastfeeding solution for new parents! Sucking at the breast promotes good jaw development as well. It’s harder work to get milk out of a breast than a bottle, and the exercise strengthens the jaws and encourages the growth of straight, healthy teeth. The baby at the breast also can control the flow of milk by sucking and stopping. With a bottle, the baby must constantly suck or react to the pressure of the nipple placed in the mouth. Initially, a breastfed baby will need to be fed 8–12 times in a 24-hour period, especially since both baby and mother are getting used to the process. Check out this amazing breastfeeding solution for new parents! Breast milk is more quickly digested than formula, which is another reason why more frequent feeding is necessary. Another reason for the constant suckling at the breast is to stimulate the mammary glands to produce more milk for the baby’s growing appetite. But the extra time spent feeding the baby that first year is well worth it, as breast milk passes along the mother’s immunities and delivers the highest-quality nutrition for a developing baby.
Coping with a new schedule
There will be days when you bring the new baby home that you think you’ll never get to sleep again. In the meantime, try to get some rest and sleep whenever you can. The baby won’t be sleeping through the night for several weeks, perhaps months. While she’s adjusting to the schedule of night and day, you won’t be able to sleep through the night until she does. Until she’s sleeping through the night, try to sleep when she sleeps. Many new mothers try to do everything at once and start cleaning or doing the laundry once the baby goes down for a nap. You’ll only make yourself more tired if you try to be supermom. Check out this amazing sleeping solution for new parents! If you can get some help in those first weeks with the cleaning and laundry, by all means, do so.
If you can have a friend or relative come in to watch the baby for an afternoon while you catch some much-needed sleep, take advantage of that whenever you can. When you’ve been so busy all day with new-baby chores and everything else you have to do to maintain a household, and possibly take care of older siblings as well, it can be hard to wind down just because everyone else is asleep. Check out this amazing sleeping solution for new parents! Create some routines to help yourself unwind at night. Take a warm bath — not too hot, as hot water can be stimulating — and play some relaxing music. Even if you’re not breastfeeding, avoid caffeine throughout the day and especially at night. Drink water or decaffeinated or herbal tea. Foods that have a lot of preservatives or sodium can make you jumpy, so try to avoid those as much as possible. Try to eat natural foods, such as salads, green vegetables, fruits, and warm healthy soups. As soon as you’re able, try to get out and walk for a little bit each day. The fresh air and moderate exercise will help you and your baby feel relaxed and can help you get to sleep at night. Check out this amazing sleeping solution for new parents!
https://medium.com/@howto-improve/new-sleepy-baby-at-home-936042443cd8
['Howto Improve']
2020-10-22 10:04:41.275000+00:00
['Breastfeeding', 'Newborn', 'Baby Sleep', 'New Parents', 'Baby']
Crypto — RadixDLT: A dose of nuance: Scalability (millions and millions of TPS!?)
This piece arose out of a bit of irritation with the widespread tactic of creating hype, even FOMO, with superficial claims in the crypto space. Of course, marketing usually is that shiny piece of silver that attracts birds flying by. Therefore it shies away from getting into the nuts and bolts of the good or service it tries to gain awareness for. The technicals are often long-winded or, to be honest, boring to most customers. But when every project represents itself as a piece of silver, the entire space becomes blinding with all the light it is reflecting. Let’s land and inspect whether there really is value. Scalability is one of those topics that gets abused by claims of millions and millions of TPS. I would like to inject some nuance into this area by pointing out that there are different types of scalability. Consequently, scalability statements should specify which type they refer to. Let’s enter deep shard space! There are 2 categories of transactions to be discerned on a distributed ledger. The distinction is made by the presence of a smart contract. Application logic scaling is much harder to do on most DLTs than simple value transfers. This is also the case for Ethereum, as can be seen in the graphic below, used by Vitalik Buterin in a presentation given at CoinDesk’s virtual conference ‘CoinDesk Invest: Ethereum Economy’, which started on the 14th of October 2020.
‘Generic EVM applications’ are trailing behind the ‘simple payments’
Within those 2 categories there are 3 distinct types of transactions. Visual representations can be found in the images below. The article header image provides an explanation of all the components we can see here (check it out first before continuing, as things will be much easier to follow). These types are ordered from least complex to most complex:
1. User to user transaction
2. User to smart contract (or dapp) transaction
3. Smart contract to smart contract (or dapp to dapp) transaction
Although this is an abstraction for the purpose of conceptually being able to talk about this subject matter, how it looks technically might differ from ledger to ledger. This is because the answer might be different to the question: ‘What is being stored on each shard?’ Let’s circle back to Radix. According to Dan Hughes, founder and CTO of Radix, both ‘simple payments’ and ‘generic EVM applications’, in Vitalik’s terms, will scale to the same degree, and that is linearly. Below is an excerpt of the AMA session held on Radix’s Telegram channel.
Radix #AMA session of 29/09/2020 (on Telegram: https://t.me/radix_dlt)
Every loved pet has a nickname. ‘Cerby’ is the nickname of ‘Cerberus’, which is the name of the novel distributed ledger technology Radix researched and developed over the course of 7 years. Many iterations preceded it, beginning with the well-known blockchain. I would recommend this article to continue your rabbit hole journey into Radix: https://medium.com/@radixdlt/dan-and-radixs-tech-journey-70752de17629 Here the same concern is uttered. TPS claims are often only applicable to the simple value transfers, which are the easiest to scale. The real proof is in the pudding called smart contract scaling. I’ve started off with a sharded ledger representation to set myself up for a little bridge into ‘Composability’ (to be expanded on in a future article). I want to put out there that the blockchain trilemma (scalability, decentralization, security) coined by Vitalik might not be telling the complete story. According to Radix there’s a fourth element, namely composability.
When sharding is implemented in favour of scalability, there is a risk that elements of this new quadrilemma (scalability, decentralization, security, composability) are broken! When dapps are located in different shards of a blockchain, they are unable to be atomically composed together. This means that different smart contract transactions of these dapps cannot be put together as one anymore, because they reside on different shards. They have to communicate via an intermediary (often a chain), which introduces latency and increases complexity. This would mean a whole array of (combined) functionalities that dapps provide on an unsharded ledger are broken (e.g. flash loans). I do want to mention that grouping related dapps in shards is a workaround, but then scalability and decentralization issues might once again creep in. Radix claims to have come up with a solution that keeps the elements of the quadrilemma intact. This intershard case is resolved by a consensus mechanism they named ‘Braiding’ (this, too, will be expanded upon in a future article together with ‘Composability’). What I’d like you to take away from this article are 2 key messages:
1. There are different types of transactions to be scaled, each with its own difficulty. => When projects claim a number of TPS, ask yourself: “Are these claimed TPS numbers applicable to all types of transactions?”
2. With the introduction of sharding, there is an element that was taken for granted, namely ‘composability’. Sharding a blockchain made it come to the forefront as the fourth piece of the puzzle. => When projects claim high scalability, ask yourself: “Is scalability achieved while keeping composability (and decentralization and security) intact?”
How important composability is depends on the type of application, but it seems that the strength of most dapps comes from this tying together of multiple features as one. Success as one, failure as one. A lot of these dapps enable functionalities that weren’t possible before, in both the individual as well as the composed (combined as one) sense. If you are a crypto enthusiast (for whatever reason: idealistic, technological, investment-wise, …) just like I am, then I would recommend looking into what Radix has to offer. On all fronts this project is very exciting. I sound like a shill right now, so the best course of action is to perform your own trial by critical fire of Radix’s claims. Your motives can be to gain extra insight into the value of the crypto projects you are currently invested in, or to scout for the next opportunity.
https://medium.com/@hermesradvocado/crypto-radixdlt-a-dose-of-nuance-scalability-millions-and-millions-of-tps-aee9c3b094ed
['Hermes Radvocado']
2020-11-03 11:43:20.795000+00:00
['Blockchain', 'Distributed Ledgers', 'Radix', 'Bitcoin', 'Ethereum']
Microsoft is Lonely at The Top
The Transition from Windows is Full and Final
Photo by Tadas Sar on Unsplash
On October 28th, 2020, Microsoft Corporation launched its first industry-specific cloud, Microsoft Cloud for Healthcare, which seeks to integrate the growing bundle of Microsoft’s software application and infrastructure layers into a single product. It is an extremely difficult task but, if executed well, it will help cement Microsoft’s position at the top of the growing cloud industry. “This end-to-end, industry-specific cloud solution includes released and new healthcare capabilities that unlock the power of Microsoft 365, Azure, Dynamics 365, and Power Platform,” wrote Tom McGuinness, Corporate Vice President, Worldwide Health, Microsoft. It’s a heavy lift, bringing all these pieces together and creating a seamless experience across multiple applications, while also sprinkling them with enough properties to address the requirements specific to a particular industry. But you would rather be in Microsoft’s shoes than those of its competitors.
(Image source: Microsoft)
Microsoft took nearly a decade to build its capabilities in the software application layer (Office 365, Dynamics 365, LinkedIn) and the infrastructure layer (Azure), while slowly connecting all the different pieces as parts of a single, unified platform. This unification process of bringing disparate individual systems together and making them work seamlessly for a customer will be a never-ending process that will continue as long as Microsoft Cloud is alive. Now, with the first industry-specific cloud, Microsoft is attempting to take things up a notch and move from creating a platform that can be used by a single client to a platform that can be used by specific industries, such as healthcare. It is not far-fetched to think that Microsoft Cloud for Healthcare is just the first of many industry-specific cloud products that will be launched by Microsoft.
Sign-In is Simple, but Sign-out is Not
Industry-specific cloud products will be notoriously difficult for clients to move away from. To this day, Oracle’s (ORCL) penetration on the database side is what is keeping the company alive in the cloud race, as clients who got entrenched with Oracle databases find it extremely hard to migrate to other cloud providers. The transition has been slow, and the stickiness of the products has allowed Oracle an enormous amount of time to stay alive in a cloud industry that has run far ahead of it. Microsoft’s industry-specific cloud offerings will achieve the same level of stickiness with its clients, if not more. Once a client starts using all the different software and infrastructure products offered by the Microsoft Cloud platform, it will be extremely difficult for them to migrate to another service provider. The migration will not be impossible, but it will be very difficult to execute, and a time-consuming process that most CTOs would rather not engage in unless there is an overwhelming advantage. The migration process will only be accepted if it offers enough benefits in the form of cost or if it offers a significant technological edge. Typically, decision-makers look for signs of both cost and technology advantages before moving from one service provider to another. But Microsoft will do everything it can to keep the cost of migration higher and the benefits of migration lower.
The Global Cloud Industry is Still on a Growth Trajectory
Microsoft is the leader of the cloud industry, and yet it is still growing faster than the number two player, Amazon (AMZN).
In the most recent quarter, Microsoft CEO Satya Nadella announced that Microsoft Commercial Cloud reached $15 billion in revenue, a growth of 31% year over year. Amazon Web Services reported $11.6 billion in revenue during the most recent quarter, a growth of 29% year over year. When the number one player grows faster than the number two player, it is a sign that its position at the top is getting stronger. Through the last eight quarters, both Microsoft and Amazon Web Services increased their cloud revenues as the pandemic helped the cloud industry to grow even further. AWS quarterly revenue increased by more than 29% in each of the last six quarters, while Microsoft achieved better growth than Amazon Web Services in every quarter except Q4 2020.
(Data source: Microsoft, Amazon)
Both companies held on to their operating margins; Amazon Web Services reported an operating margin of 30.5% in the most recent quarter, while Microsoft’s Productivity and Business Processes segment reported an operating margin of 46.32% and the Intelligent Cloud segment reported an operating margin of 41.75%. This clearly shows that neither company sacrificed its margins to achieve a high growth rate. The top two global cloud players have proved beyond a reasonable doubt that they were not only resilient to the global economic distress caused by the pandemic but were able to take advantage of the transition in technology usage due to the pandemic. The growth rate could improve further once the global economy starts recovering. According to Forrester, “the global public cloud infrastructure market will grow 35% to $120 billion in 2021”. Microsoft will continue to grow close to the industry average; the size of its top line, $15 billion in quarterly commercial cloud revenue, makes it harder for Microsoft to grow faster than the industry average. But the Windows maker may soon be standing alone at the top.
https://medium.com/illumination/microsoft-is-lonely-at-the-top-ca202d653c8b
['Shankar Narayan']
2020-11-29 04:16:10.114000+00:00
['Technology', 'Illumination', 'Business']
Link Containerized DotNet Core Libraries During Runtime
To package the library in a base image, the following Dockerfile is used:
(Code: Dockerfile for the library)
To create a base Docker image that does not depend on any other Docker images, it starts FROM scratch. It then creates an /app directory in the image and sets it as the working directory. The COPY /lib . command copies all the files from the /lib folder, where the mylib library was published, into the working directory /app in the image.
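The embedded Dockerfile itself is not preserved above, but from the description it would look roughly like this minimal sketch (the /lib publish folder and the /app directory come from the text; the exact original file may differ):

# Start from an empty base so the image depends on no other images.
FROM scratch
# Create /app in the image and set it as the working directory.
WORKDIR /app
# Copy the published library files from the build context's /lib folder into /app.
COPY /lib .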
https://medium.com/@sami_islam/link-containerized-dotnet-core-libraries-during-runtime-154fdd61e494
['Sami Islam']
2020-10-17 17:23:45.690000+00:00
['Docker', 'Containers', 'Software Engineering', 'Dotnet Core', 'Programming']
Does User-Experience Design confirm the Paradox of Choice?
Understanding consumer decisions and the design of experiences, to contemplate whether choices are good for business.
‘Looking for shoes?’ Try Amazon, they said — after an hour of scrolling up and down, next thing I know I have about a zillion shoes in my wishlist, but I exit the app with zero shoes heading my way. Has this ever happened to you with online shopping? Overwhelmed with the various sub-types and different colors/patterns of the same product, which make you discard your actual need for the product in question? If so, then psychologist Barry Schwartz termed this exact feeling of yours the paradox of choice. In a nutshell — more choices lead to fewer sales. Digital experiences aim at making consumer lives easier, but that doesn’t always seem to be the case. Not very long ago, a consumer report by Smartassistant revealed that 42% of digital shoppers abandoned their transactions because there was too much choice. Is choice paralysis being induced by the sheer number of options presented by online retailers?
It all began with Jam!
Almost two decades ago, Sheena Iyengar, a professor at Columbia Business School, performed a consumer study with an experiment using two different jam displays: one with six flavors and the other with 24. The result? Although the table with 24 flavors had a higher footfall, the conversion rate was only 4% in comparison to 30% for the counter with fewer options. This experiment set the path for Barry Schwartz to talk extensively about this choice paralysis in his book “The Paradox of Choice: Why More Is Less”. A large number of options leads to anxiety in users, making it counterproductive to the actual underlying objective of providing choices.
So why are there still so many options for the same thing? — Neo-classical economics or love for colors?
Even with this oddity of many choices leading to fewer conversions, e-commerce sites still display 10 different colors of the same product! Why so? The phenomenon of choice overload was shot down by quite a few people. Financial Times journalist Tim Harford posed the question that if this were true, then retailers and food chains like Starbucks wouldn’t offer a plethora of options of basically the same thing. Research scientist Benjamin Scheibehenne echoed this point with an empirical twist after conducting around ten experiments to study choice overload, and concluded that lots of choices made no significant difference. This is in line with the traditional economic theory of consumer behavior, where more is always better. Also, the jam experiment suffers from something called the replication crisis — when researchers tried to repeat the experiment with the exact same parameters, the study did not come up with the same conclusion. That can mean only one thing. (uh-oh, choices are good!) But then again, in the case of online consumer activity, we see conversions happen when there are fewer fields to fill in forms or fewer social sharing buttons. Along with reports of abandoned digital carts, these tilt the scales to the other side. Psychological wisdom implies choices create fatigue in human minds, which directly affects consumer decision-making.
But wait, there’s something called Single-Option Aversion as well!
If choices were bad for business, retailers would stock only unique products and brands would eliminate all variants. But that’s not the case, as Daniel Mochon writes in the Journal of Consumer Research.
According to his study, in one experiment consumers were asked to buy a DVD player: one group had the option of a Sony DVD player, a second group a Philips one, and the third group had both options. As the notion of single-option aversion suggests, the third group made the most purchases.
Conventional Wisdom or Jam Flavors? — and why Choices are not Decisions
So we’re back at the debate: myth or fact? For a consumer, are choices for a product a good thing or bad? A wise man and marketing guru, Seth Godin, shared his wisdom: ‘in a world where we have too many choices, and too little time, the obvious thing to do is just ignore stuff.’ So, when it comes to choice overload, a lot of it lies with the consumer and the underlying purchase decision. Researchers at the Kellogg business school also revealed a few cases which can be called the factors behind the overload in question.
What came first? – A Stanford GSB study suggests that every decision can ideally swing two ways. Firstly, if the initial decision is definitely to buy the product, then various alternative options will confuse the buyer. However, if the decision about which product to buy isn’t clear, then options might be conducive.
Personal loans or home loans or car loans? – The options presented are either not comparable, or carry inadequate information on each of them, or the mere way they are organized can lead to confusion in the consumer’s mind.
Is that what I want? – Imagine you want to invest in mutual funds for the very first time, and you have absolutely no idea about them. And you’re shown several fund types (equity, mid-cap, open-ended, index, sectoral, to name a few), along with the separate risk associated with each — would you still have clarity on your preference?
It’s the decision that matters – dating sites vs picking ice cream: Options on Tinder require time and consideration, whereas choosing at Baskin-Robbins is something usually done quickly. It’s those comparatively trivial and instantaneous decisions that need fewer options to avoid overload.
In the end, it’s all about Design and the Choice Architecture
Despite the various studies and experiments done by researchers to counter the paradox over the years, Scheibehenne concludes from his meta-analysis that both cases actually hold true. Sometimes choices are good for consumers, and sometimes they aren’t. Coming back to the great Seth Godin again, he is of the opinion that the trick to getting the balance right lies in storytelling — it’s the responsibility of the brand or marketer to choose the right story behind each of the choices they want to present to their customers. The key to finding the optimum between too many choices (that drive customers away) and no options (that creep customers out) is to find the right way to present those variants so that consumers don’t feel anxiety and are able to make the right choice. And, as Godin infers, choices aren’t decisions: the ‘overload’ that businesses need to worry about should be information, not product variants. The experience, digital or otherwise, needs to be designed so that consumers enjoy the choices and feel empowered by comparing their selections to the alternatives. The story should include all possible extremes of intended emotions, and the consumer then chooses the one with the most resonance. As Godin observes, marketers have been doing this forever.
When David Ogilvy and his team first started making ads in the 1950s, they spotted a hole in the market and filled that gap with features — the mantra was all about positioning. Potato chips: healthy, organic, crispy, traditionally satisfying — emotions that sell. The question no longer lies in numbers — less is more, or more is better. It’s crucial to figure out when to give choices and when not to by understanding the consumer psyche. Brands, marketers, and retailers need to build the arc of the consumer path in a way that guides customers towards a definite goal, with or without choices.
That’s exactly what I was looking for!: Brands like Amazon, Spotify, and Netflix are doing great when it comes to presenting options. There are close to 15,000 titles on Netflix, including movies, television shows, and original content. If people sorted through all of that every movie night, nobody would ever get around to watching anything. To avoid overwhelming users with too many choices, personalization is one of the tools brands seem to be adopting. Through product recommendations based on user behavior and preference history, giving the user exactly what they want saves the anxiety of having to look at multiple choices.
Superheroes or Sitcoms?: There can be 5,000 T-shirts on an e-commerce site — but browsing through all of them in the hope of a better one would definitely leave users disoriented. Instead, if they were sorted into various categories, customers could just browse through their desired group.
What exactly is that?!: Remember the mutual funds example — in most cases, especially for financial services, users get stressed out by technical jargon and lack proper conceptual understanding. Retailers and brands need to condition their users to complexity: make the offerings easily comprehensible and then expose customers to options. Simplifying web experiences and keeping interfaces minimalistic leads to higher conversions — maybe unique CTAs or fewer social media sharing buttons. Not too long ago, a report revealed that consumers value honest and personal advice as an important service from retailers and brands. Expert advice on products and brands could mitigate the existing paralysis from choice proliferation. It looks like choices are here to stay in a consumer’s life. But with better design in building those choices, a better choosing experience can be crafted.
References:
‘This is Marketing’ by Seth Godin
‘Can There Ever Be Too Many Options? A Meta-Analytic Review of Choice Overload’ by Benjamin Scheibehenne, Rainer Greifeneder, Peter Todd
https://medium.com/moonraft-musings/does-user-experience-design-confirm-the-paradox-of-choice-a1eeb8804640
['Sarba Basu']
2019-04-17 08:51:06.195000+00:00
['UX Design', 'Psychology', 'Marketing', 'Customer Experience']
PAK- U19 vs SL- U19 Dream11 Prediction Today With Playing XI, Pitch Report & Players Stats
Welcome to the ACC U19 Asia Cup 2021. Today is the 1st semi-final, being played between Pakistan Under-19s and Sri Lanka Under-19s at the ICC Academy, Dubai, United Arab Emirates. The match starts today, 30th December 2021, at 11:00 AM.
Preview
The Pakistan Under-19s are a very good team that has been performing very well in the U19 Asia Cup. They have played 3 league matches so far and have won all three, and have reached the semi-finals. They will now play their semi-final against Sri Lanka, and whoever wins this match will go directly to the final. It remains to be seen whether this team can continue its good form in the semi-final. Hopefully the audience will get to see a high-scoring and thrilling match.
On the other hand, the Sri Lanka Under-19s are also a very good team that has been performing very well in the U19 Asia Cup. They have played 3 league matches so far, winning two, with one match ending without a result, and they too have reached the semi-finals. They will now play their semi-final against Pakistan, and whoever wins this match will go directly to the final. It remains to be seen whether this team can continue its good form in the semi-final. Hopefully the audience will get to see a high-scoring and thrilling match.
Pitch Report
The pitch at the ICC Academy, Dubai is a very good, well-balanced pitch on which both batsmen and bowlers get help. The batsmen get somewhat more help than the bowlers, so they can set a good total for their team; on the other hand, fast bowlers can also do a lot on this pitch, because it offers them plenty of assistance as well.
Probable Playing XI :- PAK-U19 vs SL-U19
Pakistan Under-19s Playing XI :-
- Abdul Wahid :- (M-3, R-37, Ave-12.33)
- Maaz Sadaqat :- (M-3, R-58, Ave-29, W-3)
- Muhammad Shehzad :- (M-3, R-110, Ave-36.66, W-0)
- Abdul Faseeh :- (M-1, R-0)
- Haseebullah (wk) :- (M-3, R-53, Ave-17.66)
- Irfan Khan :- (M-3, R-47, Ave-15.66)
- Ahmad Khan :- (M-3, R-63*, W-3)
- Mehran Mumtaz :- (M-1, W-2)
- Faisal Akram :- (M-1, R-8, W-0)
- Qasim Akram © :- (M-3, R-72, Ave-24, W-5)
- Mohammad Zeeshan :- (M-1, W-2)
Sri Lanka Under-19s Playing XI :-
- Chamindu Wickramasinghe :- (M-3, R-165, Ave-82.50, W-0)
- Shevon Daniel :- (M-3, R-71, Ave-35.50, W-1)
- Sadisha Rajapaksa :- (M-3, R-166, Ave-83, W-3)
- Dunith Wellalage © :- (M-3, R-53, Ave-53, W-7)
- Yasiru Rodrigo :- (M-3, R-8, Ave-8, W-0)
- Raveen de Silva :- (M-3, W-3)
- Pawan Pathiraja :- (M-3, R-86, Ave-86)
- Ranuda Somarathne :- (M-3, R-60*)
- Anjala Bandara (wk) :- (DNB)
- Treveen Mathew :- (M-1, W-2)
- Matheesha Pathirana :- (M-3, W-5)
Must-Pick Players :- PAK-U19 vs SL-U19
PLAYER | STATS | DREAM11 POINTS
Sadisha Rajapaksa | M-3, R-166, Ave-83, W-3 | 330
Dunith Wellalage | M-3, R-53, Ave-53, W-7 | 310
Qasim Akram | M-3, R-72, Ave-24, W-5 | 239
Ahmad Khan | M-3, R-63*, W-3 | 239
Chamindu Wickramasinghe | M-3, R-165, Ave-82.50, W-0 | 216
Matheesha Pathirana | M-3, W-5 | 177
Maaz Sadaqat | M-3, R-58, Ave-29, W-3 | 173
Muhammad Shehzad | M-3, R-110, Ave-36.66, W-0 | 157
Shevon Daniel | M-3, R-71, Ave-35.50, W-1 | 135
Irfan Khan | M-3, R-47, Ave-15.66 | 131
C & VC Selection :- PAK-U19 vs SL-U19
Safe Options :-
- Sadisha Rajapaksa
- Dunith Wellalage
- Chamindu Wickramasinghe
- Qasim Akram
Risky Options :-
- Irfan Khan
- Yasiru Rodrigo
- Faisal Akram
Suggested XI :- PAK-U19 vs SL-U19
Note: The team may change any time after the toss. For the latest team information, please join us on our official Telegram channel; the link is given below after the article.
Team-1
Join us on our official Telegram channel for fast team updates – BabaCric Telegram Channel
https://medium.com/@babacric/pak-u19-vs-sl-u19-dream11-prediction-today-with-playing-xi-pitch-report-players-stats-41fe317fa386
['Baba Cric']
2021-12-30 12:46:37.414000+00:00
['Cricket', 'Srilankacricket', 'Asiacup2021', 'Pakistancricket']
Bitupper: analyzing the services. Bitupper Block Explorer
All the might of the Bitupper Block Explorer is available at testnet.bitupper.com, where you can test its functionality and then leave feedback and suggestions on our social networks! What features does the Explorer provide, and why is it needed at all? A block explorer is a tool which allows you to track any transaction that has happened on the blockchain. As you know, the cryptoworld we live in right now is built around blockchain technology. A blockchain consists of transactions made between users, made to miners from reward centers, made between stock exchanges, and so on. Every single cryptocurrency movement ever made is given its own “index” and is stored in the blockchain. If a transaction is not found in the blockchain, then a transaction error occurred. The block explorer is similar to the history of a bank account, but it is unified for all accounts of all users of a certain cryptocurrency. This makes cryptocurrencies transparent and eliminates the need for a third party. How many services do you use to work with cryptocurrency, to send funds, or to check whether a transaction made in a certain cryptocurrency was successful? More than 1? And how many of them do you trust with your money? Trust your assets only to yourself, with the help of Bitupper! We believe that the number of services needed for working with cryptocurrency should be no more than one! We are creating a multicurrency Bitupper Wallet containing all supported cryptocurrencies (BTC and LTC right now, with BCH, ETH and DOGE to be supported soon), combined with the Bitupper Explorer. What is the essence? Each transaction receives a unique index, which is tracked as part of the blockchain associated with it. The selection can be customized to get results, for example, for a certain period of time. You do not need to use each block explorer separately; everything works on the same website. You can search for transactions by address, block height or hash. If several matches are found during the search, all such transactions will be displayed so you can select the desired currency. Bitupper meets the requirements for providing a high level of security, which will be discussed in more detail in one of the following publications.
https://medium.com/bitupper/bitupper-analyzing-the-services-bitupper-block-explorer-6e7450629b92
['Maria Fomina']
2018-05-23 18:45:18.778000+00:00
['Bitupper', 'Blockchain', 'Cryptocurrency', 'Crypto', 'Bitcoin']
The Truth and the Bearer
Truth told by anyone, regardless of their standing or background, is still the truth. That truth can never be diminished, much less nullified; we only think it can because of how we view the bearer. - (The) Name’s Not At All Relevant Motley Concoctions In A Blank Mind
https://medium.com/@angmamangenhinyero/the-truth-and-the-bearer-1dd0de2eaf77
['Motley Concoctions In A Blank Mind']
2020-12-23 07:30:19.012000+00:00
['Truth And Life', 'Quotes', 'Quotations', 'Truth']
Rich Diffs for Jupyter Commits & Pull Requests
Version control is one of the major challenges with Jupyter Notebooks. We can use git to version control notebooks, but it’s hard to review notebook diffs, i.e., see what changed from one notebook version to another. The issue stems from the fact that Jupyter uses JSON underneath and stores rich media (HTML, images) in the JSON itself. This kind of hybrid format is not well supported in git. Hence git diffs for Jupyter Notebooks are pretty hard to review, and resolving merge conflicts is a source of pain. We’re going to look at two tools that help us solve this problem: nbdime & ReviewNB. Disclaimer: I’m the founder of ReviewNB, but this is an objective, factual review of both nbdime & ReviewNB. Please note, there are some other approaches to work around notebook version control. They focus on converting notebooks to .py files (e.g. jupytext) or stripping output from notebooks (e.g. nbstripout). While these might be suitable for some, they take away the most useful feature of notebooks (embedded images, widgets, graphs) from version control, and subsequently from the diff & review process. Hence, we are going to stick to nbdime & ReviewNB, which actually work with the notebook format itself.
nbdime
nbdime provides tools for diff’ing & merging notebooks in your local environment. You can run the nbdiff or nbdiff-web commands to see notebook diffs on the command line or in the web browser respectively. nbmerge supports three-way merge of notebooks with automatic conflict resolution. You can also configure git to use nbdime’s diff & merge tools when git diff or git merge is run on a Jupyter notebook. Some limitations of nbdime are:
- It can’t render a pull request diff (only commits or direct file names are supported with nbdiff).
- There’s no way to write comments or provide feedback (to be fair, nbdime was not built for this purpose).
ReviewNB
ReviewNB provides diffs & commenting for Jupyter Notebooks on GitHub. You can see rich notebook diffs for any commit or pull request. You can comment on a notebook cell, and the appropriate email notifications are sent to anyone watching the repository (of course they can unsubscribe). It’s useful for seeing rich diffs, asking questions, providing feedback & working collaboratively in the context of notebooks. We can track all open discussions with conversation threads & have team conversations directly on notebooks (GDoc-style comments for Jupyter). ReviewNB is a web application, so you don’t need to install anything on your own machine (you can self-host the ReviewNB server application if required). It integrates directly with your GitHub, and the app is verified by GitHub & available for sale in their marketplace. Some limitations of ReviewNB are:
- There is no way to merge notebooks (you’ll need nbdime for this).
- It only works with GitHub as of Dec-2020 (follow/upvote for GitLab & BitBucket support).
- It’s a paid service. They do offer free plans for open source & education.
Summary
As you’d have noticed, both nbdime & ReviewNB have their own strengths & limitations. ReviewNB is good at diff’ing your GitHub commits & PRs and having discussions around notebooks, while nbdime is good at local diff’ing & merging. You can use both tools in parallel to satisfy the notebook version control needs of your team!
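For reference, here is roughly how the nbdime workflow described above looks from a shell, per nbdime’s documented CLI (the notebook filenames are placeholders):

pip install nbdime
# Command-line diff of two notebook versions:
nbdiff notebook_old.ipynb notebook_new.ipynb
# Rich, rendered diff in the browser:
nbdiff-web notebook_old.ipynb notebook_new.ipynb
# Three-way merge with automatic conflict resolution:
nbmerge base.ipynb local.ipynb remote.ipynb --out merged.ipynb
# Make git use nbdime's diff & merge drivers for .ipynb files:
nbdime config-git --enable --global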
https://medium.com/@amitrathi/rich-diffs-for-jupyter-commits-pull-requests-cc3d6046f9b5
['Amit Rathi']
2020-12-25 12:32:13.530000+00:00
['Github', 'Data Science', 'Data', 'Jupyter Notebook']
FALSE: The Chinese Government has not threatened to deport Kenyans infected with COVID-19
FALSE: The Chinese Government has not threatened to deport Kenyans infected with COVID-19 An article published by Nairobi Times claiming that the Chinese Government has threatened to deport Kenyans with the novel coronavirus, in retaliation for an order to deport Chinese workers in Kenya who lacked proper documentation, is FALSE. According to the article, China has threatened to deport Kenyans in Wuhan infected with COVID-19 in retaliation for Interior Cabinet Secretary Fred Matiang’i’s order to deport four Chinese workers who were found to be working in Kenya illegally. One of the workers was caught on camera caning a Kenyan waiter at a restaurant in Nairobi. The article claims that the Chinese Supreme Court issued a note to CS Matiang’i asking him to find other legal avenues to deal with the four Chinese rather than deporting them. The article also does not indicate key details, such as when the note from the Supreme People’s Court was sent, and who specifically wrote it. While the article refers to the ‘Chinese Supreme Court’, it does not have the specifics of who at the court or in the Chinese government had given this order. Additionally, the court is officially known as the Supreme People’s Court, and its official website does not contain any such communication. According to the World Health Organization, there have been no reported cases of Kenyans infected with the novel coronavirus since the outbreak started. There have been a number of suspected cases, but none has tested positive. The deportation order referenced in the article was issued on 13 February by Interior Cabinet Secretary Fred Matiang’i, but was suspended by High Court judge Luka Kimaru on 17 February. Additionally, the claim that the Government of China had threatened to deport Kenyans in Wuhan has not been published in any mainstream news outlets, and the government has not mentioned it in any of their briefings. In a press briefing on 20 February, Government Spokesperson Cyrus Oguna announced that the Kenyan government has released Ksh.1.3 million for students who are trapped in Wuhan, China following the COVID-19 outbreak, adding to a Ksh. 500,000 donation by the Chinese Government to help the students get supplies. There was no mention of deporting Kenyans in Wuhan, which would have been communicated during the press briefing given its significance. Elseba Chepleting, a Kenyan lawyer, told PesaCheck that any communication from the Chinese Supreme People’s Court to the Government of Kenya would be through the ambassador to a court of equal jurisdiction, in this case the Supreme Court of Kenya, and not directly from the Court to the cabinet secretary as indicated in the article. There is no such communication on the website and the Twitter account of the Embassy of China in Kenya. PesaCheck has looked into the claim that the Chinese Government has threatened to deport Kenyans infected with the novel coronavirus and finds it to be FALSE.
https://pesacheck.org/false-the-chinese-government-has-not-threatened-to-deport-kenyans-infected-with-covid-19-7a4cc637c7e5
[]
2020-02-20 13:48:36.895000+00:00
['Coronavirus', 'China', 'Misinformation', 'Fact Checking', 'Kenya']
Winter Solstice, Planetary Conjunction, and You
Winter Solstice, Planetary Conjunction, and You Not the conjunction, but you get the point! (Photo by Dave Hoefler on Unsplash) The winter solstice holds the longest night of the year. From here on out, the daylight time increases. Leading up to this, the darkness has increased. This is simple astronomical fact. As the Romantic writers and others have shown our species throughout history, we may consider the order of nature and the universe in order to look within ourselves individually and collectively. We don’t like to do that in this advanced day and age. How advanced are we? We are steeped in darkness. Many choose willing ignorance because they could not bear to see true light or love. Dictatorial rulers are making a comeback, because to those many who support them, the promise of light to them is wealth, power, control, and superiority. Darknesses like these have held sway throughout much of our history. Many others simply have not awakened to the power of Self, to avail themselves of the reason we have all come here: to express the unique composition of mortality and immortality, of ego and heart, of light and darkness, of body and spirit and acknowledge all these wonderful polarities and reconcile them into creating. Creators, that is what we are. How many have gained such spiritual consciousness? We are steeped in darkness; however, the numbers grow, numbers who heal in love and light, who express the higher frequencies and energies we come to experience, who relate to Self and the Other in love and light, who pursue happiness in the energy of Heart — not Ego. When we seek honest happiness, we bless one another. We grow in light. This time of year brings me to serious reflection about living the truth of who and what I am as a mortal human known by all that characterizes me. I love me, and that allows me to love others. The light grows when we reflect on our darkness, reconcile it with who we are, and call on Heart, that Spirit energy encompassing our existence. Such reflection on this particular solstice may prove to be a rich, personal experience. The universe will display the conjunction of Saturn and Jupiter, which has been seen only a handful of times in the past 2,000 years, and may account for the appearance of the so-called Christmas star, and if for no other reason, this association may be most appropriate. A light that shines into the darkness of humanity. Not only does our current darkness include political mayhem, but also this pandemic continues, economic collapses ensue, many millions are ravaged by years-long wars, and lies are being called facts. Darkness. Light, though, resides in you and me, as well as darkness. We can be that Christmas star shining into the dark recesses. My hope is we will each say yes to acknowledging the solstice, the planetary conjunction, and learn, grow, and experience love, peace, and light for ourselves and one another.
https://medium.com/@michaeldepung/winter-solstice-planetary-conjunction-and-you-19c2b6ec34c7
['Michael Depung']
2020-12-21 20:45:29.366000+00:00
['Spiritual Growth', 'Awakening', '2020', 'Metaphysics', 'Consciousness']
AI for Trading Series №4: Time Series Modelling
AI for Trading Series №4: Time Series Modelling Learn about advanced methods for time series analysis, including ARMA and ARIMA. Photo by Isaac Smith on Unsplash In this series, we will cover the following ways to perform time-series analysis: Random Walk Moving Averages Model (MA Model) Autoregression Model (AR Model) Autoregressive Moving Averages Model (ARMA Model) Autoregressive Integrated Moving Averages (ARIMA Model) Random Walk Model The random walk hypothesis is a financial theory stating that stock market prices evolve according to a random walk and thus cannot be predicted. A random walk model assumes that [1]: Changes in stock prices have the same distribution and are independent of each other. Past movement or trend of a stock price or market cannot be used to predict its future movement. It’s impossible to outperform the market without assuming additional risk. It considers technical analysis undependable because it results in chartists only buying or selling a security after a move has occurred. It considers fundamental analysis undependable due to the often-poor quality of information collected and its ability to be misinterpreted. A random walk model can be expressed as: Random Walk Equation This formula says that the location at the present time t is the sum of the previous location and noise, expressed by Z. Simulating Returns with Random Walk 1. Importing libraries Here, we import the libraries needed for visualization and for simulating the random walk model. from statsmodels.graphics.tsaplots import plot_acf from statsmodels.tsa.stattools import acf import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np sns.set() plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (14, 8) Now we generate 1000 random points, starting at 0, by adding a degree of randomness to each point to generate the next one. # Draw samples from a standard Normal distribution (mean=0, stdev=1). points = np.random.standard_normal(1000) # Make the starting point 0. points[0] = 0 # Return the cumulative sum of the elements along a given axis. random_walk = np.cumsum(points) random_walk_series = pd.Series(random_walk) 2. Plotting the simulated random walk Now, let’s plot our dataset. plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(random_walk) plt.title("Simulated Random Walk") plt.show() Simulated Random Walk 3. Autocorrelation Plots An autocorrelation plot is designed to show whether the elements of a time series are positively correlated, negatively correlated, or independent of each other. An autocorrelation plot shows the value of the autocorrelation function (acf) on the vertical axis. It can range from –1 to 1. We can calculate the correlation of time series observations with observations at previous time steps, called lags. Because the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation. A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or the acronym ACF. This plot is sometimes called a correlogram or an autocorrelation plot. # Note: plot_acf should receive the series itself, not precomputed acf values. random_walk_acf = acf(random_walk) acf_plot = plot_acf(random_walk, lags=20) Autocorrelation Plot Looking at the correlation plot, we can say that the process is not stationary. But there is a way to remove this trend.
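Before differencing, it can also help to quantify what the correlogram suggests. This is a small added sketch, not part of the original walkthrough, using statsmodels' augmented Dickey-Fuller test, whose null hypothesis is that the series has a unit root (i.e. is non-stationary):
from statsmodels.tsa.stattools import adfuller
# H0 of the ADF test: the series has a unit root (non-stationary).
adf_stat, p_value = adfuller(random_walk)[:2]
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")
# A p-value well above 0.05 is consistent with a unit root / random walk.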
I am going to try two different ways to make this process stationary: knowing that a random walk adds random noise to the previous point, taking the difference between each point and its previous one should yield a purely random stochastic process; alternatively, we can take the log return of the prices. 4. Difference between consecutive points random_walk_difference = np.diff(random_walk, n=1) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(random_walk_difference) plt.title('Noise') plt.show() cof_plot_difference = plot_acf(random_walk_difference, lags=20); We see that this is the correlogram of a purely random process, where the autocorrelation coefficients drop off from lag 1 onwards. Moving Average Model (MA Models) In MA models, we start with the average mu; to get the value at time t, we add a linear combination of residuals from previous time stamps. In finance, a residual refers to new, unpredictable information that can’t be captured by past data points. The residuals are the differences between the model’s past predictions and the actual values. Moving average models are defined as MA(q), where q is the lag. Representation of Moving Average Model with lag ‘q’; (Source: AI for Trading nano degree course on Udacity) Taking an example of an MA model of order 3, denoted as MA(3): Representation of Moving Average Model with lag=3; MA(3) The equation above says that the position y at time t depends on the noise at time t, plus the noise at time t-1 (with a certain weight epsilon), plus some noise at time t-2 (with a certain weight), plus some noise at time t-3. from statsmodels.tsa.arima_process import ArmaProcess # a pure MA process has no AR part: pass just the zero-lag coefficient ar3 = np.array([1]) # specify the MA weights: [1, 0.9, 0.3, -0.2] ma3 = np.array([1, 0.9, 0.3, -0.2]) # simulate the process and generate 1000 data points MA_3_process = ArmaProcess(ar3, ma3).generate_sample(nsample=1000) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(MA_3_process) plt.title('Simulation of MA(3) Model') plt.show() plot_acf(MA_3_process, lags=20); As you can see, there is a significant correlation up to lag 3. Afterwards, the correlation is not significant anymore. This makes sense, since we specified a formula with a lag of 3. Autoregression Models (AR Models) An autoregressive model (AR model) tries to fit a line that is a linear combination of previous values. It includes an intercept that is independent of the previous values, and an error term to represent movements that cannot be predicted by the previous terms. AR Models (Source: AI for Trading nano degree course on Udacity) An AR model is defined by its lag. If an AR model uses only yesterday’s value and ignores the rest, it’s called AR Lag 1; if the model uses the two previous days’ values and ignores the rest, it’s called AR Lag 2; and so on. AR Lag (Source: AI for Trading nano degree course on Udacity) Usually, autoregressive models are applied to stationary time series only. This constrains the range of the parameters phi. For example, an AR(1) model will constrain phi between -1 and 1. Those constraints become more complex as the order of the model increases, but they are automatically considered when modelling in Python. Simulating return series with autoregressive properties For simulating an AR(3) process, we will be using ArmaProcess, reusing the same weights that we used to simulate the MA(3) model above. Since we are dealing with an autoregressive model of order 3, we need to define the coefficients at lags 0, 1, 2 and 3.
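One caveat worth flagging here (my note, not the original author's): ArmaProcess takes the coefficients of the lag polynomials, so an AR coefficient phi from the regression form enters with its sign flipped. A minimal sketch:
from statsmodels.tsa.arima_process import ArmaProcess
import numpy as np
# Regression form: y_t = 0.5 * y_{t-1} + e_t
# Lag-polynomial form: (1 - 0.5L) y_t = e_t, so the AR vector is [1, -0.5].
ar = np.array([1, -0.5])
ma = np.array([1])  # zero-lag coefficient only: no MA part
sample = ArmaProcess(ar, ma).generate_sample(nsample=500)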
Also, we will cancel the effect of a moving average process. Finally, we will generate 10000 data points. ar3 = np.array([1, 0.9, 0.3, -0.2]) # a pure AR process has no MA part: pass just the zero-lag coefficient ma = np.array([1]) simulated_ar3_points = ArmaProcess(ar3, ma).generate_sample(nsample=10000) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(simulated_ar3_points) plt.title("Simulation of AR(3) Process") plt.show() plot_acf(simulated_ar3_points); Looking at the correlation plot, we can see that the coefficients are decaying slowly. Now let’s plot the corresponding partial autocorrelation plot. Partial Autocorrelation Plot The autocorrelation for an observation and an observation at a prior time step comprises both the direct correlation and indirect correlations. These indirect correlations are a linear function of the correlation of the observation with observations at intervening time steps. It is these indirect correlations that the partial autocorrelation function seeks to remove. from statsmodels.graphics.tsaplots import plot_pacf plot_pacf(simulated_ar3_points); As you can see, the coefficients are not significant after lag 3. Therefore, the partial autocorrelation plot is useful for determining the order of an AR(p) process. You can also view these values by importing pacf: from statsmodels.tsa.stattools import pacf pacf_coef_AR3 = pacf(simulated_ar3_points) print(pacf_coef_AR3) Auto Regressive Moving Average Model (ARMA) The ARMA model is defined by a p and a q: p is the lag for the autoregressive part and q is the lag for the moving average part. Regression-based models require data to be stationary. For a non-stationary dataset, the mean, variance and covariance may change over time, which makes it difficult to predict the future based on the past. Looking back at the equation of the Autoregressive Model (AR Model): AR Model. (Source: AI for Trading nano degree course on Udacity) Looking at the equation of the Moving Average Model (MA Model): MA Model. (Source: AI for Trading nano degree course on Udacity) The equation of the ARMA model is simply the combination of the two: ARMA Model Hence, this model can explain the relationship of a time series with both random noise (the moving average part) and itself at previous steps (the autoregressive part). Simulating an ARMA(1, 1) Process Here, we will simulate an ARMA(1, 1) model whose equation is: ar1 = np.array([1, 0.6]) ma1 = np.array([1, -0.2]) simulated_ARMA_1_1_points = ArmaProcess(ar1, ma1).generate_sample(nsample=10000) plt.figure(figsize=[15, 7.5]); # Set dimensions for figure plt.plot(simulated_ARMA_1_1_points) plt.title("Simulated ARMA(1,1) Process") plt.xlim([0, 200]) plt.show() plot_acf(simulated_ARMA_1_1_points); plot_pacf(simulated_ARMA_1_1_points); As you can see, both plots exhibit the same sinusoidal trend, which further supports the fact that both an AR(p) process and an MA(q) process are in play. Autoregressive Integrated Moving Average (ARIMA) This model is the combination of autoregression, a moving average model and differencing. In this context, integration is the opposite of differencing. Differencing is useful for removing the trend in a time series and making it stationary. It simply involves subtracting the point at time t-1 from the point at time t.
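As an added aside, in pandas this first difference is a one-liner; a small sketch with made-up data, not from the original article:
import numpy as np
import pandas as pd
# A toy random walk, then its first difference.
series = pd.Series(np.cumsum(np.random.standard_normal(1000)))
first_diff = series.diff().dropna()  # y_t - y_{t-1}; drops the leading NaN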
Mathematically, ARIMA(p,d,q) requires three parameters: p: the order of the autoregressive process d: the degree of differencing (the number of times the series was differenced) q: the order of the moving average process The equation can be expressed as follows: Representation of ARIMA model np.random.seed(200) ar_params = np.array([1, -0.4]) ma_params = np.array([1, -0.8]) returns = ArmaProcess(ar_params, ma_params).generate_sample(nsample=1000) returns = pd.Series(returns) drift = 100 price = pd.Series(np.cumsum(returns)) + drift returns.plot(figsize=(15,6), color=sns.xkcd_rgb["orange"], title="simulated return series") plt.show() price.plot(figsize=(15,6), color=sns.xkcd_rgb["baby blue"], title="simulated price series") plt.show() Extracting Stationary Data One way to get a stationary time series is to compare consecutive points; the ratio of the current price to the previous price is called the rate of change: rate_of_change = current_price / previous_price The corresponding log return then becomes: log_returns = log(current_price) - log(previous_price) log_return = np.log(price) - np.log(price.shift(1)) log_return = log_return[1:] _ = plot_acf(log_return, lags=10, title='log return autocorrelation') _ = plot_pacf(log_return, lags=10, title='log return Partial Autocorrelation', color=sns.xkcd_rgb["crimson"])
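To close the loop, here is a short added sketch, assuming statsmodels 0.12+ (where the ARIMA class lives in statsmodels.tsa.arima.model), that fits an ARIMA(1,1,1) to the simulated price series generated above; the estimated ar.L1 and ma.L1 coefficients should land near the simulation parameters, up to the sign convention noted earlier.
from statsmodels.tsa.arima.model import ARIMA
# Fit an ARIMA(1, 1, 1): one AR term, one round of differencing, one MA term.
model = ARIMA(price, order=(1, 1, 1))
fitted = model.fit()
print(fitted.summary())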
https://medium.com/analytics-vidhya/time-series-modelling-d6531c9a6338
['Purva Singh']
2020-12-10 16:10:51.339000+00:00
['Artificial Intelligence', 'Ai For Trading', 'Time Series Analysis', 'Finance']
Bangkok: A truly terrifying egg
I know it’s been a while since I’ve written a blog (oh, how I still dislike that word) when I have to read through my last blog in order to remember what I have and what I haven’t written about… So, it’s been over a month since I last posted something… Well, I’ve posted stuff, obviously… We live in the world of social media after all… I’ve just not posted anything worth reading… Here’s to hoping THIS is worth reading eh? We’ve been living in the new apartment for a while now, and dear friends, I am loving it… I honestly wish we could have found this place when we first arrived in Thailand, a little under two years ago. Two years. In a few days time, I will have been living in Thailand for two years. It feels so unreal… Had you asked me three years ago where I would be living in the future, I would have given you one straight answer “I’ll be living in Dudley” I had no realistic option of moving away, I honestly believed I would live all of my life there and that one day I would be buried there… I did have a fantasy of one day living in the English seaside town of Great Yarmouth but as I say, it was a fantasy… I even had a beachfront house that I used to look at every time I visited… I’d look at it and say “One day I’ll live in that house”… Well, it looks like I will never live in that house in Great Yarmouth… I do however live in Bangkok, a city I love… Not a bad trade-off eh? For the record, we visited Great Yarmouth one night just before we left for Bangkok… Place out of season is, in fact, a bit of a shit hole… Sorry citizens of Great Yarmouth! I still find myself driving through town in a taxi (Jesus Christ, you will never find me driving a car here, it’d be too scary) looking out of the window when suddenly it will hit me “YOU LIVE IN BANGKOK!” A city I knew of, but a city I never dreamed of visiting, never mind living here… And yet here I am… I don’t know if I will ever truly get over this “pinch me, am I dreaming” state… Maybe I don’t really want to… I don’t want to be someone who takes this city, and what I have here for granted. *Squeal of brakes* I paused for a while and found myself looking back at previous blogs… My very FIRST blogs to be exact… I only intended on looking at the first one, but found myself reading the first six…. Bloody hell, I’d forgotten most of the stuff I’d written about! (shameless plug for my previous blogs) Where was I?….. Oh yeah… I live in Bangkok! 
We love our new apartment, our last place was great, but I have to say I love living here so much more… I may have mentioned it before but our new place is right in the middle of one of Bangkok’s red-light districts, we have beer bars, massage parlours and freelancers walking the streets… Our local bar (a great place by the way) is a beer bar, we’re pretty sure all the girls “work” there but that does not take away from the fact that it’s a great bar, the drinks are reasonably priced and we get amazing service there… Being a “regular” drinker at the bar, you do get to know the girls… “Jaeb” is friendly and seems to have built a strong friendship with Jo… Amy is friendly and outgoing, but is a terribly moody drunk… She will be the life and soul of the party one minute, then she’s slumped over the bar looking angry the next… There’s a younger girl who works in the bar who after a few weeks of drinking there, I have noted she has three facial expressions that she pulls when she is with “customers”… Smiley face — She has a huge smile when she is face to face with “customers” (for customer read, a man who has bought her a drink and therefore has bought her company) The grimace — This is when the customer is not looking in her direction or is otherwise occupied… It’s when the fake smile just can’t be held anymore. Misery — I’ve seen her pull this face as she wanders to the rear of the bar and collects her bag just before heading off into the night with her “customer” I don’t like it when she’s with customers… I hate to see her face change, I hate to see how uncomfortable she looks… Mostly I hate the men I’ve seen her sitting with… They all look the same… Slimy looking Europeans, old enough to be her dad… But… It’s the line of work she (and thousands of girls like her) is in… It pays the bills… I do however silently breathe a sigh of relief when I walk into the bar and see she’s ok. Other drawbacks…. Beggars. We have two “regular” beggars near our apartment, we have “one-legged Joe”, a beggar who as his name would suggest has only one leg… He sits near to the BTS station and politely raises his hand as you pass, if you give him 20 baht, he’ll smile politely… Don’t hand him cash and he’ll also smile… Just maybe not as much as if you had given him money. And then we have “Captain Jack”. I hate Captain Jack with a passion… I first encountered “Jack” a week or so after moving into our apartment… I was walking back from our old apartment when I walked past him, “Money! Money! Hungry!” He shouted while waving his plastic cup in the air… Now, I don’t take kindly to having people shout at me… Especially when they proclaim to be hungry while having a half-full bottle of Hong Thong whiskey with them… “Mai Dai” (cannot) I said but still he continued “Hungry! Hungry”… A short time later both Me and Jo walked past him… “You! Give me money!”… Fuck you, Captain Jack… Fuck you. This week Me and Jo were walking toward our apartment and we encountered a woman who seemed…. How shall I put this…. Off her tits on drugs. She walked past us then turned and ran toward the BTS station, straight to Captain Jack… They exchanged words as we passed… Bickering like an old married couple… Maybe the good captain is funding her habit… Fuck you once more Captain Jack… Fuck you. 
Our friends’ group here in Bangkok means a lot to us both… The COVID lockdown would have been a much darker experience if it hadn’t been for our weekly online meetups… Thankfully now Thailand is pretty much COVID free our meetups are now in person and usually alcohol-fueled… Our friend Nicola’s murder mystery birthday night out was an absolute delight… a group of us all sat in a bar playing characters, trying to solve a mystery… Great fun… Even if I did find that solving mysteries really isn’t my strong point… I am currently trying to write my own murder mystery tho… Let’s see how that turns out, shall we? *Squeal of brakes* I wrote a bit about a night out that didn’t go according to plan… A night that promised so much but actually turned out to be boring and complete letdown… However, I decided to delete it, businesses are struggling here and they don’t need people like me slagging them off via keyboard… I will be making my feelings known tho as soon as the elusive “feedback” form appears in my email inbox. From a night that didn’t go according to plan… To a night that far exceeded everything we could have hoped for… Mine and Jo’s Halloween/Housewarming/Birthday party for our friend Xaviera. We invited people, they turned up in varying styles of fancy dress… Vampires, candy skulls, various movie characters… And my friend Henry dressed as a truly terrifying egg. Our maids (you would have never thought you’d use that phrase before you came here) mom cooked food for our party (delicious by the way… The food… Not her mom… A lovely woman but I’m pretty sure eating her would be frowned upon) I think we decorated the apartment “just enough”… Halloween is a weird one, it’s so easy to overdo it when it comes to decorations… I had one of those strange occurrences when a throwaway idea I had happened to work perfectly… Yes, you CAN hide a projector in the head of a Grim Reaper and project images of ghosts onto a wall! And an even better level of success when we hid a speaker in the bathroom randomly playing spooky sounds throughout the night… Everyone seemed to have a great time, so much so that in the early hours of Sunday morning Jo suggested we all move to our local bar (the noise levels in the apartment didn’t seem to be going down)… At 5 am Me and Jo left the bar… I think we can call that a good night… Right? I suppose I wouldn’t be writing these things correctly if I didn’t approach the subject that has dominated Bangkok for the past few weeks… Amazingly it’s not COVID… It is, however, something I have to tread carefully around…. I’ll explain… Or get arrested while attempting to explain. There are things we cannot do here in Thailand that we can do quite freely in the UK. Back in England, I can freely proclaim that I feel that the British Royal family is a waste of time, a waste of money and should be gotten rid of. IF I had those same sentiments here, about the Thai royals… I couldn’t do that. The king holds a level of respect and everyone here is expected to respond accordingly… The king’s anthem plays in public? You stop what you’re doing, stand up and wait for it to finish. The king does something that you don’t agree with? You sure as shit don’t go on Facebook and slag him off. We have witnessed news programmes being blacked out if they are running a story that sheds a less favourable light on the monarchy… A bit like the classic 90’s dance track by SNAP!… They’ve got the power. 
There has been an uprising of late… Younger Thais have taken to the streets to show how they feel… They have issues with the government and with certain aspects of the monarchy, one protest… A PEACEFUL protest was met with a level of force that I feel was unacceptable, police used water cannons against these protesters, most of them being high school kids… It made for unpleasant viewing… Will there be change? I really don’t know… Do I think there should be change? As an ex-pat here I honestly feel it’s not my place to say… But the kids taking to the streets seem passionate about change and after all, it will be those kids taking this country into the future. See? It’s not all beers and gogo bars here in “The land of smiles”… So, until next time…
https://medium.com/@adecox/bangkok-a-truly-terrifying-egg-bf9e5536d2bf
['Ade Cox']
2020-11-05 02:53:33.334000+00:00
['Life', 'Bangkok', 'Halloween']
Everything You Have Right Now is Enough
When I was in my 20s and early 30s, I used to tell people how I loved the word more. Yikes — if only I could go back in time and kick my ass for being so ignorant. I sold national television advertising and I was good at it. But I always wanted more. More money, more recognition, and more opportunity. I was driven and accomplished a lot. I bought a house when I was 23. I had a nice car, money in the bank and I travelled. But sometimes I would lie awake at night and think, “Is this it? Is my life only meant to be about chasing more?” I had hit the wall of more, because it meant I never had enough. And quite frankly — it was a crappy and dissatisfying feeling. Secretly I knew I wanted to focus on the gentle side of more. I wanted more joy, more giving, more love, more calm and more learning. I wanted more nature, more reading and more time spent with family and friends. I wanted to be more grateful. And I finally figured out that I had to start to love the word enough.
https://medium.com/mind-cafe/everything-you-have-right-now-is-enough-757d4a270f7a
['Kim Duke']
2020-12-19 17:46:08.750000+00:00
['Life Lessons', 'Self', 'Simplicity', 'Self-awareness', 'Life']
Back to Basics: The Neurobiology of Service — How 1+1 Can Equal 3
By Sheila Ohlsson Walker, Ph.D. Look no further than the extraordinary altruism in support of our country’s healthcare workers to see how selfless acts of service bring a community together in tangible and palpable ways. Lovingly home-cooked meals, handwritten notes of gratitude, Girl Scouts bearing cases of cookies, and neighborhoods whooping, cheering and banging on pots and pans every night at the same time. These generous acts fuel the fire that burns in the heart and soul of our communities, creating a vibrant, connective energy that permeates the DNA of all who live, work and play within them. Stronger together than apart is a phenomenon called collective efficacy. This group dynamic, one with widespread effects in the brain and body, creates a feeling of unity and convergence within a community, connecting every individual biopsychosocially with an energy force larger than themselves. While this larger entity is not comprised of a material, like bricks and mortar, the biochemical “upward spiral” driven by the feeling of connection is as concrete and substantive as it gets. This collective and fundamentally human dynamic is the core feature of “Blue Zone” communities — the healthiest and longest-living populations on our planet. Our brains may be evolutionarily hard-wired for altruism via the powerful, positive feedback loop that feeling part of an entity larger than oneself sets into motion. Selfless acts are associated with a specific pattern of brain activity within the limbic system (emotional processing), key cortical areas within the “mentalizing network” (reputation and self-referential processing), regions such as the nucleus accumbens and anterior cingulate cortex (reward-processing areas related to pleasure), as well as structures in the salience network such as the amygdala (which alerts us when to feel fear and how to react) that guide our attention toward what to focus on and away from what is deemed helpful to ignore. In this biosocial story, the emotions we feel when helping others were designed to feel good, partly to ensure survival of our species but also to actively promote the safety and well-being of those we love. Whatever the motivation, selfless behavior is contagious within groups, as shown by research demonstrating higher levels of cooperativeness when surrounded by those of like-minded orientation. Moreover, altruism can have a 3–1 multiplier effect (contagious up to 3 degrees of separation), fueled by biochemicals such as oxytocin (the “love” hormone, important for human connection), serotonin (induces positive mood states, higher levels combat depression), dopamine (linked to novelty, excitement, and adventure) and epinephrine (provides fast and immediate energy!). Service to others, helping those around us, is a core element in human flourishing, as it is characterized by the systemwide neurobiological response that fortifies broad measures of health, wellness and quality of life. In short, while “paying it forward” can improve the condition of the one who receives, studies reveal that altruism can be even more powerful for the physical health and psychological well-being of the one who gives. Not surprisingly, a service-to-others mindset is activated by meditation and an attitude of mindfulness. 
By slowing down the sympathetic nervous system and improving core psychological capacities such as attention and emotional self-regulation, the settling-down process allows us to open our minds and connect with the world around us, activating then actuating our inherent compassion for others. In this state of “human being” (vs. “human doing”) we can more easily access difficult emotions: grief, sadness, anxiety. In internalizing our own feelings of pain we gradually settle into our innate empathic qualities and extend this shared feeling out to others. Tonglen meditation (Tonglen is the Tibetan word for giving and taking) is an ancient practice that fosters compassion for our fellow travelers on planet earth with the premise that by healing others at an energetic level, we also heal ourselves. It takes only one person — in a single moment — to spark the neurobiological change that temporarily transports us away from our psychological burdens. Compassionate Acts of Service The “upward spiral” is triggered when we voluntarily choose kindness. Forced or mandated altruism stimulates a different set of neural circuitry. Here are ideas for you and the children in your life to choose from. Thank You! Express gratitude for those who are leaving exhaustion and fear behind in service to others. This can include teachers, healthcare workers, grocery store clerks, delivery people, bus drivers and those in your own family. Help celebrate a graduation! Make a sign to put up in your neighbor’s yard or at their door. Write a note of hearty CONGRATULATIONS to the graduate! Create a “Day of Kindness” once per week with your family or class. Choose 3–5 things that are easy to do: call a relative, draw a picture, do a household chore without being asked, learn a recipe from a grandparent via Zoom, and get creative to support our extraordinary frontline helpers! Smile with your eyes and say, “Hello!” with your heart while out and about. The eyes are “windows to the soul,” a core part of our mammalian connective tissue, and give others a sense of the human being behind the mask. As we know from science, smiling with our eyes lights up an entirely distinct, joyful neurobiological pathway. Life is all about moments, and positive energy, even between two strangers, can be powerful. Run an errand for a neighbor, or cook a meal for a friend. Give your full attention to a friend or colleague sharing the “new hard” of teaching during quarantine. “Zoom fatigue” may already be exacerbating the exhausting end-of-school-year race-to-summer. Empathize quietly around the common emotional thread that teaching right now is much more work for much less joy. Breathe together, connect, and heal… Check in with friends who are struggling and affirm that you are there for them. Offer opportunities that allow children’s natural propensity for kindness and generosity to shine. Honor wholehearted giving behavior with praise of character, not material rewards. Teach by example that actions speak louder than words. We are living through a defining event of our time, one that is likely to fundamentally alter aspects of how our society and our world operate for generations to come. Social isolation, more damaging to our health than smoking 15 cigarettes per day, is a major public health issue. Symptoms of post-traumatic stress disorder (PTSD), triggered by events perceived as traumatic in wartime and peace, often do not show up while IN the experience, but months or years later, with higher prevalence rates for our front line heroes, in particular healthcare workers and their families. Mental health, a growing crisis pre-pandemic, is now squarely on the front burner. These are just some of the issues we are facing at a societal and global level on the road ahead. Accordingly, while the 13th-century phrase “This too shall pass” is apropos, we have much yet to endure. So let’s take a moment to “travel,” in our minds, into the future. When we look back on this chapter in history, what will we remember about the choices we made in order to survive, thrive, and lift others in our midst? What are the stories we will tell our children and grandchildren about the power of the human spirit, adaptation through adversity, and the moments that helped us hold on to a ray of hope? Here is the story. We weathered the storm, and emerged stronger, healthier, more resilient and more closely connected than we were before. We achieved post-traumatic growth by standing together, hand in hand. As Bryan Stevenson, civil rights advocate and founder of the Equal Justice Initiative, says: if you want to make our world a better place, “get proximate.” Live your message by helping others in ways that are personal, feasible, and comfortable for you. By making service a part of your narrative, modeling it in your home, classroom, and community, you are a light to others who follow your lead. Every altruistic act boosts the balance in your neurobiological bank of human connection and resilience, an account that accumulates interest over time. The enduring personal resources (physical, psychological, intellectual, social) that accrue are yours. And here’s a bonus — withdrawal of assets only compounds the interest. Like the Magic Penny song goes: “Love is something if you give it away, you end up having more”… When our altruistic bank accounts are full, and we are living in “MWe” (Me + We), it is easier to look out for everyone, not just those in our “tribe” — a dynamic that is one of the central and most formidable challenges of this time in our history. “We are all in the same storm, but we are not in the same boat.” In the end, we have agency and we have choice.
Adversity can build us up or wear us down, and we malleable, dynamic, and resourceful social animals are stronger together than apart. “You must be the change you wish to see in the world.” — Mahatma Gandhi
https://medium.com/@turnaround/back-to-basics-the-neurobiology-of-service-how-1-1-can-equal-3-a0ad97cc8d9c
['Turnaround For Children']
2020-05-11 22:12:01.474000+00:00
['Covid 19', 'Coronavirus', 'Kindness', 'Helping Others', 'Altruism']
Find-aim.com
Find-aim.com is an organisation that provides career guidance to help students and young people find and achieve their future goals and make their careers easier. Many people are happy with this organisation’s services and enjoy happy and successful lives. Many people take wrong decisions about their future when they are young, and by the time they realise those mistakes, it is too late. Find-aim.com provides services that clarify your goals and your future. They are genuinely troubled that thousands of students are unemployed despite their degrees and many skills, and they take this seriously because you are the future of the world. Our motive is for you to achieve your goals, your career and a successful life. A career is an important part of life. We solve your career problems, guide you from time to time and make your life happy and enjoyable. They provide services to students and others who are facing problems on the path to their career, who are living under stress and who want help with their future and goals. SERVICES #1 SUGGEST CAREER OPPORTUNITIES » If you want any suggestion about your future, your dreams or anything else, don’t worry: we are always ready to solve your problems. #2 SOLVE CONFUSION ABOUT YOUR CAREER » We resolve all types of confusion and problems standing in the way of your dreams and goals. #3 FILL IN APPLICATION/ONLINE FORMS » We also fill in forms for your online exams or jobs at very low cost. #4 SOLVE CONFUSION ABOUT 10TH/12TH » Having problems in your school or college life and want to solve them? We are here to help you. #5 WHAT TO DO IN THE MIDDLE OF AN EXAM » Feeling pressure in an exam or interview? Don’t worry, we give you tips and motivation to crack it. #6 HOW TO ATTEND COUNSELLING » We know about the problems with counselling and the fear of filling in the wrong form, so we will help you with your counselling and more. Contact us. #7 WHICH COLLEGE/BRANCH IS BETTER FOR YOU » We give you full information about which colleges or branches are best for you depending on your interests, qualifications and talent. #8 SUGGEST JOBS ON THE BASIS OF YOUR TALENT/STUDIES » We suggest jobs and exams on the basis of your talent and qualifications. #9 CONTACT US FREELY » Reach us easily via the contact number and email on the “contact us” page. =============================== Join us on social media: Instagram(@find_aim_com) : https://www.instagram.com/find_aim_com/ Twitter(@find_aim_com) : https://twitter.com/find_aim_com Facebook : https://www.facebook.com/officialfindaimcom
https://medium.com/@find-aim-com/find-aim-com-e49e8c451f62
[]
2020-12-17 16:20:08.938000+00:00
['Career Advice', 'Find Aim', 'Career Guidance', 'Career Development', 'Careers']
Scotland needs a new war on poverty
Few political leaders capture my imagination quite like Lyndon B. Johnson, the 36th President of the United States. President Johnson’s “War on Poverty”, a package of government initiatives aimed at creating jobs, investing in education, expanding access to health care and strengthening welfare benefits, was remarkable in its scope, ambition and legacy. LBJ, who as a school teacher in Texas saw first-hand the challenges faced by poor families, considered the depth of poverty in the US a national disgrace that demanded a national response. He identified the cause of poverty not as the personal moral failings of the poor but as a societal failure. As Johnson put it in his 1964 State of the Union address, “Our aim is not only to relieve the symptoms of poverty, but to cure it and, above all, to prevent it.” As I campaigned in Lanark and Hamilton East as Labour’s candidate in last summer’s election, President Johnson’s War on Poverty frequently came to mind. I spoke with a single mother in Hamilton, who couldn’t afford to pay for her children to take part in after-school activities with their friends; a family in Larkhall, in which three generations of men had never been able to find a job; and parents in rural Clydesdale, relying on a food bank in Carluke to feed their family. Recent analysis by End Child Poverty shows that around 24 per cent of children in the constituency suffer the indignity of poverty, a shameful statistic. But what’s equally striking is the extreme inequality between neighbouring towns: the child poverty rate in Larkhall is almost double that in Uddingston and Bothwell. Lanark and Hamilton East is no outlier. Almost one in four of Scotland’s children live in poverty, significantly more than in most other European countries. This is projected to rise in the coming years, with the Institute for Fiscal Studies forecasting that more than a third of children in the UK will be living in poverty by 2022. That is 5.2 million children — the highest number since records began. Prime Minister Theresa May claims that “work is the best route out of poverty”, yet two-thirds of children growing up in poverty live in a family where at least one person works. The families of public service workers are increasingly vulnerable: according to the TUC, one in seven children with a parent working in the public sector will soon be living below the breadline. The reality is that eight years of Tory austerity have reversed the progress made by the previous Labour Government, which lifted two million people out of poverty across the UK. Further cuts to social security, and the rollout of Universal Credit, are only making the situation worse. Just this week, the Trussell Trust reported that food banks in Scotland distributed 170,000 emergency food parcels last year, a 17 per cent rise in food bank use over the past 12 months. They also showed that food bank use in areas where Universal Credit has been rolled out went up by 52 per cent in the first 12 months of the system, an increase four times higher than in areas where it is yet to be introduced. The Conservatives’ ideological insistence that “the best government can do is get out of the way” is wrong at the best of times — and these are clearly not the best of times. The SNP, for their part, have failed to take meaningful action to tackle child poverty in over a decade in power.
Earlier this year, Douglas Hamilton — the Scottish Government’s top advisor on poverty — cautioned that while rhetoric in Scotland is progressive, “we don’t have the actions to match up to that.” The Scottish Government’s Child Poverty Bill sets the laudable aim of reducing child poverty to ten per cent by 2030. But we can’t afford another lost generation of children, living without basic necessities and with the increased risk of chronic mental and physical illness, their opportunities restricted from the outset. Scotland’s children need radical action now. They need a War on Poverty. We need a national effort that brings together government, trade unions, business, and civil society to systematically address the myriad causes and consequences of poverty and inequality. Given the prevalence of in-work poverty, it should set out how we support those on low pay — starting with the introduction of a real living wage of £10 per hour — and guarantee job security for all. It should show how we can realise the right of every child to a decent, secure and heated home. And, as a matter of urgency, the Scottish Government should provide support for councils to follow the lead of Labour-controlled North Lanarkshire in providing free meals to children who need them 365 days a year. When it comes to welfare, there is a raft of measures that the Scottish Government could take immediately, any one of which would lift tens of thousands of children out of poverty. First, adopt Scottish Labour’s plan for a £5-a-week child benefit top-up — rather than continuing to vote with the Tories to block it. Second, increase the child element of Universal Credit: a top-up of £50 a month would reduce child poverty in Scotland by just under a fifth. Third, introduce a Scottish Child Tax Credit, using Holyrood’s new powers over social security to target the poorest households in the country. Fourth, initiate a pilot programme to test a “minimum income guarantee”, a hybrid concept designed to combine the benefits of a universal basic income and a means-tested household payment. Recent modelling by IPPR Scotland found that a minimum income guarantee could bring poverty rates down in Scotland more effectively than a universal basic income. Mr Hamilton recently described Holyrood’s new powers over tax and social security as a “game changer”, adding that “We’ll no longer be able to say ‘The reason why there’s so many people in Scotland is because of Westminster’s benefit policies.’” The time has come for the Scottish Government to match its progressive rhetoric with bold action. This article was originally published by The Scottish Fabians on 27 April 2018.
https://medium.com/@andrewhilland/scotland-needs-a-new-war-on-poverty-c247f6879737
['Andrew Hilland']
2020-12-16 04:28:39.681000+00:00
['Labour Party', 'Scotland', 'Human Rights', 'Poverty', 'Lyndon Johnson']
.NET Framework vs .NET Core: What is the Difference and the Future of .NET
Microsoft has been rapidly developing and releasing new technologies over the past years. When I started programming in 2015, .NET Core had just been announced, and the primary software framework from Microsoft was the .NET Framework. .NET Core came to the market as a modern solution to address the limitations of the traditional .NET Framework and has earned much popularity since it was released. Developing complex business applications, primarily for financial services companies, I have worked with both the .NET and .NET Core frameworks. Thus, in this article, I share my experience and focus on the comparison between the two .NET frameworks. I walk you through some .NET basics, give some examples from my daily projects, and point out what to keep an eye on. .NET Framework Ecosystem Before looking at the two frameworks, let’s start with a high-level picture of the .NET ecosystem. It has three major components: .NET Framework, .NET Core and Xamarin, as can be seen in the figure below. Xamarin is used for building cross-platform mobile applications. It targets iOS, Android or Windows Phone devices. It is not a subject of this article, because if the solution you develop is a mobile application, Xamarin is currently the only choice. .NET Framework is the first software framework introduced by Microsoft, back in 2000. It includes a runtime environment for creating Windows apps and web services and supports websites, services, and desktop apps. It has been in use for 20 years already, which proves its reliability, backed up by the many tools and libraries developed for it. .NET Core was introduced in 2016 as an open-source and cross-platform framework. It is used for building applications for all operating systems, including Windows, Mac, and Linux. .NET Core supports multiple languages — C#, Visual Basic, and F#. The focus when creating the framework was on high-performing and scalable systems. It is also optimized for developing microservices. Read the whole article to learn when to use .NET vs. .NET Core, as well as to discover more about the future of .NET — .NET 5.
https://medium.com/accedia/net-framework-vs-net-core-what-is-the-difference-and-the-future-of-net-e980c4f980c4
['Todor Vasilev']
2020-11-25 15:46:34.273000+00:00
['Net Core', 'Dotnet Framework', 'Dotnet', 'Dotnet Core']
My Minimalist Approach to Combat Consumerism
Three secrets to achieving financial goals (1) Saving money requires setting goals. I have been working for several years now. Because I am only required to pay for my own living expenses and don’t even need to pay rent, I actually have a pretty good life. If there is still a surplus of money in the account after bills are paid, I like to purchase new clothing, invest in a new mobile phone, or sometimes even buy a luxury bag worth $3,000 USD. Though I am not considered rich, I live my life comfortably. I came to understand that the reason I spent excess earnings was that I had not set a goal of saving money. If I had established a clear goal for saving money, I would have known that buying a luxury bag would lead me far away from my financial goals. In addition, if I had identified what dreams I could pursue with my saved money, it would have motivated me to live with less in the present. As a result, it’s important to set tangible goals when saving money.
https://byrslf.co/a-minimalist-approach-to-combat-consumerism-3782ce07cd66
['Yu-Ching Lin']
2020-12-24 15:14:30.369000+00:00
['Minimalism', 'Personal Growth', 'Consumerism', 'Financial Planning', 'Beyourself']
Regex Basics in Ruby
While learning Ruby, I occasionally run into problems where I need conditional logic to validate user inputs by making sure they meet certain requirements. I have created input fields for things like dates, as well as inputs for Command Line Interface programs where a user could enter a number to select from a list of options displayed in the terminal. As I went about testing the program, I figured there should be a way to ensure that the only acceptable input would be a number. My first attempt was to just check if the input was an integer with is_a?(Integer) input = STDIN.gets.chomp if input.is_a?(Integer) puts "Valid" puts input else puts "Invalid entry" end >> 4 #=> "4" #=> Invalid entry This did not work since inputs using gets.chomp are returned as strings. So my next strategy was to just convert the input to an integer and see what happened. input = STDIN.gets.chomp.to_i if input.is_a?(Integer) puts "Valid" puts input else puts "Invalid entry" end >> 4 #=> Valid >> Four #=> 0 #=> Valid It worked! Except now, if the user inputs non-integers, it still comes back as valid. This is happening because converting a string of non-numeric characters to an integer returns 0, which would still pass this test. “0” might still be an input I want available to the user, so I do not necessarily want to say “valid if greater than 0.” Rather than converting anything about the user input, I wanted to find a way to check the input string to confirm that it was a number and what that number was. input = STDIN.gets.chomp if input.match?(/\A\d+\z/) puts "Valid" puts input else puts "Invalid entry" end This crazy-looking bit of code is called regex. Short for Regular Expression, regex originated with mathematician Stephen Cole Kleene as a way to define a search pattern using a specific sequence of characters. It has been adapted over the decades into many Unix-based operating systems. It ends up being a very powerful way to find patterns in strings, which can be used to compare or extract information in that string. So here’s a breakdown of what is happening in the example above: ONLY_POSITIVE_DIGITS = /\A\d+\z/ / # every regex begins and ends with a forward slash \A # a backslash and capital A denotes the start of the string \d+ # a backslash and lowercase d matches any digit; the plus requires one or more \z # a backslash and lowercase z marks the end of the string / This regular expression can then be checked in Ruby with the match? method, which will return true if the regex conditions are all met. Ruby has several other regex-specific methods through the class Regexp that can be taken advantage of as well. What if I wanted to validate a user input for a telephone number? I would want to make sure that only digits were entered into the field, and that the correct number of digits were entered. I would also want to account for white space as well as parentheses and hyphens, so that the user can enter their phone number in several common ways: (555) 555-5555 (555)555-5555 5555555555 Here is one regular expression to validate phone number entries: VALID_PHONE_NUMBER = /\A\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\z/ / \A # start of the string \(? # the question mark checks for zero or one opening parenthesis \d{3} # checking for three digits \)? # checking for zero or one closing parenthesis [\s.-]? # checking for zero or one of white space, a period, or a hyphen \d{3} # checking for three digits again [\s.-]? # checking for zero or one of white space, a period, or a hyphen \d{4} # checking for four digits \z # end of the string / As you can see, these regular expressions can get very long and very difficult to read. Fortunately there are good resources available to help construct the regex you might be looking for. Rubular is a great editor that allows you to test out expressions and see if you get the conditions you want. Here are some of the most common symbols for creating your own regex: ^ — marks the start of a line. $ — marks the end of a line. [xyz] — checks if a single character matches x, y, or z. [a-z] — checks for any lowercase letter. \w — checks for any alphanumeric character and underscores. \W — checks for any non-alphanumeric characters. (a|b) — checks for either a or b. Some other common regular expressions that can be used for validation checks: VALID_IP_ADDRESS = /^((?:(?:^|\.)(?:\d|[1-9]\d|1\d{2}|2[0-4]\d|25[0-5])){4})$/ VALID_EMAIL = /\A[\w+\-.]+@[a-z\d\-]+(\.[a-z\d\-]+)*\.[a-z]+\z/i NO_NUMBERS_OR_SYMBOLS = /^[[:alpha:][:blank:]]+$/ There are also some option flags available to modify the conditions you might be looking for. They are appended after the closing slash of the expression (for example, /pattern/i). # Option flags i # case insensitivity x # ignore white space in the pattern m # make a dot match a newline In addition to checking to see if a string matches certain conditions, Ruby has other regex methods that can be used to scan for specific information out of a string, or split a string based on a regex condition. For example, if you had a string and you only wanted to return the numbers from it, you could use the scan method like so: def chapter_number(title) title.scan(/\d+/) end >> chapter_number("Chapter 5: Rad chapter") #=> ["5"] While scan returns an array of the matched substrings, methods like match return an object of the class MatchData. The MatchData object encapsulates all the matches found for the regex pattern; it can be converted into an array and iterated through for the results. Although they can be a bit cryptic and difficult to read, regular expressions can provide a powerful tool for pattern recognition, especially in cases where there are many conditions you would like to validate within a string.
https://medium.com/@dwisecar/regex-basics-in-ruby-e2ffa8b5fb01
['Dave Wisecarver']
2020-12-23 21:37:28.642000+00:00
['Regex', 'Ruby']
THE MOMENT YOU ALL HAVE BEEN WAITING FOR — CORD’S OWN DEFI PLATFORM!
THE MOMENT YOU ALL HAVE BEEN WAITING FOR — CORD’S OWN DEFI PLATFORM! [Screenshot: token pairs may differ from those shown] Friends, this moment marks the beginning of a very new and exciting chapter for our fellow CORDians, in a project that has already grown leaps and bounds, yet will grow to still more dizzying heights. The pools, the rewards: we got you, gentlemen! Let me brief you folks first about VACC, our new reward token; then we will talk about CORD’s new DeFi staking pools, which we have worked on and tested rigorously and which, importantly, were forked from a previously audited code base. First to VACC. Unlike other reward tokens, VACC is deflationary; yes, you read that right! VACC had an initial total supply of 1.9 million, or only 100:1 vs. CORD, which is already low, but by employing a “buyback and burn” mechanism to maintain an approximate 100:1 ratio (and thus protect CORD-VACC-LP holders from dreaded impermanent loss), the total supply can only drop further over time, and no new VACC can ever be minted. In fact, as of the time of writing this article, we have already burned 100 VACC in pre-launch tests! These can never be restored. Verify the current total supply of VACC any time (and see the correct contract address) at https://etherscan.io/token/0xb84E55c7e55F71b446b30FE40E79FF76f108edCd. In addition, the amount of new VACC tokens released from our escrow each week to fund the pools will drop steeply every single week, all of which helps to put positive pressure and a floor under VACC. But that is not the end of the tokenomics in place with the goal of protecting the VACC:CORD ratio! We have also designed our pool rewards to this same end. The first pool is the base pool, CORD -> VACC. This pool offers the lowest yield; however, it is the simplest pool to use and presents no risk of impermanent loss. Then we have the CORD-ETH-LP -> VACC pool, which will offer investors a higher APY/WPY (initially double the base pool), as it does require pairing your Ethereum with CORD. Finally, the CORD-VACC-LP -> VACC pool offers the highest APY/WPY (initially four times the base pool!). This gives investors something concrete to do with the VACC earned in other pools. And by the way, in all three pools, by staking higher amounts you can earn a 20, 50 or 100% multiplier on your rewards per token: threshold amounts for each pool are listed on the pool interface itself. Alongside offering CORD-VACC-LP stakers the highest APY/WPY as described in the section above, we strive to protect liquidity providers from suffering impermanent loss by keeping the CORD:VACC ratio pegged as closely as possible to 1:100. For more details about our tokenomics designed for this purpose, refer to our Medium article, THE ECONOMICS OF VACC AND THE WAY FORWARD! Finally, it bears mentioning that the tokenomics behind VACC have been carefully designed by our experienced and professional mathematicians and statisticians; this is something very few DeFi projects offer. The great minds behind this unique active buyback-and-burn mechanism, as we like to call it, will strive to maintain the value of VACC, and in turn protect the value of CORD and the overall project. By exercising our tools protecting against impermanent loss, investors may stake for the very highest rewards with greater peace of mind: the CORD:VACC ratio may bounce around a little, as any token pairing can, but if/when it gets too far out of whack, we will whack it right back with our unique toolkit designed for this purpose, and over time there will be fewer and fewer VACC in existence. [Screenshot: token pairs may differ from those shown] What might be coming in v2 of our pools? Sneak peek for you: besides optional liquidity locking for higher APY, there will also be a tiered VACC hodler’s bonus (extra reward multipliers just for hodling, without the need to “spend” your deflationary VACC) for all our DeFi pools including partner pools, and more, so stay tuned. So, with all of that said, what are you waiting for? GET STAKING, AND HODL THAT VACC!
https://medium.com/@cordfinance/the-moment-you-all-have-been-waiting-for-cords-own-defi-platform-1839b6ab896e
[]
2021-04-09 23:35:35.158000+00:00
['Cord', 'Yield Farming', 'Finance', 'Defi', 'Partnerships']
Held in Qatar, Aussie asks why our government seems reluctant to help him
Amber Schultz, Crikey
Australian grandfather Joe Sarlak was imprisoned in Qatar for nearly three years and is still stuck abroad with pending legal cases. He alleges the Australian government has done little to help, and his representative, human rights activist Radha Stirling, says the government has never contacted its diplomatic counterparts to campaign for his release. That is a stark contrast to the Department of Foreign Affairs and Trade’s (DFAT) efforts to secure the release of academic Kylie Moore-Gilbert, who was detained in Iran. Meanwhile, Foreign Affairs Minister Marise Payne has been criticised for not doing more to protest against the treatment of Australian airline passengers who were forcibly strip-searched at Doha airport in October. It raises the question: is the Australian government reluctant to challenge Qatar?
Where are the tensions?
Sarlak, 70, from the Sydney suburb of Auburn, was jailed in July 2016 after his Qatari business partner embezzled funds, causing the company’s cheques to bounce — a criminal offence in Qatar. His company was building aircraft hangars for the royal family’s airline. He said he didn’t have a lawyer at his trial and was forced to sign a confession in Arabic. Sarlak was eventually cleared and released from prison, only to be detained for not having the correct visa. He’s since been released again, but more legal cases have been filed against him, meaning he cannot leave the country. “There is a pattern of legal abuse against foreign nationals by members of the ruling elite,” Stirling has said. “Australia categorically needs to warn citizens that entrepreneurs, businessmen and investors who consider Qatar a safe commerce venue should consider they too could end up like Joe Sarlak.” DFAT told Crikey the Australian government had been providing consular assistance to Sarlak since 2016 but wouldn’t comment further due to privacy obligations. The Qatar embassy in Canberra told Crikey Sarlak “had been sentenced in several cases” and the Australian embassy had been notified, but it wouldn’t elaborate. Tensions between Australia and Qatar were heightened in October when a premature baby was abandoned at Doha airport. Planes were grounded and women — including 18 Australians — were forced into ambulances on the runway to be strip-searched for signs of having given birth. Qatar initially defended the actions of its officials, but apologised and said it would prosecute officers for violating procedure standards after an international investigation and a formal complaint from Australia. Although Payne called the incident “grossly, grossly, disturbing, offensive”, Labor leader Anthony Albanese criticised Prime Minister Scott Morrison and Payne for not immediately speaking to their Qatari counterparts.
What’s at stake?
Qatar is Australia’s second-largest two-way trading partner in the Middle East and North Africa region. Before COVID-19 hit, 3000 Australians lived there and 40,000 visited annually. After the airport incident, Qatar shut down a $300 million lamb trade deal with Australia — although Trade Minister Simon Birmingham said he didn’t think the two incidents were related. Australia is also relying on Qatar Airways to bring stranded nationals home during the pandemic.
Australia and Qatar’s relationship is friendly and our trade relationship is important, former director of the Centre for Arab and Islamic Studies at the Australian National University Amin Saikal told Crikey. “Many Australian expats work within Qatar, and many Qatari expats have considerable investments in Australia,” he said. Although Australia raised concerns about the airline passengers’ treatment, Saikal said he thought the government had been “conscious not to engage in megaphone diplomacy with Qatar and jeopardise the relationship”.
Should we be doing more?
Fatima Yazbek from the Gulf Institute for Democracy and Human Rights told Crikey she’s not sure what more can be done to secure Sarlak’s release. “Usually Australian authorities know that they cannot affect the Gulf states’ decisions much because of the nature of the governments there,” she said. “The local laws in the Gulf are issued to serve the governments, not to facilitate the people’s lives. And usually the royal families control everything … there are huge concerns regarding the legal system and fair trials in the Gulf states, including Qatar.” On the airport incident, however, Yazbek says Australia could have done more: “Those women’s rights were violated with no explanation … the safety, well-being, and dignity of any individual should be a priority of their government, and that should be protected by all means in any place.” But she warns tensions could already be rising: “Having the Australian government speaking publicly, condemning the act, and asking for explanations, ahead of Qatar’s hosting the World Cup 2022, is something Qatari authorities won’t accept easily.”
www.gulfinjustice.news www.radhastirling.com www.detainedindoha.org www.detainedindubai.org
https://medium.com/@radhastirling-90634/held-in-qatar-aussie-asks-why-our-government-seems-reluctant-to-help-him-83cc57a63243
['Radha Stirling']
2020-12-20 13:39:22.682000+00:00
['Radha Stirling', 'Joseph Sarlak', 'Australia', 'Qatar', 'Detained In Doha']
Understanding Body Dysmorphic Disorder: What it is and How to Overcome it
We all, at times, have been guilty of staring at ourselves for too long in the mirror: picking at what “could be” and “should be,” wishing for something different than what we’ve been given, and treating our flaws as the most important thing about us. There is a distinct difference, however, between the phenomenon of not liking everything about ourselves and the obsessive pattern of believing we are damaged. That difference is Body Dysmorphic Disorder (BDD), a mental illness that affects nearly 1 in 50 people in the United States, according to the Anxiety and Depression Association of America. In my newly released book, Good Enough: Believing Beautiful through Trauma, through Life, through Disorder, I talk about BDD and how it manifested in my life. The excerpt below is an example:
I was standing in my room trying on dresses for formal one day, and my friend Hannah was over. As I changed into dress after dress, she asked me how I’d lost so much weight. She was asking for her own benefit, and I did the thing I always did and played it cool as I listed off a few of the “natural” ways I had lost weight. “I just run a lot, eat super healthy, and don’t eat two hours before bed,” I’d say. My mom came in the room shortly after to see what we were up to and to hand over a couple more dresses I might consider wearing. It was the first time my mom had seen me undressed for a while, and I remember her eyes being planted on my body as she voiced out loud that I was “too skinny.” As I pulled up one of the dresses, she told me she could see my ribs and collarbone coming through my skin. Hannah added how she never thought I needed to lose weight in the first place. I stood in front of my full-length mirror in a small pink dress, my mom behind me and Hannah to my side. I looked in the mirror, and for a second it was just me and my body. And I wanted to feel sorry for it. I wanted to see what both of them saw, but I couldn’t. The more I stared, the more flaws I found. “Too skinny?” I thought. “What is she talking about?” All I saw in that mirror was a fat, useless body which did nothing more than bring me trouble. My monsters encouraged and validated these thoughts. “There’s still weight to lose.” “You aren’t done yet.” “You can’t trust people, not even the ones who claim to love you.” I got dressed, ashamed for even thinking I might be able to escape judgment by changing in front of two of the people closest to me.
Mental Health America defines BDD as the intense preoccupation with flaws or defects that may or may not be apparent to others. It’s staring in the mirror and body checking for hours at a time (3–8 hours a day on average) and experiencing high levels of anxiety from the shame and fear of being unwanted or undesirable. What might surprise some is that BDD occurs slightly more in men (2.5%) than it does in women (2.2%). In both genders, environmental and genetic factors could be the driving force. Doctor of Nursing Practice Amanda Perkins writes in an article that improper serotonin function in the brain, as well as personality traits, abuse, and trauma, could also impact the onset of BDD. She connects BDD to the drive for perfection: “EVERYTHING AROUND US focuses on beauty, from commercials to magazines, social media to movies. Already beautiful models are airbrushed to make them look ‘perfect’ in a way that is unattainable. People can easily apply filters to their selfies, removing even the slightest imperfections.
In this way, our society reinforces the need to be beautiful.” On the rise is a form of Body Dysmorphic Disorder, termed Snapchat Dysmorphia, where individuals are asking plastic surgeons to make them look like the filter they’ve used on the smartphone app Snapchat. I mean, remember what it was like to just take a photo? When you couldn’t instantly apply a filter, send it to a thread of people, or post it on social media? Now, you add in apps like Snapchat that reinforce this need for beauty through unrealistic images of what you could look like if only your nose was smaller, your chin was rounder, your skin was softer, your lips fuller. Writing about BDD now, I think back to the younger me who would take selfie after selfie, trying to get the perfect angle so that what I perceived as “flawed” could be hidden. I think about, more than once, googling plastic surgery options to make my vagina look normal because I thought mine was deformed. I think about the time I stared at myself for so long in the mirror, I imagined one of my calves larger than the other. I think about days I cancelled plans with friends because I was so preoccupied by my flaws and couldn’t control my obsessive thoughts. I’d spend hours in the gym or outside running to try and change my appearance, and then I’d spend even more time examining every inch of my body after. It was never good enough. These are all symptoms of BDD, and that’s only scratching the surface. Mental Health America says individuals who battle this disorder might also pick at their skin, pull their hair, bite their nails or cheeks, constantly seek reassurance from those around them, spend an insane amount of money on beauty products, camouflage parts of their body with clothes or makeup, stop doing activities they once enjoyed, constantly compare themselves to others, and even go as far as to contemplate suicide. They explain that because of the serotonin impairment certain BDD victims experience, selective serotonin reuptake inhibitors (SSRIs) may be a successful way to medicate obsessive thoughts and compulsions. Cognitive-behavioral therapy (CBT) could also be a useful way to filter negative thoughts, get to the root of the issue of the dysmorphia, and create healthier patterns of living. Personally, I have found getting rid of my full-length mirror to be the most productive way to recover from BDD, along with building myself up with affirmations and recovering from an eating disorder I fought for years of my adolescence and adulthood. And I’m here to tell you this: recovery is possible. You might be reading this while thinking, “Wow. This all sounds pretty intense. I’m not that bad. That’s not me.” But I see you. And I see you because I was you — in denial and afraid of how I would be viewed if I were to say I had a disorder. I get it. The words “disorder” and “mental illness” are so unattractive, and these behaviors are keeping me safe. I can’t stop now. But you can stop now. I know the monsters in your head are telling you otherwise, yet they make promises they can’t keep, and once you’re no longer under their wrath, you will see this for yourself. You will find true freedom, identity, and belonging no longer tied to what you look like. 
You will be able to look in the mirror and say, “I’m not perfect and that’s okay.”
Here are 5 quick ways, in addition to the ones mentioned earlier, in which you might free yourself from BDD:
Stop weighing yourself and get rid of the scale
Set a time limit for how long you get ready and stand in front of the mirror
Set a time limit for how long you spend on social media
Normalize talking about negative thought patterns by opening up about your struggle to a close family member or friend — you might also consider talking to a therapist
Every day, write down in a journal 3 things you love about yourself that don’t involve your appearance
If you want more ideas on how to overcome Body Dysmorphic Disorder or similar issues, I invite you to read my book, Good Enough: Believing Beautiful through Trauma, through Life, through Disorder. In it, I will show you how to cultivate a healthy relationship with who you are (on the inside and outside) through vulnerable accounts of my own life. You will lose false identities and lies, and you will gain self-love and narratives of truth. After all, “It doesn’t matter how many times someone looks you in the eyes and tells you you’re beautiful, you have to believe it — and if you don’t — you have to keep fighting.” You can connect with me on Instagram or Facebook, @believingbeautiful, for weekly posts including all things eating disorder recovery, recipes, and inspiration. In addition, you can find more in-depth content involving these topics on my website. Send me a direct message or use the comment box on my website to send over any questions you may have regarding Body Dysmorphic Disorder or related topics. I can’t wait to meet you!
https://medium.com/@newbergcarly/understanding-body-dysmorphic-disorder-f6b787eadeee
['Carly Newberg']
2020-12-23 14:43:01.277000+00:00
['Body Dysmorphic Disorder', 'Self Acceptance', 'Eating Disorders', 'Body Image', 'Self Love']
How I Reduced Screen Time From 7 hr to 2 hr a day.
Photo by Lukas Blazek on Unsplash
So one fine evening I was scrolling through YouTube and I saw this video where a person had quit social media for 1 straight year, even though he had a few thousand followers on Insta. Despite having that many followers, he was wasting a lot of time scrolling through the app for hours, and this made him decide to quit social media. Now after watching that video, I decided to check how much time I was spending on my smartphone, and it was shocking to see that I was using it around 6 to 7 hours daily; there were even days when I used it more than 8 hours. As I am in the final year of graduation I have online classes to attend, but these stats don’t include the time spent on online classes, because I was not attending them at all and was wasting my time in many different ways ;) This made me realize how much time I was wasting on this virtual stuff, which mostly doesn’t make any sense (not all of it) or any difference. Even though most of the time I use my phone for gaining knowledge, it only lasts for a few days, and then I’ll be back on track wasting my time. So this time I decided to take some serious action to reduce my screen time and focus on things that are important and that will help me in the long run.
Planning Phase
I had decided to reduce screen time, but I had taken n number of decisions in the past that didn’t last long. So this time I needed a plan, and I made one. According to the plan, I should only use my phone for around 2 hours/day and spend the maximum of my time doing other things that are important to my personal and professional life (if I have one). The plan contains the following things:
Mute Notifications For Apps That Are Not Important.
Download Screen StopWatch (it is a wallpaper app that shows how much time you have consumed).
Consume data that will help you personally or your work.
Start a writing/reading habit (optional).
This is the design of my plan, and I decided to follow these rules strictly to help myself. Having a plan itself is a bonus point because it will help you do the task in a well-structured manner without hopping between different things.
Implementation Phase
No plan will ever reach its goal without proper implementation. By now everything was ready, from my mindset to the plan. Let’s see how the week went, day by day.
Day 1
I was waiting for this day to start and finally, it happened. On the first day, I used my phone for around 2 hours as per the plan, and I felt very happy for myself. Day 1 — Completed (Happy)
Day 2
After completing day 1, I was ready for day 2. And day 2 was no different from day 1; everything went on smoothly and I reduced screen time as much as possible. Both days were quite similar in terms of how I spent them. Day 2 — Completed (Super Happy)
Day 3
This time, I used more than on day 1 and day 2, but it was just 10–20 min above 2 hrs. On the first two days, I had managed to use less than 2 hrs, so with the three days combined, the average still boiled down to about 2 hrs. Day 3 — Completed (Satisfied)
Day 4
On day 4 I was shocked to see how much I had reduced my screen time. It was just a couple of minutes less than an hour. I not only reduced the screen time but was also doing things that are important. Day 4 — Completed (Very Very Happy)
Day 5
Day 5 was similar to Day 3, and the screen time was slightly above 2 hrs. By this time I was habituated to the plan and was not checking the phone constantly. Day 5 — Completed (Satisfied)
Day 6
Day 6 was kind of a black day for me; I used my phone for more than 4 hrs.
I had an exam on day 6, so I needed to check my phone constantly. OK, I shouldn’t have done that, but it eventually happened. Instead of giving excuses, I accepted that on this day I used a lot more than required. No path to a great thing is a cakewalk. So it happens, and we should learn from those slip-ups and not repeat them. And it was not all over, because the average was still just above 2 hrs. Day 6 — Completed (Not So Happy)
Day 7
After spending more than 4 hrs the previous day, I wanted to bring the average down and reduce the screen time as much as possible. And finally, I used less than 2 hrs on the final day of the week, and the average also reduced a bit. Day 7 — Completed (Felt Very Happy)
Result
Image From Gallery
As you can see, the average was 2 hrs 32 min from 19th Dec to 25th Dec. From 6 or 7 hrs/day to a daily average of around 2.5 hrs over the week, it was a short journey where I learned how much time I was wasting and how much time I can utilize in a day. These are the changes I observed in these 7 days:
I spent more time with my family.
Was away from all the internet bullshit.
I was more productive than earlier.
I realized how much time I have in a day and how I can use it.
Increased my confidence levels.
Organizing time properly.
Spending time with myself.
Was It Easy?
HECK NO! Even though it seems very easy to tell, it was tough to make it happen. During the first two days, I used to unlock my phone constantly to check for any updates. But all the notifications were off, and when I unlocked my phone, the wallpaper showed me how much time I had spent. So this led me to lock my phone and keep it aside. After two days, the urge to check my phone reduced, and I would spend hours without my phone. After one week, I still want to continue this streak, and if possible I will try to reduce my screen time even more.
Key Takeaways
You have a lot of time in your hands; you just need to prioritize things.
Having a goal is good, but having a plan to achieve it is better. So have a plan.
You will not be in this world forever, so spend time doing things that are important.
Less bullshit means you have more space for good things.
If you think you will miss everything if you don’t use your phone, just think again.
This is my first post on Medium, so ignore my mistakes, if any. Thank you for your time. Have a great day. Keep smiling :)
https://medium.com/@sujithkumar8897/how-i-reduced-screen-time-from-7-hr-to-2-hr-a-day-9666a3f3d043
['Sujith Dusa']
2020-12-26 04:03:41.646000+00:00
['Lifestyle', 'Planning', 'Life', 'Time Management', 'Productivity']
I Accidentally Cut Off Toxic Friends, and Now I Do It All the Time
Image by Gerd Altmann from Pixabay
At the dinner table, my mom brought up an old friend’s experience in middle school. When I had plans to hang out with other friends, this girl would throw a fit. She screamed and cried until she was invited or until I ditched them to spend time with her. She took advantage of my friendliness. We got in fights and argued. She would call me names, and I would fire them right back at her. She influenced and encouraged rebellious acts, and I went along with it because it was exciting. Yet, at the same time, I felt a consistent urge to keep up with her out of fear of being left out. Then, she multiplied, and soon I had a group of friends, each like her in many different ways. In some cases, the others turned out to be detrimental to my mental health. Our group of best friends was together for over 8 years, until things changed when I left for college across the country. I accidentally cut off my toxic friends by moving, but now I do it all the time — and here is how:
https://medium.com/@jesstt/i-accidentally-cut-off-toxic-friends-and-now-i-do-it-all-the-time-2e4439ab5ce
['Jess T']
2020-12-26 19:02:19.052000+00:00
['Self Improvement', 'Relationships', 'Self Love', 'Trust', 'Friendship']
CORD.Finance locked 99% of CORD-BNB PancakeSwap V2 liquidity for 3 years on UniCrypt
cordfinance · Apr 1
99% locked CORD-BNB liquidity on PancakeSwap V2
CORD.Finance locked 99% of CORD-BNB PancakeSwap V2 liquidity for 3 years on UniCrypt, to unlock May 31, 2024. CORD.Finance is a serious long-term project and this proves it. Feel secure investing in CORD. Confirm our locked liquidity here: https://app.unicrypt.network/amm/pancake-v2/pair/0xD173c473e79B0EEd31653c6F076a8A07789F0733
Contract for CORD on Binance Smart Chain (BSC): 0xa3506A4f978862A296b29816C4e65cf1a6F54AAA
Contract for VACC on Binance Smart Chain (BSC): 0xb01228C32F85db30b8F9fc59256B40C716b0E891
To buy or trade CORD-BNB on PancakeSwap V2, enter the BSC contract address of CORD in one of the PancakeSwap V2 exchange token fields and select BNB in the other field. https://exchange.pancakeswap.finance/#/swap
To buy or trade CORD-Sushi on SushiSwap, enter the BSC contract address of CORD in one of the SushiSwap exchange token fields and select Sushi in the other field. https://app.sushi.com/swap
To buy or trade CORD-VACC on PancakeSwap V2, enter the BSC contract addresses of CORD and VACC in the PancakeSwap V2 exchange token fields. https://exchange.pancakeswap.finance/#/swap
The new vaults on Binance Smart Chain are located at https://pool.cord.finance
For detailed tokenomics of CORD and VACC and the CORD.Finance project as a whole, visit https://cord.finance (and our Medium channel https://cordfinance.medium.com/)
Stay tuned, fellow CORDians, for big things in 2021 and beyond. The sky is truly the limit, or, perhaps more accurately, the universe. - The CORD.Finance Team
https://medium.com/@cordfinance/cord-finance-locked-80-of-cord-bnb-liquidity-for-3-years-2-months-dfb24676e0d9
[]
2021-09-09 14:43:50.585000+00:00
['Cord', 'Yield Farming', 'Finance', 'Defi', 'Binance Smart Chain']
7 Bad Money Habits To Break In Your 20s
Learn how to break these 7 bad money habits in your 20s, so you can be successful with your finances while you are still young. Image by 1820796 from Pixabay
I understand. When we’re in our early 20s, most of us aren’t worried at all about our finances. We certainly don’t feel we have any bad money habits to break. We’re excited either to be out of college starting our new careers, or to still be hunting for that new career while hoping to soon move out of mom and dad’s house. Either way, this is the time to learn all you can about your finances: how to be successful with them, and how not to start any bad habits that could carry over with you into your 30s or even 40s and beyond. Let’s get started.
Bad Money Habit 1 > Not Having a Budget
If you are not tracking all of your income and expenses, you are setting yourself up for failure. There’s really no excuse for this. When you plan ahead and set aside money for future spending, you will inevitably end up keeping more money and saving more money. Do this by creating a budget. Use the right tools to help you succeed financially. You can download my FREE Budget Planner, which you can save on your device and edit to your personal needs. Throughout each month, simply add your income and expenses so you can stay on top of your finances. In addition to the Budget Planner, getting a free app like Personal Capital or Empower will make it easy to keep track of all your assets and liabilities, income and expenses, and even your net worth. These two apps are fantastic. I’d suggest checking each of them out and using whichever one works for your individual tastes. After a few months of diligently following your budget, you will be able to find ways to lower or even cut unnecessary expenses.
Bad Money Habit 2 > Carrying Credit Card Debt
Carrying credit card debt month over month is one of the most harmful things you can do to your financial success. If you are doing this, stop now. Do everything you can to get this revolving bad habit out of your life and eliminate all credit card debt. This should be priority one. I’m not saying that using credit cards is evil. They definitely have their perks if used correctly (paying them off every month to a zero balance). There are plenty of reward points, travel discounts, and cashback incentives to be had. But if you are not disciplined enough to pay off your credit card balance each and every month, you are doing more harm than good. If this is you, just do not use a credit card, ever. The revolving interest charges on your credit card debt will have a snowball effect, making your debt bigger and bigger. Eventually, you will get rolled over and be unable to get out from underneath it.
Bad Money Habit 3 > Impulse Spending
One of the worst habits young (or old) adults can have is impulse spending: seeing something that you weren’t even looking to buy, but purchasing it anyway, simply because it caught your eye and now you just have to have it. If something is not a definite NEED (meaning your life cannot continue without it right now), then do not purchase it. To help discipline yourself, try this: give yourself at least 3 days to contemplate whether you should purchase a WANT. I do this all the time, and I raised my kids to do this. It’s simple, it creates spending discipline, and it works. During those 3 days, weigh out the positives and negatives of making the purchase. Review the expense against your budget and see if you can ultimately afford it.
And if you can buy it without going into debt, then maybe you can go back and purchase it. Realize this: impulse spending usually goes hand-in-hand with credit card debt. If you can break the bad habit of impulse spending, you are well on your way to controlling the bad habit of using your credit card too much.
Bad Money Habit 4 > Not Improving Your Credit
I often hear of people being told not to worry about their credit score: just purchase everything with cash and you’ll never need to worry about how good your credit is. Well, the fact is, the overwhelming majority of people out there are NOT able to simply purchase everything they own with cash. It sounds great but is entirely unrealistic for everyone and for everything you purchase. Sure, a somewhat small extra purchase in a given month should be made with cash you actually have from your paycheck. But understand something: having a good, or great, credit score is helpful in many ways. If you have a low credit score, you will get charged higher interest rates on everything you need a loan for. You will be charged a higher insurance premium, whether it’s for your car or for life insurance. You will have difficulty getting approved for an apartment, and you might even have to pay a higher security deposit. Your mortgage interest rate will definitely be higher. Having a low credit score could even get you denied for employment. Your bad credit score will ultimately cause you to pay a premium on everything else. How are your finances supposed to improve if you are not doing everything you can to improve your credit? So, do everything you can to improve your credit score. Following these steps will help.
Bad Money Habit 5 > Having Only One Source of Income
It’s understandable when you’re in your 20s that you are trying to solidify your newfound career. Having a primary income is normal and what most of us do. But having a second income, or multiple streams of income, is something you should strive for. This does not necessarily mean going out and getting a part-time job that eats up all of your extra time while paying you minimum wage and not really helping your financial stability at all. I’m talking about creating income streams that are either passive or can become passive with a little bit of effort. Passive income is income you are earning even while you sleep. It’s the best type of income for obvious reasons. And the more passive streams of income you have, the better. The fact is, in 2020 most of the wealthiest people have at least 7 different streams of income. If it’s working for them, why would you not try to do the same? Some great ways to generate passive income include the following:
Starting a Blog — it won’t bring in money right away, but slowly a blog can grow into a perfect way to earn passive income, usually in the form of Affiliate Marketing or selling Digital Products and/or Courses, among other things
Investing in Dividend Stocks — dividend stocks are investments that usually pay out quarterly, providing a passive income stream you can set and forget
Real Estate Investing — many types of real estate investments are passive, like owning rental property. Your tenant pays your mortgage and provides you additional income every month on top of that
Peer-to-Peer Lending — you become a lender and earn interest on the money you lend out, passively
Some other sources of additional income can include various online ideas and even some odd ideas. The point is, don’t get stuck with only one source of income. Financial independence is earned through hard work done in a smart way, and a single income stream rarely gets you there. Change this bad habit.
Bad Money Habit 6 > Not Saving Everywhere Possible
When you go shopping at the grocery store or a department store, aren’t you naturally drawn to the ‘Sales’ signs like I am? I’m constantly trying to find what is 30, 40, or 50% OFF today, or even what’s the Buy One Get One FREE deal. I think we do this naturally. And it’s easy when you are physically at a store. The signs are everywhere. But what about when you are shopping online, as so many of us do nowadays? Are you still finding these deals and saving everywhere possible? If you aren’t, then this is another bad habit to break, now. A couple of very easy (and free) ways to shop online and save money on almost every transaction are the very common browser extensions Rakuten and Honey. I use both, and depending on which one gives me the better deal at the time, that’s the one I go with. Simply put, adding the Rakuten and Honey extensions allows you to automatically see when you get a discount on a particular website. Each one shows you the discount, applies a discount code when needed, and provides you money back at the conclusion of your purchase. So, in essence, you are saving money on your purchase, in the form of a percentage-off discount, and then you are earning money after your purchase on top of that. It’s a complete win-win and a definite no-brainer. If you are not saving everywhere possible like this, then get started now. You’ll be happy you did.
Bad Money Habit 7 > Not Starting Investments
Finally, a very common bad habit when you are in your 20s is to put off starting your investments. Whether it’s for retirement or for a stream of passive income, investing as early in life as possible pays off hugely later in life. Once you have established your budget and taken care of some of the first steps toward financial success, like getting out of debt and building an emergency fund, your next step is to start investing.
Retirement Account
The easiest ways are either through your employer-matched 401k or a Roth IRA. If you are in your early 20s and you start investing only $100 per month, you could reach the normal retirement age of 65 with over $500,000, using the historic rate of return of 10% (a quick sanity check of that math is sketched below). Of course, adding even more per month compounds that figure much more. The point is, start early and continue to add to this investment.
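As a rough sanity check of that $100-per-month claim, here is a minimal Python sketch. It assumes monthly contributions compounding monthly at a 10% nominal annual rate from age 22 to 65; actual market returns will vary, so treat the number as illustrative rather than a guarantee.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    # Standard future value of an ordinary annuity: monthly deposits, monthly compounding.
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

print(round(future_value(100, 0.10, 43)))  # ~856,000 -- comfortably over $500,000

Even with annual rather than monthly compounding, the figure stays above $700,000, so under these assumptions the “over $500,000” claim is, if anything, conservative.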
Make this investment an automatic one. Set it and forget it. Your money will compound every year, and you won’t even need to be concerned about it.
Real Estate Investing
Other ways to invest include real estate. I’ve done fix-and-flip deals, I’ve owned rental properties, and I’ve bought and sold real estate, all while still in my 20s. Not only can it be done, but it is done often by a lot of people in their 20s. You can do it too! One of the greatest paths to wealth creation is through real estate. And although you need to take a bit more of an active role during a real estate transaction, it can still be a way to generate passive income, especially when you own rental properties.
Stocks, Bonds, & Options
Another way I suggest getting involved with investing is through stocks, bonds, and options. And opening a brokerage account has never been easier. My favorite method is through the Robinhood phone app. It’s free to download, free to open an account, free to learn all you need to learn, and free to make all of your trades! On top of everything being free with Robinhood, they start off by giving you a free stock! And when you recommend them to a friend, your friend will get a free stock and you will get another free stock! There really is no downside to at least trying it. Obviously, learning all you can about trading in stocks or options is extremely important. You can easily lose a lot of money if you don’t invest properly. Personally, I recommend sticking with the more commonly held stocks that make up some of the various index funds. But at the end of the day, getting involved in trading stocks, bonds, and options can be a great way to add to your wealth. And like I said before, start young and stay consistent. Your investment accounts will grow over time, and you will stay well ahead of the game financially if you do.
Final Thoughts on 7 Bad Money Habits to Break in Your 20s
Over the years I have met many people who have had all 7 of these bad money habits well into their 40s. Some have faced foreclosures, some bankruptcies. Some have had their cars repossessed, and others have simply had no form of retirement income and had to continue working well into their 70s. This is crazy! Don’t let this be you. Change the course of your financial success while you’re young. Get started today. What bad money habits do you need to break? What are you doing today to improve your financial success?
https://medium.com/@forlifeandcents/7-bad-money-habits-to-break-in-your-20s-3101c376517e
['Bill Kirkpatrick']
2020-12-17 11:13:12.415000+00:00
['Saving Money', 'Financial Freedom', 'Budgeting', 'Money Management', 'Personal Finance Tips']
How China used Artificial Intelligence to combat Covid-19
2020 will go down in the history books as the year that witnessed a one-of-its-kind global crisis due to the Covid-19 pandemic. Of course, Covid-19 is not the first pandemic on a worldwide scale. There have been plenty of such outbreaks in recorded history that affected different parts of the globe. But Covid-19 stands apart due to today’s high volume of international travel. It spread worldwide in no time, resulting in complete lockdowns in most countries. At the time of writing this article, there have been 39 million Covid-19 cases globally and 1.1 million deaths within just 7–8 months.
Johns Hopkins Covid-19 resource center (Source)
Covid-19 is also different from earlier pandemics because it is the first time government agencies and health organizations worldwide are using the emerging technologies of Big Data and Artificial Intelligence to combat the disease. AI has always been portrayed as a technology with the potential to change the world we live in, and this pandemic was a litmus test for AI to prove its promise. A fascinating case study of the use of AI to fight Covid-19 comes from China, which was the source of this virus. China had an initial surge of Covid-19 cases, but it managed to control the spread within a few months, while the rest of the world is still struggling with growing cases. If you observe the graph below, you will see a flat line for China, whereas, to give some perspective, the USA and India, which also have large populations, are still seeing an exponential rise in cases.
Covid-19 cases in China vs. India vs. the USA (source)
Had Covid-19 or a similar pandemic happened ten years ago, the graph would have shown different results for China. Here is my previous article that supports this theory.
What did China do differently from other countries to combat Covid-19?
Unlike other countries, China relentlessly made use of AI-enabled technologies in every possible way to control the spread. Its artificial intelligence efforts focused first on mass surveillance, to prevent the spread, and second on healthcare, to provide fast diagnosis and effective treatment. This should not come as a surprise, because China is already one of the leading markets for artificial intelligence globally. As per one report, China’s AI market is forecast to reach 11.9 billion USD by 2023. So let’s take a close look at the various measures China took with artificial intelligence.
1. Mass Surveillance & Contact Tracing
China is known to employ mass surveillance on its citizens without thinking twice about people’s data privacy. It has an estimated 200 million surveillance cameras powered by AI-based facial recognition technology to closely track its citizens. Such extensive use of AI for controlling citizens has always attracted global criticism of China.
China’s existing Mass Surveillance (source)
But when Covid-19 hit China, this already established mass surveillance system proved to be very efficient, since the government could use it to track patients’ travel histories and predict which other people who had come in contact with a patient might be at risk.
Contact Tracing App (Source)
China not only gathered people’s tracking data, but also used this information to alert people to potential Covid-19 risk with the help of contact tracing mobile apps designed with companies like Alibaba and Tencent. The app assigns each user a color code based on their risk profile: people with no risk are assigned green, whereas people with relevant travel history or close proximity to other patients are given yellow or red based on the severity of the risk. Yellow indicates self-quarantine, and people with a red code are required to go to the hospital.
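The decision logic described above amounts to a simple traffic-light rule. Here is a hypothetical Python sketch of that logic; the real apps’ exact rules and data inputs have not been published in detail, so the input signals below are assumptions for illustration.

def health_code(confirmed_or_symptomatic: bool,
                visited_hotspot: bool,
                contact_with_patient: bool) -> str:
    # Mirrors the description above: red -> hospital, yellow -> self-quarantine,
    # green -> free movement. The most severe signal wins.
    if confirmed_or_symptomatic:
        return "red"
    if visited_hotspot or contact_with_patient:
        return "yellow"
    return "green"

print(health_code(False, True, False))  # -> "yellow": self-quarantine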
China’s Health Code (Source)
This health code has now become the benchmark for China to allow its citizens to use public places and services. Health code scanners are installed in public places like offices, subways, railway stations, and airports to screen out people with a yellow or red code. China has also imposed rules that allow people to drive on the roads only if their health code is green.
Man Scanning Health Code in Subway (Source)
Another very useful app for Chinese people was Baidu Map, which gave real-time information on high-risk places so that people could stay away from those regions. It used GPS location data and medical data from health agencies to inform users of their exact distance from Covid-19 hotspots, so they could avoid them while traveling.
Baidu Map showing Covid-19 Hotspots (Source)
Wearing a mask has become the norm in 2020: it is essential for one’s own safety and prevents you from infecting others. China’s AI companies like Baidu, Megvii, SenseTime, and Hanwang Technologies helped the government put up facial recognition surveillance capable of recognizing people with or without masks. The system immediately raises a security alert if it detects a person not wearing a mask. These systems are also equipped with thermal scanners to raise alerts for people with high temperatures in public areas. Baidu’s surveillance system at Beijing Qinghe Railway Station detected 190 suspected cases within a month of its installation in late January.
Facial recognition with a thermal scan at a Chinese railway station (Source)
The most comprehensive data source for Chinese government agencies and healthcare technology companies comes from the mobile app WeChat. Chinese tech giant Tencent developed this app, which now has around 1.2 billion users, and Tencent is Asia’s most valuable company, with a market capitalization of $300 billion. WeChat was one of the primary sources behind all Covid-19 contact tracing, and when it was combined with mass surveillance data, it made contact tracing an easy task. “WeChat and mass surveillance together provided many grounds for Computer Vision (CV) and Natural Language Processing (NLP) experts to build a paradise of revolutionary Contact Tracing applications.”
2. Healthcare Services
The major challenge healthcare workers faced was the influx of Covid-19 cases that started coming in for diagnosis in the early days in China. When lung CT scans became a criterion for initial diagnosis ahead of PCR test confirmation, it was a nightmare for radiologists, who had to manually go through thousands of scans to confirm diagnoses. Quick diagnosis and early medication or quarantine were essential to hinder the spread of Covid-19, but the diagnosis process itself became a bottleneck. Soon, Chinese AI companies like Alibaba and Yitu Technology stepped in with AI-assisted diagnosis of CT scan images to automate the process with minimal radiologist intervention.
AI-assisted CT scan diagnosis for Covid-19 (Source)
These systems were built using deep learning and proved to be fast and accurate.
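To give a sense of what such a system looks like under the hood, here is a deliberately minimal sketch of a deep-learning CT-slice classifier in PyTorch. This is not Alibaba’s or Yitu’s actual model: the architecture, input size, and class labels are assumptions, and a real system would be trained on large sets of labeled scans rather than run with random weights.

import torch
import torch.nn as nn

class CTClassifier(nn.Module):
    # Tiny stand-in for the much deeper networks used in production systems.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, 2)  # 2 classes: normal vs. suspected

    def forward(self, x):  # x: (batch, 1, 224, 224) grayscale CT slices
        return self.head(self.features(x).flatten(1))

model = CTClassifier().eval()
ct_slice = torch.randn(1, 1, 224, 224)  # stand-in for a preprocessed CT slice
with torch.no_grad():
    print(torch.softmax(model(ct_slice), dim=1))  # class probabilities (untrained)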
The use of artificial intelligence to evaluate CT scans was a big turning point for China, as it sped up diagnosis. Alibaba’s diagnostic system could provide a Covid-19 diagnosis within 20 seconds, with 99.6% accuracy. By March 2020, over 170 Chinese hospitals had adopted this system, covering 340,000 potential patients. Tencent’s AI Lab, meanwhile, worked with Chinese healthcare scientists to develop a deep learning model that can predict critical, potentially fatal illness in Covid-19 patients; they made this tool available online so that high-risk patients can be given high-priority treatment well in advance.
Covid-19 is a novel virus, meaning nothing was known about it by medical researchers when it surfaced. As soon as it came to light, researchers worldwide started to study the virus’s genes to create a diagnosis process, which could also open the gates for vaccination. But such scientific research is not easy and requires extensive resources. Both Alibaba and Baidu have now made their proprietary AI algorithms available to the medical fraternity to speed up the research and diagnosis process. Alibaba’s LinearFold AI algorithm can reduce the time needed to study the coronavirus RNA structure from 55 minutes to just 27 seconds, which is useful for fast genome testing. Similarly, Baidu’s open-sourced AI algorithm is 120 times faster than traditional approaches to genome studies.
In such a crisis, drones are often useful for providing supplies or carrying out surveillance. But this time, Chinese autonomous vehicle and robotics companies like Baidu, Neolix, and Idriverplus also joined the mission, lending out their self-driving vehicles to supply medical equipment and food to hospitals.
Autonomous Vehicle disinfecting public place (Source)
Idriverplus autonomous vehicles were also used to spray disinfectant in public places and hospitals for sanitization. Another company, Pudu Technology, which usually builds robots for the catering industry, deployed its robots to over 40 hospitals to support health workers. Chinese companies are also catering to global demand for autonomous robots; for example, Gaussian Robotics claims its robots have been sold to over 20 countries during the current pandemic.
Conclusion
Indeed, China has left no stone unturned in using artificial intelligence to maximum advantage in its fight against Covid-19, and some might question why other countries were unsuccessful in doing so. One of the main reasons China was so successful was its relentless use of existing AI-enabled mass surveillance systems, with no regard for people’s data privacy. It is quite normal for China to use facial recognition to track citizens, but data privacy is a serious issue in other parts of the world, where such surveillance systems cannot exist at such a large scale. Although artificial intelligence has been used in healthcare in many parts of the world, no other country could implement mass surveillance to restrict the spread of Covid-19 as China did. Thus China went ahead in the race to control Covid-19 by leveraging AI.
https://medium.com/datadriveninvestor/how-china-used-artificial-intelligence-to-combat-covid-19-f5ebc1ef93d
['Awais Bajwa']
2020-10-21 17:52:05.557000+00:00
['Deep Learning', 'Healthcare', 'AI', 'Computer Vision', 'China Startup']
Dodgers past and present share memories and condolences after Tommy Lasorda’s passing
by Rowan Kavner Nearly everyone who met Tommy Lasorda during his 71 seasons in the Dodger organization had a story about the Hall of Fame manager, who passed away Thursday night at the age of 93. Many of those who knew Lasorda well shared their condolences and fond memories of the former Dodger player, scout, coach, manager, executive and special advisor, who was one of baseball’s all-time great ambassadors and unforgettable personalities. (*Note: These will continue being updated as they come in.) Mark Walter, Dodgers owner and chairman: “My family, my partners and I were blessed to have spent a lot of time with Tommy. He was a great ambassador for the team and baseball. A mentor to players and coaches, he always had time for an autograph and a story for his many fans and he was a good friend. He will be dearly missed.” Stan Kasten, Dodgers president and CEO: “In a franchise that has celebrated such great legends of the game, no one who wore the uniform embodied the Dodger spirit as much as Tommy Lasorda. A tireless spokesman for baseball, his dedication to the sport and the team he loved was unmatched. He was a champion who at critical moments seemingly willed his teams to victory. The Dodgers and their fans will miss him terribly. Tommy is quite simply irreplaceable and unforgettable.” Vin Scully, Hall of Fame broadcaster: “There are two things about Tommy I will always remember. The first is his boundless enthusiasm. Tommy would get up in the morning full of beans and maintain that as long as he was with anybody else. The other was his determination. He was a fellow with limited ability and he pushed himself to be a very good Triple-A pitcher. He never quite had that something extra that makes a Major Leaguer, but it wasn’t because he didn’t try. Those are some of the things: his competitive spirit, his determination, and above all, this boundless energy and self-belief. His heart was bigger than his talent and there were no foul lines for his enthusiasm.” Jaime Jarrín, Hall of Fame broadcaster: “Baseball has lost their greatest ambassador, and I have lost a very special friend with the passing of Tommy Lasorda. It was an absolute privilege to be around Tom for more than six decades. I met him in the late 50s when the Dodgers moved from Brooklyn to Los Angeles and he was a scout for them. I will never forget the times when after a defeat he asked me to take a walk around the hotel in cities like New York, Philadelphia or Chicago at 1 or 2 o’clock in the morning. Tom Lasorda was a winner in all facets of the game he loved with such intensity as a player, scout, coach, manager, executive and advisor. My profound condolences to Jo, his widow, and the rest of the family. Tommy Lasorda will live forever in the hearts of Los Angeles, baseball players and the millions of people in many Latin American countries and around the world.” Andrew Friedman, Dodgers president of baseball operations: “From afar, Tommy is synonymous with Dodger baseball. Once I joined the organization, I could see why and just how deep it ran. The passion he had for the Dodgers was infectious and he was incredible to me. I really valued our friendship. I’m overjoyed that he was able to be there to see us win the 2020 World Series. Rest in peace, Tommy.” Rob Manfred, MLB commissioner: “Tommy Lasorda was one of the finest managers our game has ever known. He loved life as a Dodger. 
His career began as a pitcher in 1949 but he is, of course, best known as the manager of two World Series champions and four pennant-winning clubs. His passion, success, charisma and sense of humor turned him into an international celebrity, a stature that he used to grow our sport. Tommy welcomed Dodger players from Mexico, the Dominican Republic, Japan, South Korea and elsewhere — making baseball a stronger, more diverse and better game. He served Major League Baseball as the Global Ambassador for the first two editions of the World Baseball Classic and managed Team USA to gold in the 2000 Summer Olympics in Sydney. Tommy loved family, the United States, the National Pastime and the Dodgers and he made them all proud during a memorable baseball life. I am extremely fortunate to have developed a wonderful friendship with Tommy and will miss him. It feels appropriate that in his final months, he saw his beloved Dodgers win the World Series for the first time since his 1988 team. On behalf of Major League Baseball, I send my deepest sympathy to his wife of 70 years, Jo, and their entire family, the Dodger organization and their generations of loyal fans.” Clayton Kershaw, former MVP, three-time Cy Young Award winner and Dodgers pitcher (2008–present): “He told the same stories and the same jokes over and over again. I still listened and laughed the same each time. His ability to command a room and make people laugh is what I will remember most. When I got drafted and came to Dodger Stadium my mom came with me and we got to meet Tommy that day. Every single time I saw him from that time on he asked me how she was doing and called her by her name, Marianne. Thankful for the time we had together, and most thankful that he could be here to see us win the whole thing. RIP Tommy.” Dusty Baker, Astros manager and former Dodgers outfielder (1976–83): “Tommy loved family, life, people, food and the Big Blue Dodger in the sky. There will never be another Tommy Lasorda.” Don Mattingly, Marlins manager and former Dodgers manager (2011–15): “I will never forget Tommy for his love for the Dodgers and also his kindness and willingness to help me in any way that he could when I came to the Dodgers. In the worst times, he would call and always encourage me. Rest in peace, Tommy.” Fred Claire, former Dodgers general manager (1987–98): “The world of baseball has lost one of its greatest ambassadors and I have lost a dear friend with the passing of Tommy Lasorda. My heart and prayers go out to Jo and Tommy’s family and friends throughout the world. No one represented the spirit of the Dodgers more than Tommy and his contributions to the organization, to countless great causes, and to baseball will last for all time.” Fernando Valenzuela, former Dodgers pitcher, Rookie of the Year and Cy Young Award winner (1980–90): “Tommy was a master at motivating the players. He was always pushing us and making us believe in ourselves. He gave us the confidence we needed. No one wanted to win more than Tommy. He was a baseball man who thoroughly loved the game and loved the Dodgers.” Charlie Hough, former Dodgers pitcher (1970–80): “It’s a sad day for baseball. Words can’t say how important he was for my career and so many other Dodger players. He will be missed by all of baseball.” Peter O’Malley, former Dodgers owner (1979–98): “The extraordinary impact Tommy made on the Dodger organization for more than a half century will never be forgotten. He was the best at supporting and motivating his players.
No one communicated better with everyone in the Dodger organization, our great fans and the media. We will miss Tommy and his huge personality. Our thoughts are with Jo and Laura.” Rick Monday, Dodgers broadcaster and former Dodger player (1977–84): “With great sadness this morning, I received word about Tommy Lasorda’s passing. We are truly blessed if we are fortunate enough to cross paths with someone who will forever leave a positive imprint on the rest of our lives. Tommy was that person in my life when I first met him as a 17-year-old kid trying to find my path. Tommy loved to encourage, motivate and even challenge everyone to accomplish more than what you even thought possible… ‘Don’t ever sell yourself short,’ he would say. RIP my friend, and THANK YOU!” Steve Sax, former Dodgers infielder and Rookie of the Year (1981–88): “Tommy was so instrumental in my life. I’ve been through some tough lows and some very great highs and he was always there for me. Tommy was like a father to me and many others. This is a real sad day.” Joe Amalfitano, former Dodgers third base coach (1983–98): “Tommy was known as a great motivator and ambassador for baseball and no question about it…he was! However, just as great as his ability as a motivator, he truly was a Hall of Fame manager excelling from the first pitch to the last…he was right on with the squeeze play! Simply put, he was the very best! My 14 years working for Tommy were my most memorable in baseball. With my faith, I can only imagine Tommy holding court in heaven…God bless you, Tommy.” Ned Colletti, analyst and former Dodgers general manager (2005–14): “Tommy honored me with his friendship for 40 years and honored baseball with his love for more than 70 years. He brought passion for life and a zest for people every day. I traveled the baseball world with Tommy, and he was someone who was known everywhere — people who didn’t know baseball would come up to him and shake his hand. He loved life — his family, his country, his Dodgers, his heritage, his friends, this city and, of course, baseball. His was a life well-lived.” Reggie Smith, former Dodgers outfielder (1976–81): “My sincere condolences to Jo, Laura and the Lasorda family for your loss. And to your extended Dodger family that share in your loss. We will miss the ‘Skipper,’ our leader and motivator! Rest in peace, ‘Skip.’” Mike Scioscia, former Angels manager and Dodgers catcher (1980–92): “Tommy was a special guy who absolutely loved the game of baseball. He always set a great atmosphere of achieving and was the most competitive person I have ever been around. He meant so much to so many people he touched over the course of his life. He will be greatly missed, but his legacy will continue to inspire people to ‘be the best they can be.’” Kirk Gibson, 1988 MVP and former Dodgers outfielder (1988–90): “You can never realize after leaving your hometown team the bonus you would get by going to Los Angeles and being a Dodger. That was Tommy Lasorda. He really taught me, a gruff, mean SOB, the power of the hug. That was one of the finer things he did for me as a person. We all loved him. He was funny, witty and he loved to compete. He made sure we did too. He held us to that standard. His legacy will never be forgotten by all his teammates. Much love.” Joe Torre, Hall of Fame manager: “I’m sorry to hear about the passing of Tommy Lasorda. Tommy was a friend and a fierce competitor who cared deeply for his players and represented the Dodgers and the game of baseball with great passion.
He will be deeply missed. I send my condolences to his wife, Jo, and the entire Dodger organization.” Jon SooHoo, Dodgers photographer: “Today I mourn with Los Angeles the passing of Tommy Lasorda. He made my life journey far more interesting than just baseball. I treasure the many moments and experiences he allowed me to photograph over the last 35 years. Rest in peace, my friend.” Bobby Valentine, former MLB manager and Dodgers player (1969–72), on MLB Network: “Tommy was like my father. I don’t know if it’s because I was the No. 1 choice or I was Italian or I was a big mouth and he was a big mouth, but whatever it was we really hit it off. I loved him. He loved me. To this day, 52 years later, until last night when I got the call at 3 in the morning, he’s been as close to me as anyone in my life.” Alex Avila, MLB player and Lasorda’s godson, on MLB Network: “He was as genuine a person as you would see on TV managing a game or doing interviews. That was him. He wasn’t faking with anything. It’s been a little shocking today, but the time that I got to spend with him as a kid growing up I’ll never forget.” Steve Yeager, former Dodger catcher (1972–85): “Tommy has been a part of my life since I met him in June of 1967. He made me a better player and a better person. He was like a father to me. He knew all his players and he knew their wives, their kids and their parents and remembered every name. He’s a big part of my heart with all the memories I have of Tommy. It’s a very sad day. All my love to the family, Jo and Laura.” Orel Hershiser, former Cy Young Award winner, World Series MVP and Dodgers pitcher (1983–94): “Tommy is and was the most influential person in my baseball life. God blessed us with Tommy Lasorda. I am so thankful to have known him. He was my manager, father, friend, running mate. He was the ultimate motivator. He brought the best out of me. I will always love him.” Mickey Hatcher, former Dodger player (1979–80, 1987–90): “Tommy was a true Dodger. He was a special person who set the tone for my life, and I thank him for that.” Rick Honeycutt, former Dodger pitcher (1983–87) and pitching coach (2006–19): “I’m deeply saddened to hear about the passing of my friend, mentor, and former skipper. Besides playing for Tommy, he was instrumental in me becoming a pitching coach in the Dodgers organization in 2001. Tommy was a great motivator and always challenged me to be my best. I’m so grateful that I got to be with him the night we won the World Series. He would say every spring that we had to win the World Series before he went to the big Dodger in the sky. My thoughts and prayers go out to Jo, Laura and Emily. R.I.P.” Eric Karros, former Dodger first baseman (1991–02): “Tommy used baseball as a vehicle to impact lives, as much as any man I have known.” Dave Roberts, Dodger manager (2016-present) and former player (2002–04): “I am going to miss you, my friend. Thank you for conversations, laughs and the relentless support. Thank you for teaching me the “Dodger Way” and how fortunate we are to put this uniform on each day. The entire Dodger family will miss you. It’s up to all of us to continue your legacy. Rest in peace, Skipper!”
https://dodgers.mlblogs.com/dodgers-past-and-present-share-memories-and-condolences-after-tommy-lasordas-passing-feeab2be04a1
['Rowan Kavner']
2021-01-09 01:31:38.967000+00:00
['In Memoriam', 'MLB', 'Baseball', 'Tommy Lasorda', 'Dodgers']
What’s the Difference Between Success and Failure on the Keto Diet?
Two women, each 41 years old, tried the Keto Diet. One shed several pounds in a week by eating perfectly. But then, she went out and drank a margarita… And it was downhill from there as she made one simple (yet impactful) mistake… The Second Woman Was Also Strict At First… She lost a few pounds during the first week… And also went out to eat, but nothing derailed her. She avoided that one mistake… And proceeded to drop 3 dress sizes in just 14 days! She felt unstoppable! There were no slip-ups and she was down 5 dress sizes within 4 weeks. So What Made The Difference? What was that ONE mistake? Well, most people on a new diet have no plan. They learn what to eat and not to eat. They try new recipes… But, they DO NOT have a daily plan to carry them through the critical first month. Without a plan, it’s very easy to fall for peer pressure… to be unprepared… and to make bad decisions. So, to be clear, that ONE mistake is having no plan in place. But we have that plan. And our plan has guided over 416,387 people to success. Yes I Am Ready To Take The Challenge — Click Here
https://medium.com/@pratapb379/whats-the-difference-between-success-and-failure-on-the-keto-diet-9ebf826f07f3
['Jessica Slashes']
2021-12-22 11:18:24.704000+00:00
['Ketogenic Diet', 'Healthy Foods', 'Healthy Lifestyle', 'Keto', 'Keto Diet']
FuboTV has a cheaper live TV alternative to Hulu (if you can find it)
For this week’s column, we spent $65 on FuboTV, so you don’t exactly have to. With Hulu + Live TV set to raise prices from $55 per month to $65 on Friday, I wanted to revisit FuboTV as a potentially cheaper alternative. The service offers a similar mix of cable channels, including local news, national sports, and a broad range of entertainment channels, but it has a starting price of $60 per month. That undercuts both Hulu + Live TV and YouTube TV by $5 per month. Fubo doesn’t make the extra savings easy to get, though. The $60-per-month option is buried behind multiple menu layers, and if you’re already a subscriber, there’s no clear way to switch without calling customer service. It’s almost as if Fubo doesn’t want you to sign up for its cheapest plan, which makes you wonder why the company bothers to offer it.
How to find Fubo’s hidden “Standard” plan
With FuboTV’s $65-per-month “Family” plan, you get 250 hours of cloud DVR storage and can stream on up to three devices at the same time. The $60-per-month plan, called “Standard,” offers 30 hours of DVR storage and can stream on two devices at a time. Both plans have the same channel lineup, so if you don’t need a lot of DVR storage and don’t have a family full of cable-channel viewers, there’s no need to spend the extra $5 per month. Here are the steps for finding FuboTV’s $60-per-month “Standard” plan as a new subscriber:
1. Head to the Fubo website, then scroll down and click the “Browse all available plans” button.
2. Select the “Add-ons & More” tab from the next page, then scroll to the very bottom of the page.
3. Click on the text that says “fubo Standard.” (While it doesn’t look selectable, you can indeed click on it to choose that plan.)
[Image: Jared Newman / IDG. FuboTV’s Standard plan is buried at the bottom of a page that takes a few clicks to find, and it doesn’t even look selectable (we added the arrow, of course).]
In fairness, you can also just click this link to head straight to the Standard sign-up page, but Fubo doesn’t make that direct link available anywhere else on its site.
Current subscribers get stuck paying more
Things get thornier if you’re already a FuboTV subscriber. Over the summer, Fubo automatically moved all its Standard plan customers onto the pricier Family package, while also raising prices by $5 per month across the board. The result, effectively, was a $10-per-month price hike on existing subscribers with no easy way to opt out. At the time, Fubo sent mixed signals about customers’ ability to downgrade. While The Verge reported that subscribers would need to call customer service to restore the Standard plan, spokeswoman Jennifer Press told me that customers could manage all changes to their accounts online. Hence our little experiment with signing up for FuboTV. Given the muddled messaging from Fubo and the convoluted sign-up process for new subscribers, I was curious to see whether Fubo’s website does, in fact, offer an easy way to downgrade to its Standard plan. As of this writing, I’ve been unable to find one. While you can manually remove Fubo’s extra DVR storage and additional stream, doing so only yields a savings of two cents per month. (Asked for clarification, Fubo’s Press again insisted that customers can switch between Family and Standard plans online, but has not provided instructions on how to do so.) 
[Image: Jared Newman / IDG. Fubo’s “Change Your Plan” menu doesn’t include the $60-per-month option.]
All of which is to say that if you’ve already signed up for Fubo’s Family plan, and want to save $5 per month with the Standard plan instead, you have two options:
1. Cancel your FuboTV subscription, wait until your billing cycle ends, then sign up again with the Standard plan link.
2. Call FuboTV’s customer service line at 844-551-1005.
Cable-company tactics
Between forcing customers onto pricier plans and hiding cheaper options, FuboTV is starting to embrace the kinds of sales tactics we’ve seen from cable- and satellite-TV service providers. The company even charges a $5-per-month regional sports fee for customers in Houston and Pittsburgh, and users in Houston have told me that Fubo doesn’t display that fee to them in its advertised pricing. Why does FuboTV do this? My guess is that its Family plan is an easy way to make each subscriber more profitable, especially if they don’t use all the features they’re paying for. And while FuboTV could just discontinue its Standard plan altogether, you’ll notice that the company advertises its Family plan as a “15% savings” on its homepage. That marketing sleight of hand wouldn’t hold up if $65 per month were simply the base price. It’s a shame, because FuboTV is a decent service once you get past the dodgy sales tactics. Its app is well designed, its grid-based TV guide is easy to use, and unlike Hulu, it doesn’t charge extra for the ability to skip ads in your DVR. On Apple TV, you can watch multiple channels at the same time in split-screen mode, which no other streaming TV bundle offers, and it’s been the only live TV streaming service to offer certain events in 4K HDR. It doesn’t offer channels from AT&T’s WarnerMedia—that means no TNT, TBS, or CNN—but its lineup is otherwise comprehensive. (My biggest knock on the service: There’s no visual preview when you’re fast-forwarding except on Apple TV streaming boxes.) I’m not suggesting skipping FuboTV because of its sales tactics—you shouldn’t avoid saving money on TV service purely out of spite—but you should be aware that as cord-cutting becomes more of a mainstream trend, it’s going to attract some of the same practices that gave cable TV its unsavory reputation. You’ll need to keep your guard up even after cutting cable.
https://medium.com/@cheryl59284205/fubotv-has-a-cheaper-live-tv-alternative-to-hulu-if-you-can-find-it-dcd363265e8e
[]
2020-12-20 08:19:12.454000+00:00
['Cutting', 'Mobile Accessories', 'Internet', 'Home Theater']
Biden’s First 100 Days: The End of Fossil Fuel Subsidies
By Anna St. Clair After considerable pressure from environmental activists in the Sunrise Movement and other organizations, Joe Biden said during his campaign that he would eliminate fossil fuel subsidies. Now that he has been elected, this must be a central part of his presidency if he is to be taken seriously as a climate president. Even if Democrats do not regain control of the Senate, this should be a legislative priority that can guide other executive action or legislation. No matter what, Biden’s administration can create a regulatory environment that is unfavorable to bank investment in fossil fuels, and it should support removing financial aid for the fossil fuel industry in any future COVID-19 stimulus bills. What are fossil fuel subsidies and why are we spending so much money on them? Fossil fuel subsidies are government policies that lower the costs of fossil fuel production, raise the sale price of oil, gas and coal for producers, or lower the cost that consumers pay. Some estimates put U.S. fossil fuel subsidies at $20 billion per year, but the real number could be much higher. In contrast, over the last decade, the renewable energy sector received, on average, $7.5 billion per year. The Trump administration has done considerable damage by phasing down subsidies for wind and solar energy, with the solar industry’s investment tax credit set to drop from 30% to 10% by 2021 for commercial buildings and eliminated entirely for residences. The wind industry’s investment tax credit is also set to phase out completely. Our priorities are clearly not in order. But how did we get to a point where the government is subsidizing so much oil, coal and gas production despite the overwhelming evidence that fossil fuels are making the planet inhospitable? How we got here In many cases, fossil fuel subsidies have been granted as temporary measures to boost American energy production during times of war, oil shortages or economic downturns. However, they stuck around much longer than intended. Coal companies can still take advantage of a measure passed in 1950, originally enacted to help fund the Korean War, that allows them to avoid tax increases. The percentage depletion allowance, which today makes crude oil and gas one of the most tax-advantaged financial investments, was first introduced in 1913 and allowed oil companies to deduct 5% of oil production income from their taxes. In 1926, Congress raised it to more than a quarter of income (27.5%). Senator Connally, who sponsored the bill, later admitted, “We could have taken a 5 or 10 percent figure, but we grabbed 27.5 percent because we were not only hogs but the odd figure made it appear as though it was scientifically arrived at.” Since the depletion allowance is based on what is produced, not the cost of a drill site, in many cases total tax deductions can exceed initial costs. Today, the allowance has been removed for large producers and reduced to 15% for small oil producers. In contrast, the production tax credit, which was a per-kilowatt-hour subsidy for generating wind power, was phased out at the end of 2019. Solar subsidies for consumers come in the form of a tax rebate, putting solar panels out of reach for those who cannot afford the upfront cost of installation or who do not file income taxes. This is to say nothing of all the informal subsidies that the fossil fuel industry receives in the form of interstate highways, car-centric urban planning, and defunded rail and public transportation. 
When it comes to transit, prioritizing car infrastructure effectively eliminates choice for many people. When the bus comes once an hour, if at all, a car becomes a necessity for getting anywhere on time. If someone were presented with a free, accessible, and frequent bus in a city where cars are only allowed on certain streets, versus the cost of owning and maintaining a car, which do you think most people would “choose”? Subsidies are rigging the game for failing fossil fuel companies While many politicians want people to believe that fossil fuels are good for jobs and the economy, the fossil fuel industry is facing a host of challenges. New solar and wind power is now cheaper than new coal, nuclear and natural gas projects. There is a persistent myth that fossil fuels remain dominant because of the market. But markets do not exist in a vacuum; all markets are guided and regulated by government and institutions, and both play a vital role in determining which businesses and products succeed and which do not. The government is paying the fossil fuel industry in the form of subsidies, and the industry in turn is ruthlessly lobbying politicians on both sides of the aisle to keep these subsidies and to prioritize fossil fuels ahead of any alternatives. True invisible-handers believe the market will somehow find a solution for climate change in spite of all this, but the “free market” has already been co-opted by polluters. The market incentivizes them to continue polluting. Without subsidies, oil and gas would be even less attractive on the market compared to renewable energy options. Oil and gas companies today are struggling, not only because the future of their product is in jeopardy due to climate change, but also because the COVID-19 pandemic has caused oil prices to hit record lows. And they are not creating jobs. According to a report by the accounting firm Deloitte, over 100,000 oil and gas jobs have been lost due to the COVID-19 pandemic and most are not coming back anytime soon. Can someone inform the Republicans that the government is spending tax dollars on an inefficient industry that doesn’t create jobs and is overall a drain on society? If our economy and markets only function when we’re producing and consuming toxic substances, then there is something fundamentally wrong with the system. The solution is to change our market systems and incentives, not become slaves to the “whims” of the market, which we know is controlled through decisions made by governments to favor corporate interests. An energy transition is necessary, possible and popular Voters want renewables, so elections aren’t the obstacle to our energy transition. A Fox News poll found that 68% of voters favor increasing government spending on green and renewable energy. Our energy transition is failing because our governments refuse to take action and are being held hostage by fossil fuel lobbyists. Republicans and moderate Democrats ask, “Where will the money for the Green New Deal come from?” We can start by generating up to $20 billion in tax revenue through the elimination of fossil fuel subsidies. Right now, the fossil fuel industry is on corporate welfare life support. If we are to avert climate catastrophe, the Biden presidency must end these subsidies immediately and instead provide meaningful financial support for renewable energy. The sky was an apocalyptic color over large swaths of the country last summer, on top of a record-breaking Atlantic hurricane season. There is no time to wait. 
We at the Sunrise Movement demand Biden do what is needed and end fossil fuel subsidies as soon as possible.
https://medium.com/sunrise-movement-nyc/bidens-first-100-days-the-end-of-fossil-fuel-subsidies-ba605aff9404
['Sunrise Nyc']
2020-12-11 18:11:26.472000+00:00
['Fossil Fuels', 'Climate Change', 'Biden', 'Climate', 'Sunrise Movement']
How to Be an Effective Fiction Writer
How to Be an Effective Fiction Writer A helpful guide from start to finish Photo by Patrick Fore on Unsplash Being an effective writer doesn’t mean your writing needs to be perfect. It doesn’t need to resonate with every reader in the world. What one person views as excellent work can be frowned upon by others. With that being said, there are a few tools every writer should keep in their memory vault. Start with an idea Imagine the blank sheet of paper, or the blank computer screen, as a garden. You dig a hole and put the seed inside, then water it and watch it grow, creating a root, a stem and then beautiful flowers. You did this with your idea. A thought comes into your mind and you need to plant it like a seed and watch it grow right before your eyes. The idea can come from anywhere; it could be something as minute as a neighbor mowing his lawn or something as monumental as seeing a car crash on your drive to the store. Whatever that idea is, you need to plant it on the blank sheet of paper and allow it to flourish. Let the story write itself Allow your story to create itself. You want the words to flow naturally from your fingertips. Now this doesn’t mean that you are to write with no clear thought as to how the story goes, but you need to let the idea grow. Avoid trying to keep it going along the same original plot you imagined in your head. Just like with a garden, the flowers will grow at their own pace and in their own direction. Your story craves to be free; allow it that chance to thrive. Keep your outline if you made one; it will come in handy to keep you on track. But allow the idea of change to encompass your writing. The focus and unity will come when editing. Get your first draft done. Write what you know, but don’t be afraid of what you don’t know The best advice I was ever given by one of my professors was to write what I know, but not to be afraid to write what I don’t. This was brilliant. I was able to go beyond the same old stories I had been feverishly writing for years and come up with new story ideas. One thing you need to remember is there was a time when you didn’t know anything about what you know now. You develop knowledge over time; each day is a new lesson waiting for you. A little research goes a long way when you want to write something you don’t know. Adding this into your story can give it the uplift it may need to not be the same old thing. Reading is a fundamental way to research for this type of writing. Typically I write stories that involve witches and worlds that don’t exist. I have been able to write different themes based on reading books that I normally wouldn’t have touched before. Write what I know, but don’t be afraid to write what you don’t So pick up a book and start reading; you never know what you will learn. And when you learn, you will be able to add authority to your writing. Don’t edit while writing your first draft I have a nasty habit of erasing a sentence just so I can fix that one mistake. This only makes for messier writing. You get distracted from the thought you had and put the story in reverse. Backtracking takes up precious moments for you to get your idea onto the paper. As hard as it can be — believe me, I know firsthand — the less you stop to fix things, the quicker you will get your first draft done. Save the editing for when you are done writing. Put the draft away and start a new project When you are done with your first draft you will be eager to start the editing right away. This will make you want to rip your hair out. 
Trust me, I have been there — done that. What happens is your brain will be on overload and you won’t be able to edit proficiently. And editing is already a tedious job to begin with. The best advice I was ever given was to put the draft away and start a new project. Something different. Give yourself as much distance from the draft as you can. If you drafted a piece of fiction, try a piece of non-fiction, or try a different genre for your next project. Try watching a movie, taking a walk, cooking, playing a game. Anything that distracts you from thinking about it. This will help with the next phase of writing. Return for the revise and edit Now that you have fresh eyes from stepping away, you can edit and revise your draft. Don’t be afraid to edit and revise a few times. Some of the first things to look for are: Grammar — do you have the right punctuation: periods, commas, apostrophes, etc.? This step is important to the way you want your piece to be read. When in doubt, there are many free tools to help you find mistakes you may have missed. Grammarly is a good tool; I prefer Ginger Edit. Both of these offer free writing checks to help you. Focus — you want to make sure your writing is focused. Read your draft out loud, or have someone else read it to you. I use a feature on my computer that reads it to me. I will sit there with my eyes closed and listen to how the story flows. Unity — as with focus, you want to ensure each part works with the previous one. If everything reads smoothly until you get to paragraph four and then suddenly you start wondering what in the world is wrong here, then you must rework that part. If you find it confusing, then your reader will too. The saying ‘kill your darlings’ comes into play with this part. It could be a great piece, wonderfully written, but if it does nothing for your story, you need to get it out of there. Delete fluff words — fluff words are words that are not needed to convey the message. Some of these words are that, just, very, absolutely, actually and literally. This is not a complete list, but they are the most commonly used in writing. Realistic — you want to ensure your characters and dialogue are as true to human nature as you can. (Unless you are dealing with other-world plot lines.) With dialogue, you can study the way people talk and the words they use. For example: “You are going to the store.” The problem lies in the way most people would say this phrase in real life. A vast majority of us speak with contractions. Realistic dialogue would go like this: “You’re going to the store,” because that is how we speak. Your characters need to be realistic as well. This includes matching their dialogue to their personality, creating a consistent backstory that relates to real life, not deviating from that backstory, and not having a character know something they couldn’t possibly know. Recap To be an effective writer, you must show authority in your writing. Your reader needs to know you know what you are talking about. The words you write are used to do this. Follow these simple steps to help you become more effective with your writing.
https://medium.com/the-writers-bookcase/how-to-be-an-effective-fiction-writer-4cfde2f804ba
['Tammi Brownlee']
2019-11-21 18:38:11.029000+00:00
['Creativity', 'Creative Writing', 'Life', 'Writing', 'Writing Tips']
Python Data Science Getting Started Tutorial: NLTK
Photo by Fotis Fotopoulos on Unsplash I. Analyze words and sentences using NLTK Welcome to the Natural Language Processing series of tutorials, using Python's natural language toolkit, the NLTK module. The NLTK module is a huge toolkit designed to help you with the entire Natural Language Processing (NLP) methodology. NLTK will provide you with everything from splitting paragraphs into sentences, splitting up words, and identifying the part of speech of those words, to highlighting themes and even helping your machine understand what the text is about. In this series, we will address the area of opinion mining, or sentiment analysis. As we learn how to use NLTK for sentiment analysis, we will learn the following:

Tokenizing — splitting the text body into sentences and words.
Part-of-speech tagging
Machine learning and the naive Bayes classifier
How to use Scikit-learn (sklearn) together with NLTK
Training classifiers with data sets
Real-time streaming sentiment analysis with Twitter.
…and more.

To get started, you need the NLTK module, as well as Python. If you don't already have Python, go to python.org and download the latest version of Python (if you are on Windows). If you are on Mac or Linux, you should be able to run apt-get install python3. Next, you need NLTK 3. The easiest way to install the NLTK module is with pip. For all users, this is done by opening cmd.exe, bash, or whatever shell you use and typing the following command:

pip install nltk

Next, we need to install some of the components for NLTK. Open Python in any of your usual ways and type:

import nltk
nltk.download()

Select "all" for all packages, and click Download. This will give you all of the tokenizers, chunkers, other algorithms, and all of the corpora. If space is an issue, you can choose to manually download what you need. The NLTK module itself will take up approximately 7 MB, and the entire nltk_data directory, including your chunkers, parsers and corpora, will take up considerably more. If you are running a headless version on a VPS, you can install everything by running Python and doing the following:

import nltk
nltk.download()

then entering d (for download) at the prompt, followed by all (to download everything). This will download everything for you. Now that you have all the things you need, let's define some simple terms:

Corpus — a body of text, singular. Corpora is its plural. Example: a collection of medical journals.
Lexicon — vocabulary and its meanings. For example: an English dictionary. Consider, though, that each field has a different lexicon. For financial investors, the first meaning of the word "bull" is someone who is confident about the market, whereas in the common English lexicon, the first meaning of the word is an animal. Financial investors, doctors, children, mechanics, etc. all have special lexicons.
Token — each "entity" that is part of whatever was split up based on rules. For example, when a sentence is "tokenized" into words, each word is a token. If you split a paragraph into sentences, each sentence can also be a token.

These are the words you will hear most frequently when entering the natural language processing (NLP) field, but there are many more that we will cover in time. With that, let's show an example of how to tokenize something with the NLTK module:

from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."

print(sent_tokenize(EXAMPLE_TEXT))

At first, you might think that tokenizing by word or sentence is a rather trivial matter. For many sentences, it can be. The first step might be to execute a simple .split('. '), splitting on a period followed by a space. Then maybe you would introduce some regular expressions, splitting on periods, spaces, and then capital letters. The problem is that things like Mr. Smith, along with many other cases, will cause you trouble. Splitting by word is also a challenge, especially when considering contractions, such as we're. NLTK saves you a lot of time with this seemingly simple but very complicated operation. The above code will output the sentences, split into a list of sentences, which you can traverse with a for loop:

['Hello Mr. Smith, how are you doing today?', 'The weather is great, and Python is awesome.', 'The sky is pinkish-blue.', "You shouldn't eat cardboard."]

So there we have created tokens that are sentences. Let's tokenize by word this time:

print(word_tokenize(EXAMPLE_TEXT))

['Hello', 'Mr.', 'Smith', ',', 'how', 'are', 'you', 'doing', 'today', '?', 'The', 'weather', 'is', 'great', ',', 'and', 'Python', 'is', 'awesome', '.', 'The', 'sky', 'is', 'pinkish-blue', '.', 'You', 'should', "n't", 'eat', 'cardboard', '.']

There are a few things to note here. First, notice that punctuation is treated as a separate token. Also, note that the word shouldn't is separated into should and n't. The last thing to note is that pinkish-blue is indeed treated as "one word," which it is. Cool! Now, looking at these tokenized words, we have to start thinking about what our next step might be. We start to think about how to derive meaning by looking at these words. We can clearly think of ways to put value on many words, but we also see a few words that are basically worthless. These are a form of "stop words," which we can also handle. That is what we will discuss in the next tutorial.

II. NLTK and stop words

The idea of natural language processing is to perform some form of analysis or processing, where the machine can at least understand, to some extent, the meaning, representation or suggestion of the text. This is obviously a huge challenge, but there are steps anyone can follow. The main idea, however, is that computers don't understand words directly. Shockingly enough, humans don't really either. In humans, memory is broken down into electrical signals in the brain, in the form of neural groups that fire in patterns. There is still much that is unknown about the brain, but the more we break the human brain down, the more basic elements we find. Well, it turns out that computers store information in a very similar way! If we want to mimic how humans read and understand text, we need a method that is as close to that as possible. In general, computers use numbers to represent everything, but we often see the direct use of binary signals in programming (True or False, which convert directly to 1 or 0, coming directly from the presence (True, 1) or absence (False, 0) of an electrical signal). To do this, we need a way to convert words to numeric values or signal patterns. Converting data into something a computer can understand is called "preprocessing." One of the main forms of preprocessing is to filter out useless data.
In natural language processing, useless words (data) are called stop words. We can immediately recognize that some words carry more meaning than others, and that some words are useless filler. For example, we use them in English to fill out sentences so they don't sound strange. One of the most common, unofficial examples of a useless word is umm. People fill with umm more than with most other words. The word is meaningless, unless we are looking for signs that someone lacks confidence or is confused. We all do this; there are, uhh, plenty of times you can hear me say umm or uhh in the video. For most analyses, these words are useless. We don't want these words taking up space in our database or taking up valuable processing time. Therefore, we call them "useless words," or "stop words," and we want to do something about them. Another version of the term "stop words" is more literal: words we stop on. For example, you may want to stop immediately if you find words that are often used sarcastically; sarcastic words or phrases will vary by lexicon and corpus. For the time being, we will treat stop words as words that do not contain any meaning, and we will remove them. You can do this easily by storing a list of the words you consider stop words. NLTK ships with a list of words that it considers stop words to get you started; you can access it through the NLTK corpus:

from nltk.corpus import stopwords

Here is the list:

>>> set(stopwords.words('english'))
{'ourselves', 'hers', 'between', 'yourself', 'but', 'again', 'there', 'about', 'once', 'during', 'out', 'very', 'having', 'with', 'they', 'own', 'an', 'be', 'some', 'for', 'do', 'its', 'yours', 'such', 'into', 'of', 'most', 'itself', 'other', 'off', 'is', 's', 'am', 'or', 'who', 'as', 'from', 'him', 'each', 'the', 'themselves', 'until', 'below', 'are', 'we', 'these', 'your', 'his', 'through', 'don', 'nor', 'me', 'were', 'her', 'more', 'himself', 'this', 'down', 'should', 'our', 'their', 'while', 'above', 'both', 'up', 'to', 'ours', 'had', 'she', 'all', 'no', 'when', 'at', 'any', 'before', 'them', 'same', 'and', 'been', 'have', 'in', 'will', 'on', 'does', 'yourselves', 'then', 'that', 'because', 'what', 'over', 'why', 'so', 'can', 'did', 'not', 'now', 'under', 'he', 'you', 'herself', 'has', 'just', 'where', 'too', 'only', 'myself', 'which', 'those', 'i', 'after', 'few', 'whom', 't', 'being', 'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it', 'how', 'further', 'was', 'here', 'than'}

Here is how to use the stop_words set to remove the stop words from a text:

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."

stop_words = set(stopwords.words('english'))

word_tokens = word_tokenize(example_sent)

# One-liner version of the filtering:
filtered_sentence = [w for w in word_tokens if not w in stop_words]

# The same filtering, written out as a loop:
filtered_sentence = []
for w in word_tokens:
    if w not in stop_words:
        filtered_sentence.append(w)

print(word_tokens)
print(filtered_sentence)

Our output is:

['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.']
['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']

Our database thanks us. Another form of data preprocessing is "stemming," which is what we'll discuss next.
III. NLTK stem extraction

The idea of stemming is a kind of normalization. Aside from tense, many variations of a word carry the same meaning. The reason we extract stems is to shorten lookups and to normalize sentences. Consider:

I was taking a ride in the car.
I was riding in the car.

These two sentences mean the same thing. "in the car" is the same. "I" is the same. In both cases, the -ing form clearly expresses the same past activity, so in trying to figure out the meaning of this past activity, is it really necessary to distinguish between riding and taking a ride? No, it is not. This is just one small example, but imagine every word in English, with every possible tense and affix that can be placed on a word. Having a separate dictionary entry for each version would be hugely redundant and inefficient, especially since, once we convert to numbers, the "value" would be the same. One of the most popular stemming algorithms is the Porter stemmer, which dates back to 1979. First, we grab and define our stemmer:

from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

ps = PorterStemmer()

Now let's choose some words with a similar stem, for example:

example_words = ["python", "pythoner", "pythoning", "pythoned", "pythonly"]

Next, we can easily extract the stems like so:

for w in example_words:
    print(ps.stem(w))

Our output:

python
python
python
python
pythonli

Now let's try extracting stems from a typical sentence instead of individual words:

new_text = "It is important to by very pythonly while you are pythoning with python. All pythoners have pythoned poorly at least once."

words = word_tokenize(new_text)
for w in words:
    print(ps.stem(w))

Our result now is:

it
is
import
to
by
veri
pythonli
while
you
are
python
with
python
.
all
python
have
python
poorli
at
least
onc
.

Next, we'll discuss some of the more advanced content of the NLTK module: part-of-speech tagging, where we can use the NLTK module to identify the part of speech of each word in a sentence.

IV. NLTK part-of-speech tagging

One of the more powerful aspects of the NLTK module is the part-of-speech tagging it can do for you. This means labeling words in a sentence as nouns, adjectives, verbs, etc. Even more impressively, it can also tag by tense, and more. Here is a list of the tags, their meanings, and some examples:

POS tag list:
CC coordinating conjunction
CD cardinal digit
DT determiner
EX existential there (like: "there is" … think of it like "there exists")
FW foreign word
IN preposition/subordinating conjunction
JJ adjective 'big'
JJR adjective, comparative 'bigger'
JJS adjective, superlative 'biggest'
LS list marker 1)
MD modal could, will
NN noun, singular 'desk'
NNS noun plural 'desks'
NNP proper noun, singular 'Harrison'
NNPS proper noun, plural 'Americans'
PDT predeterminer 'all the kids'
POS possessive ending parent's
PRP personal pronoun I, he, she
PRP$ possessive pronoun my, his, hers
RB adverb very, silently
RBR adverb, comparative better
RBS adverb, superlative best
RP particle give up
TO to go 'to' the store.
UH interjection errrrrrrrm
VB verb, base form take
VBD verb, past tense took
VBG verb, gerund/present participle taking
VBN verb, past participle taken
VBP verb, sing. present, non-3rd person take
VBZ verb, 3rd person sing. present takes
WDT wh-determiner which
WP wh-pronoun who, what
WP$ possessive wh-pronoun whose
WRB wh-adverb where, when

How do we use this?
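Before wiring the tagger into a larger script, it can help to see it on its own. Here is a minimal sketch of tagging a single sentence (the sentence itself is just an arbitrary example, not from this tutorial's data):

import nltk
from nltk.tokenize import word_tokenize

# Tag each token of one sentence with its part of speech.
words = word_tokenize("The quick brown fox jumps over the lazy dog.")
print(nltk.pos_tag(words))
# Prints (word, tag) tuples such as ('The', 'DT'), ('quick', 'JJ'), ('over', 'IN'), ...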
To put these tags to work on real text, we have to introduce a new sentence tokenizer, called PunktSentenceTokenizer. This tokenizer is capable of unsupervised machine learning, so you can actually train it on any body of text that you use. First, let's get the imports we plan to use:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

Now let's create the training and testing data:

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

One is the 2005 State of the Union address of President George W. Bush, and the other is his 2006 address. Next, we can train the Punkt tokenizer like so:

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)

After that, we can actually tokenize with it:

tokenized = custom_sent_tokenizer.tokenize(sample_text)

Now we can finish this part-of-speech tagging script by creating a function that will traverse the sentences and tag the part of speech of each word, like so:

def process_content():
    try:
        for i in tokenized[:5]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            print(tagged)
    except Exception as e:
        print(str(e))

process_content()

The output should be a list of tuples, where the first element in each tuple is the word and the second element is the part-of-speech tag. It should look like:

[('PRESIDENT', 'NNP'), ('GEORGE', 'NNP'), ('W.', 'NNP'), ('BUSH', 'NNP'), ("'S", 'POS'), ('ADDRESS', 'NNP'), ('BEFORE', 'NNP'), ('A', 'NNP'), ('JOINT', 'NNP'), ('SESSION', 'NNP'), ('OF', 'NNP'), ('THE', 'NNP'), ('CONGRESS', 'NNP'), ('ON', 'NNP'), ('THE', 'NNP'), ('STATE', 'NNP'), ('OF', 'NNP'), ('THE', 'NNP'), ('UNION', 'NNP'), ('January', 'NNP'), ('31', 'CD'), (',', ','), ('2006', 'CD'), ('THE', 'DT'), ('PRESIDENT', 'NNP'), (':', ':'), ('Thank', 'NNP'), ('you', 'PRP'), ('all', 'DT'), ('.', '.')]
[('Mr.', 'NNP'), ('Speaker', 'NNP'), (',', ','), ('Vice', 'NNP'), ('President', 'NNP'), ('Cheney', 'NNP'), (',', ','), ('members', 'NNS'), ('of', 'IN'), ('Congress', 'NNP'), (',', ','), ('members', 'NNS'), ('of', 'IN'), ('the', 'DT'), ('Supreme', 'NNP'), ('Court', 'NNP'), ('and', 'CC'), ('diplomatic', 'JJ'), ('corps', 'NNS'), (',', ','), ('distinguished', 'VBD'), ('guests', 'NNS'), (',', ','), ('and', 'CC'), ('fellow', 'JJ'), ('citizens', 'NNS'), (':', ':'), ('Today', 'NN'), ('our', 'PRP$'), ('nation', 'NN'), ('lost', 'VBD'), ('a', 'DT'), ('beloved', 'VBN'), (',', ','), ('graceful', 'JJ'), (',', ','), ('courageous', 'JJ'), ('woman', 'NN'), ('who', 'WP'), ('called', 'VBN'), ('America', 'NNP'), ('to', 'TO'), ('its', 'PRP$'), ('founding', 'NN'), ('ideals', 'NNS'), ('and', 'CC'), ('carried', 'VBD'), ('on', 'IN'), ('a', 'DT'), ('noble', 'JJ'), ('dream', 'NN'), ('.', '.')]
[('Tonight', 'NNP'), ('we', 'PRP'), ('are', 'VBP'), ('comforted', 'VBN'), ('by', 'IN'), ('the', 'DT'), ('hope', 'NN'), ('of', 'IN'), ('a', 'DT'), ('glad', 'NN'), ('reunion', 'NN'), ('with', 'IN'), ('the', 'DT'), ('husband', 'NN'), ('who', 'WP'), ('was', 'VBD'), ('taken', 'VBN'), ('so', 'RB'), ('long', 'RB'), ('ago', 'RB'), (',', ','), ('and', 'CC'), ('we', 'PRP'), ('are', 'VBP'), ('grateful', 'JJ'), ('for', 'IN'), ('the', 'DT'), ('good', 'NN'), ('life', 'NN'), ('of', 'IN'), ('Coretta', 'NNP'), ('Scott', 'NNP'), ('King', 'NNP'), ('.', '.')]
[('(', 'NN'), ('Applause', 'NNP'), ('.', '.'), (')', ':')]
[('President', 'NNP'), ('George', 'NNP'), ('W.', 'NNP'), ('Bush', 'NNP'), ('reacts', 'VBZ'), ('to', 'TO'), ('applause', 'VB'), ('during', 'IN'), ('his', 'PRP$'), ('State', 'NNP'), ('of', 'IN'), ('the', 'DT'), ('Union', 'NNP'), ('Address', 'NNP'), ('at', 'IN'), ('the', 'DT'), ('Capitol', 'NNP'), (',', ','), ('Tuesday', 'NNP'), (',', ','), ('Jan', 'NNP'), ('.', '.')]

At this point, we can start to derive meaning, but there is still some work to do. The next topic we will discuss is chunking, where we take the parts of speech and group the words into meaningful chunks.

V. NLTK chunking

Now that we know the parts of speech, we can do what is called chunking: grouping words into meaningful chunks. One of the main goals of chunking is to group so-called "noun phrases." These are phrases of one or more words that contain a noun, possibly some descriptive words, possibly a verb, and possibly something like an adverb. The idea is to group nouns together with the words that relate to them. In order to chunk, we combine the part-of-speech tags with regular expressions. From regular expressions, we mainly want to take advantage of the following:

+ = match 1 or more
? = match 0 or 1 repetitions
* = match 0 or MORE repetitions
. = any character except a new line

If you need help with regular expressions, see the tutorial linked above. The last thing to note is that the part-of-speech tags are denoted with < and >, and we can also place regular expressions within the tags themselves to express things like "all nouns" (<N.*>).

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            chunked.draw()
    except Exception as e:
        print(str(e))

process_content()

The main line here is:

chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""

Taking this line apart:

<RB.?>* : zero or more adverbs of any tense, followed by:
<VB.?>* : zero or more verbs of any tense, followed by:
<NNP>+ : one or more proper nouns, followed by:
<NN>? : zero or one singular noun.

Try playing with combinations to group various instances until you feel comfortable. Not covered in the video, but there is also the reasonable task of actually accessing specific chunks. This is rarely mentioned, but, depending on what you are doing, it can be an important step. Suppose you print out the chunks; you will see output like the following:

(S
  (Chunk PRESIDENT/NNP GEORGE/NNP W./NNP BUSH/NNP)
  'S/POS
  (Chunk ADDRESS/NNP BEFORE/NNP A/NNP JOINT/NNP SESSION/NNP OF/NNP THE/NNP CONGRESS/NNP ON/NNP THE/NNP STATE/NNP OF/NNP THE/NNP UNION/NNP January/NNP)
  31/CD
  ,/,
  2006/CD
  THE/DT
  (Chunk PRESIDENT/NNP)
  :/:
  (Chunk Thank/NNP)
  you/PRP
  all/DT
  ./.)

Cool, this helps us visualize, but what if we want to access this data through our program? What happens here is that our "chunked" variable is an NLTK tree. Each "chunk" and "non-chunk" is a "subtree" of the tree. We can refer to them with something like chunked.subtrees. We can then iterate through the subtrees like so:

for subtree in chunked.subtrees():
    print(subtree)

Next, we might care only about getting these chunks, ignoring the rest.
We can use the filter parameter in the chunked.subtrees() call:

for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
    print(subtree)

Now we are filtering to show only the subtrees with the label "Chunk". Keep in mind that "Chunk" is not some special NLTK attribute… it is literally just the label we gave it here: chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}""". Had we written something like chunkGram = r"""Pythons: {<RB.?>*<VB.?>*<NNP>+<NN>?}""" instead, we would filter by the label "Pythons." The result should look like this:

(Chunk PRESIDENT/NNP GEORGE/NNP W./NNP BUSH/NNP)
(Chunk ADDRESS/NNP BEFORE/NNP A/NNP JOINT/NNP SESSION/NNP OF/NNP THE/NNP CONGRESS/NNP ON/NNP THE/NNP STATE/NNP OF/NNP THE/NNP UNION/NNP January/NNP)
(Chunk PRESIDENT/NNP)
(Chunk Thank/NNP)

The complete code is:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            print(chunked)
            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                print(subtree)
            chunked.draw()
    except Exception as e:
        print(str(e))

process_content()

VI. NLTK chinking

You may find that, after a lot of chunking, you have some words in your chunks that you don't want, but you don't see how to get rid of them by chunking. You may find that chinking is your solution. Chinking is a lot like chunking; it is basically a way to remove a chunk from a chunk. The chunk you remove from your chunk is your chink. The code is very similar; you just denote the chink with }{ after the chunk, instead of the chunk's {}.

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized[5:]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            chunkGram = r"""Chunk: {<.*>+}
                                    }<VB.?|IN|DT|TO>+{"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            chunked.draw()
    except Exception as e:
        print(str(e))

process_content()

Now, the main difference here is:

}<VB.?|IN|DT|TO>+{

This means that we are removing from the chunk one or more verbs, prepositions, determiners, or the word "to". Now that we have learned how to perform custom chunking and chinking, let's discuss the form of chunking that comes built into NLTK: named entity recognition.

VII. NLTK named entity recognition

One of the most important forms of chunking in natural language processing is called "named entity recognition." The idea is to have the machine immediately pull out "entities" such as people, places, things, locations, monetary amounts, and more. This can be a challenge, but NLTK has it built in for us.
NLTK's named entity recognition has two main options: recognizing all named entities, or recognizing named entities as their respective types, such as person, location, organization, etc. Here is an example:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized[5:]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            namedEnt = nltk.ne_chunk(tagged, binary=True)
            namedEnt.draw()
    except Exception as e:
        print(str(e))

process_content()

You can see something right away: with binary=False, it picks up the same entities, but it breaks a term like White House into White and House as if they were different, whereas with the option binary=True, named entity recognition says that White House is part of the same named entity, which is correct. Depending on your goals, you may use the binary option however you see fit. If your binary is False, here are the named entity types you can get:

NE Type and Examples
ORGANIZATION - Georgia-Pacific Corp., WHO
PERSON - Eddy Bonte, President Obama
LOCATION - Murray River, Mount Everest
DATE - June, 2008-06-29
TIME - two fifty a m, 1:30 p.m.
MONEY - 175 million Canadian Dollars, GBP 10.40
PERCENT - twenty pct, 18.75%
FACILITY - Washington Monument, Stonehenge
GPE - South East Asia, Midlothian

Either way, you may find that you need to do a bit more work to get it exactly right, but this feature is quite powerful. In the next tutorial, we will discuss something similar to stemming, called "lemmatizing."

VIII. NLTK lemmatizing

An operation very similar to stemming is called lemmatizing. The main difference between the two is that, as you have seen, stemming often creates words that don't exist, whereas lemmas are actual words. So your stem, that is, the word you end up with, is not necessarily something you can look up in a dictionary, but you can look up a lemma. Sometimes you will end up with a very similar word, but sometimes you will get a completely different word. Let's look at some examples:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("cats"))
print(lemmatizer.lemmatize("cacti"))
print(lemmatizer.lemmatize("geese"))
print(lemmatizer.lemmatize("rocks"))
print(lemmatizer.lemmatize("python"))
print(lemmatizer.lemmatize("better", pos="a"))
print(lemmatizer.lemmatize("best", pos="a"))
print(lemmatizer.lemmatize("run"))
print(lemmatizer.lemmatize("run", 'v'))

Here we have some examples of the lemmas for words we use. The only major thing to note is that lemmatize accepts a part-of-speech parameter, pos. If not provided, the default is "noun." This means it will try to find the closest noun, which can cause trouble for you. Keep this in mind if you use lemmatizing! In the next tutorial, we'll dive into the NLTK corpus that came with the module, looking at all the great documentation waiting for us there.

IX. NLTK Corpus

In this part of the tutorial, I want to take a moment to dive into the corpora we downloaded at the start! The NLTK corpus is a collection of natural language data that is definitely worth a look.
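As a quick taste of what is in there, you can list a few of the bundled resources directly. A minimal sketch (the exact file lists depend on what you downloaded):

from nltk.corpus import gutenberg, movie_reviews

# A few of the Project Gutenberg texts bundled with NLTK:
print(gutenberg.fileids()[:3])
# e.g. ['austen-emma.txt', 'austen-persuasion.txt', 'austen-sense.txt']

# The movie review corpus used later in this tutorial is labeled by category:
print(movie_reviews.categories())
# ['neg', 'pos']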
Almost all of the files in the NLTK corpus follow the same rules for access through the NLTK module, but there is nothing magical about them. Most of these files are plain text files, some are XML files, and others are in other formats, but all are accessible manually, or through the module, with Python. Let's talk about viewing them manually. Depending on your installation, your nltk_data directory may be hidden in one of several locations. To find out where it is, go to your Python directory, where the NLTK module lives. If you don't know where that is, use the following code:

import nltk
print(nltk.__file__)

Run it, and the output will be the location of the NLTK module's __init__.py. Head into the NLTK directory and look for the data.py file. The important part of the code is:

if sys.platform.startswith('win'):
    # Common locations on Windows:
    path += [
        str(r'C:\nltk_data'), str(r'D:\nltk_data'), str(r'E:\nltk_data'),
        os.path.join(sys.prefix, str('nltk_data')),
        os.path.join(sys.prefix, str('lib'), str('nltk_data')),
        os.path.join(os.environ.get(str('APPDATA'), str('C:\\')), str('nltk_data'))
    ]
else:
    # Common locations on UNIX & OS X:
    path += [
        str('/usr/share/nltk_data'),
        str('/usr/local/share/nltk_data'),
        str('/usr/lib/nltk_data'),
        str('/usr/local/lib/nltk_data')
    ]

There, you can see the various possible directories for nltk_data. If you're on Windows, it is probably in your appdata. To get there, open your file browser, go to the address bar at the top, and type %appdata%. Next, click on Roaming and find the nltk_data directory. There, you will find your corpus files. The complete path looks like this:

C:\Users\swayam.mittal\AppData\Roaming\nltk_data\corpora

Here you have all of the available corpora, including books, chat logs, movie reviews and more. We will now discuss accessing these documents through NLTK. As you can see, these are primarily text documents, so you could just open and read them with plain Python code. That said, the NLTK module has some nice methods for handling the corpora, so you may find it practical to use them. Here is an example of how we can open the Gutenberg Bible and read the first few lines:

from nltk.tokenize import sent_tokenize, PunktSentenceTokenizer
from nltk.corpus import gutenberg

# sample text
sample = gutenberg.raw("bible-kjv.txt")

tok = sent_tokenize(sample)

for x in range(5):
    print(tok[x])

One of the more advanced data sets here is wordnet. WordNet is a collection of words, definitions, examples of their use, synonyms, antonyms, and more. Next, we will use wordnet in depth.

X. NLTK and WordNet

WordNet is a lexical database of English, created by Princeton, and is part of the NLTK corpus. You can use WordNet together with the NLTK module to find the meanings of words, synonyms, antonyms, and more. Let's go through some examples. First, you will need to import wordnet:

from nltk.corpus import wordnet

Then we plan to use the word "program" to find synsets:

syns = wordnet.synsets("program")

An example of a synset:

print(syns[0].name())
# plan.n.01

Just the word:

print(syns[0].lemmas()[0].name())
# plan

The definition of the first synset:

print(syns[0].definition())
# a series of steps to be carried out or goals to be accomplished

Examples of the word in use:

print(syns[0].examples())
# ['they drew up a six-step plan', 'they discussed plans for a new bond issue']

Next, how do we identify the synonyms and antonyms of a word? The lemmas will be synonyms, and then you can use .antonyms to find the antonyms of the lemmas.
So we can populate some lists like this:

synonyms = []
antonyms = []

for syn in wordnet.synsets("good"):
    for l in syn.lemmas():
        synonyms.append(l.name())
        if l.antonyms():
            antonyms.append(l.antonyms()[0].name())

print(set(synonyms))
print(set(antonyms))

'''
{'beneficial', 'just', 'upright', 'thoroughly', 'in_force', 'well', 'skilful', 'skillful', 'sound', 'unspoiled', 'expert', 'proficient', 'in_effect', 'honorable', 'adept', 'secure', 'commodity', 'estimable', 'soundly', 'right', 'respectable', 'good', 'serious', 'ripe', 'salutary', 'dear', 'practiced', 'goodness', 'safe', 'effective', 'unspoilt', 'dependable', 'undecomposed', 'honest', 'full', 'near', 'trade_good'}
{'evil', 'evilness', 'bad', 'badness', 'ill'}
'''

As you can see, we get far more synonyms than antonyms, because we only looked up the antonyms of the first lemma, but you could easily balance this by performing the exact same process for the word "bad". Next, we can easily use WordNet to compare the similarity of two words and their tenses, using the Wu and Palmer method for semantic relatedness. Let's compare the nouns "ship" and "boat":

w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('boat.n.01')
print(w1.wup_similarity(w2))
# 0.9090909090909091

w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('car.n.01')
print(w1.wup_similarity(w2))
# 0.6956521739130435

w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('cat.n.01')
print(w1.wup_similarity(w2))
# 0.38095238095238093

Next, we move on and start discussing the topic of text classification.

XI. NLTK Text Classification

Now that we are familiar with NLTK, let's try to tackle text classification. The goal of text classification can be quite broad. Maybe we are trying to classify text as political or military. Maybe we are trying to classify it by the gender of the author. A fairly popular text classification task is to label a body of text as either spam or not spam, as in an email filter. In our case, we are going to build a sentiment analysis algorithm. To do this, we start with a database of movie reviews that belongs to the NLTK corpus. From there, we use the vocabulary as "features", each word being part of a "positive" or "negative" movie review. The NLTK corpus movie_reviews data set has the reviews, and they are labeled as positive or negative. This means we can train and test with this data. First, let's preprocess our data.

import nltk
import random
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

print(documents[1])

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
print(all_words.most_common(15))
print(all_words["stupid"])

It may take a moment to run this script, because the movie review data set is somewhat large. Let's go over what happened here. After importing the data set we want, you see:
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

Basically, in plain English, the above code translates to: for each category (we have positive and negative), take every file ID (each review has its own ID), then store the word_tokenized version (a list of words) for that file ID, followed by the positive or negative label, in one big list. Next, we use random to shuffle our documents. This is because we are going to train and test. If we left them in order, we might train on all of the negative reviews and some of the positive ones, and then test only against positive reviews. We don't want that, so we shuffle the data. Then, so you can see the data you are working with, we print out documents[1], which is a big list where the first element is the list of words and the second element is the "pos" or "neg" label. Next, we want to collect all of the words we find, so we can build a huge list of typical words. From here, we can compute a frequency distribution and then find the most common words. As you can see, the most popular "words" are actually punctuation marks and things like "the" and "a", but valid vocabulary comes up quickly. We intend to store a few thousand of the most popular words, so this shouldn't be a problem.

print(all_words.most_common(15))

The line above gives the 15 most common words. You can also find the number of occurrences of a specific word like this:

print(all_words["stupid"])

Next, we start storing our words as features of positive or negative movie reviews.

XII. Converting Words into Features with NLTK

In this tutorial, we build on the previous one and compile a feature list of words from positive and negative reviews, to see trends in specific types of words in positive or negative reviews. Initially, our code looks like this:

import nltk
import random
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:3000]

Almost the same as before, except there is now a new variable, word_features, which contains the top 3,000 most common words. Next, we build a simple function that looks for these top 3,000 words in our positive and negative documents, marking their presence as either true or false:

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

Then we can print out one feature set:

print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))

We can then do this for all of our documents, saving the feature booleans and their respective positive or negative categories:

featuresets = [(find_features(rev), category) for (rev, category) in documents]

Awesome, now we have features and labels. What's next? Usually, the next step is to go ahead and train an algorithm, then test it. So let's do exactly that, starting with the naive Bayes classifier in the next tutorial!

XIII. NLTK Naive Bayes Classifier

It's time to choose an algorithm, split our data into training and testing sets, and get going! The algorithm we use first is the naive Bayes classifier. It is a very popular algorithm for text classification, so it makes sense to try it first.
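For intuition before we hand things over to NLTK, here is a toy sketch of the naive Bayes idea: score each class by its prior probability times the product of per-word likelihoods, and pick the highest-scoring class. All of the counts below are made up purely for illustration:

from math import log

# Hypothetical word counts per class, purely for illustration
counts = {
    "pos": {"great": 8, "boring": 1},
    "neg": {"great": 2, "boring": 7},
}
priors = {"pos": 0.5, "neg": 0.5}
vocab = {"great", "boring"}

def toy_score(words, label):
    total = sum(counts[label].values())
    # Work in log space to avoid underflow on long documents
    score = log(priors[label])
    for w in words:
        # Add-one smoothing so unseen words don't zero out the product
        score += log((counts[label].get(w, 0) + 1) / (total + len(vocab)))
    return score

review = ["great", "great", "boring"]
print(max(["pos", "neg"], key=lambda label: toy_score(review, label)))
# pos

NLTK's NaiveBayesClassifier does essentially this bookkeeping for us, at scale, across all 3,000 word features.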
However, before we can train and test our algorithm, we first need to break the data into a training set and a testing set. You could train and test on the same data set, but that would introduce some serious bias, so you should never train and test against exactly the same data. Since we have shuffled our data set, we first use 1,900 of the shuffled reviews, containing both positive and negative ones, as the training set. Then we can test against the last 100 to see how accurate we are. This is called supervised machine learning, because we show the machine the data and tell it "this data is positive" or "this data is negative". Then, after training is complete, we show the machine some new data and ask it what it thinks the label of the new data is, based on what we taught it before. We can split the data like so:

# set that we'll train our classifier with
training_set = featuresets[:1900]

# set that we'll test against.
testing_set = featuresets[1900:]

Next, we can define and train our classifier:

classifier = nltk.NaiveBayesClassifier.train(training_set)

First we simply invoke the naive Bayes classifier, then we train it with .train() in one line. Simple enough; now it is trained. Next, we can test it:

print("Classifier accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)

And there you have your answer. In case you missed it, the reason we can "test" the data is that we still have the correct answers. So in testing, we present the data to the computer without giving it the correct answer. If it correctly guesses what we know the answer to be, then the computer got it right. Given the shuffling we did, you and I might get different accuracies, but you should see something in the 60-75% range on average. Next, we can look at the most valuable words in positive or negative reviews:

classifier.show_most_informative_features(15)

This differs from run to run, but you should see something like this:

Most Informative Features
insulting = True        neg : pos = 10.6 : 1.0
ludicrous = True        neg : pos = 10.1 : 1.0
winslet = True          pos : neg = 9.0 : 1.0
detract = True          pos : neg = 8.4 : 1.0
breathtaking = True     pos : neg = 8.1 : 1.0
silverstone = True      neg : pos = 7.6 : 1.0
excruciatingly = True   neg : pos = 7.6 : 1.0
warns = True            pos : neg = 7.0 : 1.0
tracy = True            pos : neg = 7.0 : 1.0
insipid = True          neg : pos = 7.0 : 1.0
freddie = True          neg : pos = 7.0 : 1.0
damon = True            pos : neg = 5.9 : 1.0
debate = True           pos : neg = 5.9 : 1.0
ordered = True          pos : neg = 5.8 : 1.0
lang = True             pos : neg = 5.7 : 1.0

What this tells you is the ratio of each word's occurrences in negative versus positive reviews, or vice versa. So here we can see that the word "insulting" appears 10.6 times more often in negative reviews than in positive ones, and "ludicrous" 10.1 times. Now, suppose you are completely satisfied with your results and want to move on, perhaps using this classifier to predict things right away. It would be very impractical to retrain the classifier every single time you need to use it. Instead, you can save the classifier with the pickle module. We do that next.

XIV. Saving the Classifier with NLTK

Training classifiers and machine learning algorithms can take a very long time, especially on larger data sets. Ours is actually quite small. Can you imagine having to train the classifier every single time you want to use it? Horrible!
Instead, we can use the pickle module and serialize our classifier object, so that all we have to do later is simply load the file. So how do we do that? The first step is to save the object. To do this, first import pickle at the top of your script, then, after training the classifier with .train(), call these lines:

save_classifier = open("naivebayes.pickle", "wb")
pickle.dump(classifier, save_classifier)
save_classifier.close()

This opens a pickle file, ready to write some data in bytes. Then we use pickle.dump() to dump the data. The first argument to pickle.dump() is what you are writing, and the second is where you are writing it. After that, we close the file as required, which means we now have a pickled, serialized object in our script's directory! Next, how do we go about using this classifier? A .pickle file is a serialized object; all we need to do now is read it into memory, which is about as easy as reading any other file. Like this:

classifier_f = open("naivebayes.pickle", "rb")
classifier = pickle.load(classifier_f)
classifier_f.close()

Here we go through a very similar process. We open the file to read bytes, then we use pickle.load() to load the file and save the data to the classifier variable. Then we close the file, and that's it. We now have the same classifier object as before! And whenever we want to classify something, we no longer need to retrain our classifier. While this is all fine and dandy, we may not be thrilled with the 60-75% accuracy we are getting. What about other classifiers? As it turns out, there are many classifiers available, but we need the scikit-learn (sklearn) module. Fortunately, the NLTK developers recognized the value of incorporating the sklearn module into NLTK, and they built us a little API. This is what we do in the next tutorial.

XV. NLTK and Sklearn

Now that we have seen how easy it is to use a classifier, we want to try more of them! The best module for this in Python is scikit-learn (sklearn). If you want to learn more about the scikit-learn module, I have some tutorials on machine learning with scikit-learn. Luckily for us, the people behind NLTK saw the value of incorporating the sklearn module into the NLTK classifier methodology. As such, they created the SklearnClassifier API. To use it, you just import it like this:

from nltk.classify.scikitlearn import SklearnClassifier

From here, you can use just about any sklearn classifier. For example, let's bring in a couple more variants of the naive Bayes algorithm:

from sklearn.naive_bayes import MultinomialNB, BernoulliNB

How do you use these? It turns out this is very simple.

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MultinomialNB accuracy percent:", nltk.classify.accuracy(MNB_classifier, testing_set))

BNB_classifier = SklearnClassifier(BernoulliNB())
BNB_classifier.train(training_set)
print("BernoulliNB accuracy percent:", nltk.classify.accuracy(BNB_classifier, testing_set))

It's that simple.
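A nice property of the wrapper is that a wrapped sklearn model behaves like any other NLTK classifier, so you can call .classify() on it with a feature set from our find_features function. A small sketch, assuming the training code above has already run (the sample review text is made up):

# A wrapped sklearn model supports the usual NLTK classify() call
review_words = "a great and wonderful movie with a breathtaking plot".split()
print(MNB_classifier.classify(find_features(review_words)))
# prints 'pos' or 'neg', depending on the trained model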
Let's bring in some more:

from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC

Now all of our classifiers should look something like this:

print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(15)

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)

SVC_classifier = SklearnClassifier(SVC())
SVC_classifier.train(training_set)
print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)

LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)

NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)

Running that should produce something like this:

Original Naive Bayes Algo accuracy percent: 63.0
Most Informative Features
thematic = True         pos : neg = 9.1 : 1.0
secondly = True         pos : neg = 8.5 : 1.0
narrates = True         pos : neg = 7.8 : 1.0
rounded = True          pos : neg = 7.1 : 1.0
supreme = True          pos : neg = 7.1 : 1.0
layered = True          pos : neg = 7.1 : 1.0
crappy = True           neg : pos = 6.9 : 1.0
uplifting = True        pos : neg = 6.2 : 1.0
ugh = True              neg : pos = 5.3 : 1.0
mamet = True            pos : neg = 5.1 : 1.0
gaining = True          pos : neg = 5.1 : 1.0
wanda = True            neg : pos = 4.9 : 1.0
onset = True            neg : pos = 4.9 : 1.0
fantastic = True        pos : neg = 4.5 : 1.0
kentucky = True         pos : neg = 4.4 : 1.0

MNB_classifier accuracy percent: 66.0
BernoulliNB_classifier accuracy percent: 72.0
LogisticRegression_classifier accuracy percent: 64.0
SGDClassifier_classifier accuracy percent: 61.0
SVC_classifier accuracy percent: 45.0
LinearSVC_classifier accuracy percent: 68.0
NuSVC_classifier accuracy percent: 59.0

So we can see that SVC is wrong more often than it is right, so we should probably discard it. But then what? The next thing we can try is using all of these algorithms at once: an algorithm built out of algorithms! To do this, we create another classifier that bases its result on the results of the other algorithms. It's a bit like a voting system, so we just need an odd number of algorithms. This is what we discuss in the next tutorial.

XVI. Combining Algorithms with NLTK

Now we know how to use a whole bunch of algorithmic classifiers. But, like a kid in a candy store who is told they can pick only one, we may find it difficult to select just one classifier.
The good news is that you don't have to! Combining classifiers is a commonly used technique, implemented by creating a voting system: each algorithm gets one vote, and the classification with the most votes wins. To do this, we want our new classifier to work like a typical NLTK classifier, with all of the usual methods. Simply enough, using object-oriented programming, we can make our class inherit from the NLTK classifier class. To do this we import:

from nltk.classify import ClassifierI
from statistics import mode

We also import mode, because this is how we choose the most frequent vote. Now let's build our classifier class:

class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers

We call our class VoteClassifier, and it inherits from NLTK's ClassifierI. Next, we assign the list of classifiers passed to our class to self._classifiers. Then we create our own classification method. We name it .classify so we can invoke it later just like a traditional NLTK classifier.

def classify(self, features):
    votes = []
    for c in self._classifiers:
        v = c.classify(features)
        votes.append(v)
    return mode(votes)

Simply enough, all we are doing here is iterating through our list of classifier objects. Then, for each one, we ask it to classify based on the features. The classification is treated as a vote. After we are done iterating, we return mode(votes), which just returns the most popular vote. This is all we really need, but I think one more parameter, confidence, would be useful. Since we have a voting algorithm, we can also tally the votes for and against the winning choice, and call that "confidence". For example, a 3/5 vote carries weaker confidence than a 5/5 vote, so we can literally return the vote ratio as a measure of confidence.
Here is our confidence method:

def confidence(self, features):
    votes = []
    for c in self._classifiers:
        v = c.classify(features)
        votes.append(v)
    choice_votes = votes.count(mode(votes))
    conf = choice_votes / len(votes)
    return conf

Now let's put everything together:

import nltk
import random
from nltk.corpus import movie_reviews
from nltk.classify.scikitlearn import SklearnClassifier
import pickle

from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC

from nltk.classify import ClassifierI
from statistics import mode


class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers

    def classify(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        return mode(votes)

    def confidence(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        choice_votes = votes.count(mode(votes))
        conf = choice_votes / len(votes)
        return conf

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:3000]

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

#print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))

featuresets = [(find_features(rev), category) for (rev, category) in documents]

training_set = featuresets[:1900]
testing_set = featuresets[1900:]

#classifier = nltk.NaiveBayesClassifier.train(training_set)

classifier_f = open("naivebayes.pickle", "rb")
classifier = pickle.load(classifier_f)
classifier_f.close()

print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(15)

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)

##SVC_classifier = SklearnClassifier(SVC())
##SVC_classifier.train(training_set)
##print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)

LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)

NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)
voted_classifier = VoteClassifier(classifier,
                                  NuSVC_classifier,
                                  LinearSVC_classifier,
                                  SGDClassifier_classifier,
                                  MNB_classifier,
                                  BernoulliNB_classifier,
                                  LogisticRegression_classifier)

print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100)

print("Classification:", voted_classifier.classify(testing_set[0][0]), "Confidence %:", voted_classifier.confidence(testing_set[0][0])*100)
print("Classification:", voted_classifier.classify(testing_set[1][0]), "Confidence %:", voted_classifier.confidence(testing_set[1][0])*100)
print("Classification:", voted_classifier.classify(testing_set[2][0]), "Confidence %:", voted_classifier.confidence(testing_set[2][0])*100)
print("Classification:", voted_classifier.classify(testing_set[3][0]), "Confidence %:", voted_classifier.confidence(testing_set[3][0])*100)
print("Classification:", voted_classifier.classify(testing_set[4][0]), "Confidence %:", voted_classifier.confidence(testing_set[4][0])*100)
print("Classification:", voted_classifier.classify(testing_set[5][0]), "Confidence %:", voted_classifier.confidence(testing_set[5][0])*100)

So at the end, we run the voted classifier against a few example documents from the test set. Our full output:

Original Naive Bayes Algo accuracy percent: 66.0
Most Informative Features
thematic = True         pos : neg = 9.1 : 1.0
secondly = True         pos : neg = 8.5 : 1.0
narrates = True         pos : neg = 7.8 : 1.0
layered = True          pos : neg = 7.1 : 1.0
rounded = True          pos : neg = 7.1 : 1.0
supreme = True          pos : neg = 7.1 : 1.0
crappy = True           neg : pos = 6.9 : 1.0
uplifting = True        pos : neg = 6.2 : 1.0
ugh = True              neg : pos = 5.3 : 1.0
gaining = True          pos : neg = 5.1 : 1.0
mamet = True            pos : neg = 5.1 : 1.0
wanda = True            neg : pos = 4.9 : 1.0
onset = True            neg : pos = 4.9 : 1.0
fantastic = True        pos : neg = 4.5 : 1.0
milos = True            pos : neg = 4.4 : 1.0

MNB_classifier accuracy percent: 67.0
BernoulliNB_classifier accuracy percent: 67.0
LogisticRegression_classifier accuracy percent: 68.0
SGDClassifier_classifier accuracy percent: 57.99999999999999
LinearSVC_classifier accuracy percent: 67.0
NuSVC_classifier accuracy percent: 65.0
voted_classifier accuracy percent: 65.0
Classification: neg Confidence %: 100.0
Classification: pos Confidence %: 57.14285714285714
Classification: neg Confidence %: 57.14285714285714
Classification: neg Confidence %: 57.14285714285714
Classification: pos Confidence %: 57.14285714285714
Classification: pos Confidence %: 85.71428571428571

XVII. Investigating Bias with NLTK

In this tutorial, we discuss some issues. The main problem is that we have a fairly biased algorithm. You can test this yourself by commenting out the shuffling of the documents, then training with the first 1,900 and leaving the last 100 (all positive) reviews for testing. Test it, and you will find your accuracy is very poor. Conversely, you can test with the first 100 documents, all of which are negative, and train with the remaining 1,900. Here you will find your accuracy is very high. This is a bad sign. It could mean many things, and there are many options for addressing it. That said, the project we have in mind suggests we go ahead and use a different data set anyway, so we will do that. In the end, we will find that this new data set still has some bias, namely that it picks the negative label more often. The reason is that the negative reviews tend to be more strongly negative than the positive reviews are positive. Handling this could be done with some simple weighting, but it can also get complicated.
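If you want to see the bias for yourself, here is a minimal sketch of the no-shuffle test described above, assuming the documents list and find_features function from the combined script (the corpus is ordered with all negative reviews first):

# Deliberately skip random.shuffle(documents): the corpus is ordered neg, then pos
featuresets = [(find_features(rev), category) for (rev, category) in documents]

training_set = featuresets[:1900]   # all 1,000 neg reviews plus 900 pos
testing_set = featuresets[1900:]    # the last 100, all pos

biased_classifier = nltk.NaiveBayesClassifier.train(training_set)
print("Accuracy on an all-positive test set:",
      nltk.classify.accuracy(biased_classifier, testing_set) * 100)

Trained this way, the classifier sees a skewed class balance, and its accuracy on the all-positive test set drops sharply.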
Maybe that is a tutorial for another day. For now, we are going to grab a new data set, which we discuss in the next tutorial.

XVIII. Improving Training Data for Sentiment Analysis with NLTK

So now it is time to train on a new data set. Our goal is to analyze Twitter sentiment, so we want every positive and negative statement in the data set to be short. It just so happens that I have a set of 5,300+ positive and 5,300+ negative short movie reviews, which are much shorter pieces of text. We should be able to get more accuracy from this larger training set, and it fits Twitter tweets much better. I have hosted both files; you can find them by downloading the short reviews. Save these files as positive.txt and negative.txt. Now we can build a new data set much as before. What needs to change? We need a new way to create our documents variable, and then a new way to create the all_words variable. No problem, really; here is how I did it:

short_pos = open("short_reviews/positive.txt", "r").read()
short_neg = open("short_reviews/negative.txt", "r").read()

documents = []

for r in short_pos.split('\n'):
    documents.append((r, "pos"))

for r in short_neg.split('\n'):
    documents.append((r, "neg"))

all_words = []

short_pos_words = word_tokenize(short_pos)
short_neg_words = word_tokenize(short_neg)

for w in short_pos_words:
    all_words.append(w.lower())

for w in short_neg_words:
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

Next, we also need to adjust our feature-finding function to tokenize the document by word, because our new samples don't have the handy .words() feature. I also went ahead and increased the number of most common words:

word_features = list(all_words.keys())[:5000]

def find_features(document):
    words = word_tokenize(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]

random.shuffle(featuresets)

Other than that, the rest is the same. Here is the complete script, just in case you or I missed something. This process takes a while, so you may want to go do something else in the meantime. It took me about 30-40 minutes to run it in full, on an i7 3930k. For a typical processor at the time of writing (2015), it might take several hours. But this is a one-off process.
import nltk
import random
from nltk.corpus import movie_reviews
from nltk.classify.scikitlearn import SklearnClassifier
import pickle

from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC

from nltk.classify import ClassifierI
from statistics import mode

from nltk.tokenize import word_tokenize


class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers

    def classify(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        return mode(votes)

    def confidence(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        choice_votes = votes.count(mode(votes))
        conf = choice_votes / len(votes)
        return conf

short_pos = open("short_reviews/positive.txt", "r").read()
short_neg = open("short_reviews/negative.txt", "r").read()

documents = []

for r in short_pos.split('\n'):
    documents.append((r, "pos"))

for r in short_neg.split('\n'):
    documents.append((r, "neg"))

all_words = []

short_pos_words = word_tokenize(short_pos)
short_neg_words = word_tokenize(short_neg)

for w in short_pos_words:
    all_words.append(w.lower())

for w in short_neg_words:
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:5000]

def find_features(document):
    words = word_tokenize(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

#print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))

featuresets = [(find_features(rev), category) for (rev, category) in documents]

random.shuffle(featuresets)

# positive data example:
training_set = featuresets[:10000]
testing_set = featuresets[10000:]

### negative data example:
##training_set = featuresets[100:]
##testing_set = featuresets[:100]

classifier = nltk.NaiveBayesClassifier.train(training_set)
print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(15)

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)

##SVC_classifier = SklearnClassifier(SVC())
##SVC_classifier.train(training_set)
##print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)

LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)

NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
accuracy percent:" , (nltk.classify.accuracy(NuSVC_classifier, testing_set))* 100 ) Voted_classifier = VoteClassifier( NuSVC_classifier, LinearSVC_classifier, MNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier) Print( "voted_classifier accuracy percent:" , (nltk.classify.accuracy(voted_classifier, testing_set))* 100 ) Output: Original Naive Bayes Algo accuracy percent: 66.26506024096386 Most Informative Features Refreshing = True pos : neg = 13.6 : 1.0 captures = True pos : neg = 11.3 : 1.0 stupid = True neg : pos = 10.7 : 1.0 tender = True pos : neg = 9.6 : 1.0 meandering = True neg : pos = 9.1 : 1.0 Tv = True neg : pos = 8.6 : 1.0 low-key = TruePos : neg = 8.3 : 1.0 thoughtful = True pos : neg = 8.1 : 1.0 banal = True neg : pos = 7.7 : 1.0 amateurish = True neg : pos = 7.7 : 1.0 terrific = True pos : neg = 7.6 : 1.0 record = True Pos : neg = 7.6 : 1.0 captivating = True pos : neg = 7.6 :1.0 portrait = True pos : neg = 7.4 : 1.0 culture = True pos : neg = 7.3 : 1.0 MNB_classifier accuracy percent: 65.8132530120482 BernoulliNB_classifier accuracy percent: 66.71686746987952 LogisticRegression_classifier accuracy percent: 67.16867469879519 SGDClassifier_classifier accuracy percent: 65.8132530120482 LinearSVC_classifier accuracy percent: 66.71686746987952 NuSVC_classifier accuracy percent: 60.09036144578314 voted_classifier accuracy percent: 65.66265060240963 Yes, I bet you spent a while, so in the next tutorial we will talk about pickle everything! XIX. Use NLTK to create modules for sentiment analysis With this new data set and new classifiers, we can move on. As you may have noticed, this new data set takes longer to train because it is a larger collection. I have shown you that by pickel serializing or serializing the trained classifiers, we can actually save a lot of time. These classifiers are just objects. I have shown you how to use it pickel to implement it, so I encourage you to try it yourself. If you need help, I will paste the complete code... but be careful, do it yourself! This process takes a while. You may want to do something else. It took me about 30–40 minutes to get it all running, and I ran it on the i7 3930k. At the time of writing this article (2015), a general processor might take several hours. But this is a one-off process. Import nltk import random #from nltk.corpus import movie_reviews from nltk.classify.scikitlearn import SklearnClassifier import pickle from sklearn.naive_bayes import MultinomialNB , BernoulliNB from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk. 
class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers

    def classify(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        return mode(votes)

    def confidence(self, features):
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        choice_votes = votes.count(mode(votes))
        conf = choice_votes / len(votes)
        return conf

short_pos = open("short_reviews/positive.txt", "r").read()
short_neg = open("short_reviews/negative.txt", "r").read()

# move this up here
all_words = []
documents = []

# j is adjective, r is adverb, and v is verb
#allowed_word_types = ["J","R","V"]
allowed_word_types = ["J"]

for p in short_pos.split('\n'):
    documents.append((p, "pos"))
    words = word_tokenize(p)
    pos = nltk.pos_tag(words)
    for w in pos:
        if w[1][0] in allowed_word_types:
            all_words.append(w[0].lower())

for p in short_neg.split('\n'):
    documents.append((p, "neg"))
    words = word_tokenize(p)
    pos = nltk.pos_tag(words)
    for w in pos:
        if w[1][0] in allowed_word_types:
            all_words.append(w[0].lower())

save_documents = open("pickled_algos/documents.pickle", "wb")
pickle.dump(documents, save_documents)
save_documents.close()

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:5000]

save_word_features = open("pickled_algos/word_features5k.pickle", "wb")
pickle.dump(word_features, save_word_features)
save_word_features.close()

def find_features(document):
    words = word_tokenize(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]

random.shuffle(featuresets)
print(len(featuresets))

testing_set = featuresets[10000:]
training_set = featuresets[:10000]

classifier = nltk.NaiveBayesClassifier.train(training_set)
print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(15)

###############
save_classifier = open("pickled_algos/originalnaivebayes5k.pickle", "wb")
pickle.dump(classifier, save_classifier)
save_classifier.close()

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

save_classifier = open("pickled_algos/MNB_classifier5k.pickle", "wb")
pickle.dump(MNB_classifier, save_classifier)
save_classifier.close()

BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

save_classifier = open("pickled_algos/BernoulliNB_classifier5k.pickle", "wb")
pickle.dump(BernoulliNB_classifier, save_classifier)
save_classifier.close()

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

save_classifier = open("pickled_algos/LogisticRegression_classifier5k.pickle", "wb")
pickle.dump(LogisticRegression_classifier, save_classifier)
save_classifier.close()

LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
"LinearSVC_classifier accuracy percent:" , (nltk.classify.accuracy(LinearSVC_classifier, testing_set))* 100 ) Save_classifier = open( "pickled_algos/LinearSVC_classifier5k.pickle" , "wb" ) Pickle.dump(LinearSVC_classifier, save_classifier) Save_classifier.close() ##NuSVC_classifier = SklearnClassifier(NuSVC()) ##NuSVC_classifier.train(training_set) ##print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100) SGDC_classifier = SklearnClassifier(SGDClassifier()) SGDC_classifier.train(training_set) Print( "SGDClassifier accuracy percent:" ,nltk.classify.accuracy(SGDC_classifier, testing_set)* 100 ) Save_classifier = open( "pickled_algos/SGDC_classifier5k.pickle" , "wb" ) Pickle.dump(SGDC_classifier, save_classifier) Save_classifier.close() Now you only need to run it once. If you wish, you can run it anytime, but now you are ready to create a sentiment analysis module. This is what we call sentiment_mod.py a file: #File: sentiment_mod.py Import nltk import random #from nltk.corpus import movie_reviews from nltk.classify.scikitlearn import SklearnClassifier import pickle from sklearn.naive_bayes import MultinomialNB , BernoulliNB from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk. Classify import ClassifierI from statistics import mode from nltk.tokenize import Word_tokenize Class VoteClassifier (ClassifierI) : def __init__ (self, *classifiers) : Self._classifiers = classifiers Def classify (self, features) : Votes = [] For c in self._classifiers: v = c.classify(features) Votes.append(v) Return mode(votes) Def confidence (self, features) : Votes = [] For c in self._classifiers: v = c.classify(features) Votes.append(v) Choice_votes = votes.count(mode(votes)) Conf = choice_votes / len(votes) Return conf Documents_f = open( "pickled_algos/documents.pickle" , "rb" ) Documents = pickle.load(documents_f) Documents_f.close() Word_features5k_f = open( "pickled_algos/word_features5k.pickle" , "rb" ) Word_features = pickle.load(word_features5k_f) Word_features5k_f.close() Def find_features (document) : Words = word_tokenize(document) Features = {} For w in word_features: Features[w] = (w in words) Return features Featuresets_f = open( "pickled_algos/featuresets.pickle" , "rb" ) Featuresets = pickle.load(featuresets_f) Featuresets_f.close() Random.shuffle(featuresets) Print(len(featuresets)) Testing_set = featuresets[ 10000 :] Training_set = featuresets[: 10000 ] Open_file = open( "pickled_algos/originalnaivebayes5k.pickle" , "rb" ) Classifier = pickle.load(open_file) Open_file.close() Open_file = open( "pickled_algos/MNB_classifier5k.pickle" , "rb" ) MNB_classifier = pickle.load(open_file) Open_file.close() Open_file = open( "pickled_algos/BernoulliNB_classifier5k.pickle" , "rb" ) BernoulliNB_classifier = pickle.load(open_file) Open_file.close() Open_file = open( "pickled_algos/LogisticRegression_classifier5k.pickle" , "rb" ) LogisticRegression_classifier = pickle.load(open_file) Open_file.close() Open_file = open( "pickled_algos/LinearSVC_classifier5k.pickle" , "rb" ) LinearSVC_classifier = pickle.load(open_file) Open_file.close() Open_file = open( "pickled_algos/SGDC_classifier5k.pickle" , "rb" ) SGDC_classifier = pickle.load(open_file) Open_file.close() Voted_classifier = VoteClassifier( Classifier, LinearSVC_classifier, MNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier) Def sentiment (text) : Feats = find_features(text) Return 
So here, apart from the final function, there is nothing new; it is all very simple. That function is the key to interacting with the module from here on out. The function, which we call "sentiment", takes one parameter: text. From there, we break down the features with the find_features function we built. Now all we have to do is use our voted classifier to return the classification, along with the confidence of that classification. With this, we can now use this file, and the sentiment function, as a module. Here is a sample script that uses the module:

import sentiment_mod as s

print(s.sentiment("This movie was awesome! The acting was great, plot was wonderful, and there were pythons...so yea!"))
print(s.sentiment("This movie was utter junk. There were absolutely 0 pythons. I don't see what the point was at all. Horrible movie, 0/10"))

As expected, the review of a movie with pythons is clearly good, and the movie with no pythons is junk. Both come back with 100% confidence. Importing the module took me about 5 seconds, because we saved the classifiers; without saving, it could take 30+ minutes. Thanks to pickle, your time will differ greatly depending on your processor. If you keep going down this path, I will also say you may want to look at joblib. Now that we have this awesome module and it works easily, what next? I suggest we head to Twitter for live sentiment analysis!

XX. NLTK Twitter Sentiment Analysis

Now that we have a sentiment analysis module, we can apply it to just about any text, but preferably short text, like tweets! To do this, we combine this tutorial with the Twitter Streaming API tutorial. The starting code for that tutorial is:

from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener

#consumer key, consumer secret, access token, access secret.
ckey="fsdfasdfsafsffa"
csecret="asdfsadfsadfsadf"
atoken="asdf-aassdfs"
asecret="asdfsadfsdafsdafs"

class listener(StreamListener):

    def on_data(self, data):
        print(data)
        return(True)

    def on_error(self, status):
        print(status)

auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)

twitterStream = Stream(auth, listener())
twitterStream.filter(track=["car"])

This is enough to print all of the data from a live stream of tweets containing the word "car". We can use the json module with json.loads(data) to load the data variable, and then reference the specific tweet text:

tweet = all_data["text"]

Since we have a tweet, we can easily pass it to our sentiment_mod module.

from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import json
import sentiment_mod as s

#consumer key, consumer secret, access token, access secret.
ckey="asdfsafsafsaf"
csecret="asdfasdfsadfsa"
atoken="asdfsadfsafsaf-asdfsaf"
asecret="asdfsadfsadfsadfsadfsad"

from twitterapistuff import *

class listener(StreamListener):

    def on_data(self, data):
        all_data = json.loads(data)

        tweet = all_data["text"]
        sentiment_value, confidence = s.sentiment(tweet)
        print(tweet, sentiment_value, confidence)

        if confidence*100 >= 80:
            output = open("twitter-out.txt", "a")
            output.write(sentiment_value)
            output.write('\n')
            output.close()

        return True

    def on_error(self, status):
        print(status)

auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)

twitterStream = Stream(auth, listener())
twitterStream.filter(track=["happy"])

On top of the printout, we also save the results to the output file twitter-out.txt. But what data analysis is complete without a graph? Let's combine yet another tutorial and draw a live streaming graph of the sentiment analysis on the Twitter API.

XXI. Graphing Live Twitter Sentiment Analysis with NLTK

Now that we have live data coming in from the Twitter Streaming API, why not also have a live graph showing the sentiment trend? To do this, we combine this tutorial with the matplotlib live-graphing tutorial. If you want to learn more about how the code works, see that tutorial. Otherwise:

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import style
import time

style.use("ggplot")

fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)

def animate(i):
    pullData = open("twitter-out.txt", "r").read()
    lines = pullData.split('\n')

    xar = []
    yar = []

    x = 0
    y = 0

    for l in lines[-200:]:
        x += 1
        if "pos" in l:
            y += 1
        elif "neg" in l:
            y -= 1

        xar.append(x)
        yar.append(y)

    ax1.clear()
    ax1.plot(xar, yar)

ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()

XXII. Stanford NER Tagger and Named Entity Recognition

The Stanford NER tagger provides an alternative to NLTK's named entity recognition (NER) classifier. This tagger is largely seen as the standard for named entity recognition, but since it uses an advanced statistical learning algorithm, its computational overhead is greater than the options offered by NLTK. One big advantage of the Stanford NER tagger is that it offers several different models for extracting named entities. We can use any of the following:

A three-class model for recognizing locations, persons, and organizations
A four-class model for recognizing locations, persons, organizations, and miscellaneous entities
A seven-class model for recognizing locations, persons, organizations, times, money, percents, and dates

To move forward, we need to download the models and the jar file, since the NER classifier is written in Java. These are available for free from the Stanford Natural Language Processing Group. Conveniently for us, NLTK provides a wrapper for the Stanford tagger, so we can use it from the best language ever (Python, of course)!
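One practical note: because the Stanford tagger runs on Java, NLTK has to be able to find a Java binary. If yours is not on the PATH, you can point NLTK at it with nltk.internals.config_java; the path below is only an example and will vary by system:

import nltk

# Example path; substitute the location of your own Java binary
nltk.internals.config_java("/usr/bin/java")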
The parameters passed to the StanfordNERTagger class are:

The path to the classification model (we use the three-class model below)
The path to the Stanford tagger jar file
The training data encoding (default is ASCII)

Here is how we set it up to tag a sentence using the three-class model:

# -*- coding: utf-8 -*-

from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

st = StanfordNERTagger('/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
                       '/usr/share/stanford-ner/stanford-ner.jar',
                       encoding='utf-8')

text = 'While in France, Christine Lagarde discussed short-term stimulus efforts in a recent interview with the Wall Street Journal.'

tokenized_text = word_tokenize(text)
classified_text = st.tag(tokenized_text)

print(classified_text)

Once we tokenize the sentence and run it through the classifier, the tagger produces a list of tuples like this:

[('While', 'O'), ('in', 'O'), ('France', 'LOCATION'), (',', 'O'), ('Christine', 'PERSON'), ('Lagarde', 'PERSON'), ('discussed', 'O'), ('short-term', 'O'), ('stimulus', 'O'), ('efforts', 'O'), ('in', 'O'), ('a', 'O'), ('recent', 'O'), ('interview', 'O'), ('with', 'O'), ('the', 'O'), ('Wall', 'ORGANIZATION'), ('Street', 'ORGANIZATION'), ('Journal', 'ORGANIZATION'), ('.', 'O')]

Great! Each token is tagged with PERSON, LOCATION, ORGANIZATION, or O (using our three-class model). O simply stands for other, i.e. non-named entities. This list can now be tested against annotated data, which we cover in the next tutorial.

XXIII. Testing the Accuracy of the NLTK and Stanford NER Taggers

We know how to use two different NER classifiers! But which one should we choose, NLTK's or Stanford's? Let's do some testing to find out. The first thing we need is some annotated reference data to test our NER classifiers on. One way to get this data would be to find a large body of articles and label every token ourselves as a named entity (for example, person, organization, location) or other, non-named entity. Then we could test our separate NER classifiers against the labels we know are correct. Unfortunately, that would be very time consuming! The good news is that there is a manually annotated data set, freely available, with over 16,000 English sentences. There are also data sets in German, Spanish, French, Italian, Dutch, Polish, Portuguese, and Russian! Here is an annotated sentence from the data set:

Founding O
member O
Kojima I-PER
Minoru I-PER
played O
guitar O
on O
Good I-MISC
Day I-MISC
, O
and O
Wardanceis I-MISC
Cover O
of O
a O
song O
by O
UK I-LOC
post O
punk O
industrial O
band O
Killing I-ORG
Joke I-ORG
. O

Let's read, split, and manipulate the data to get it into a better format for testing:

import nltk
from nltk.tag import StanfordNERTagger
from nltk.metrics.scores import accuracy

raw_annotations = open("/usr/share/wikigold.conll.txt").read()
split_annotations = raw_annotations.split()

# Amend class annotations to reflect Stanford's NERTagger
for n, i in enumerate(split_annotations):
    if i == "I-PER":
        split_annotations[n] = "PERSON"
    if i == "I-ORG":
        split_annotations[n] = "ORGANIZATION"
    if i == "I-LOC":
        split_annotations[n] = "LOCATION"

# Group NE data into tuples
def group(lst, n):
    for i in range(0, len(lst), n):
        val = lst[i:i+n]
        if len(val) == n:
            yield tuple(val)

reference_annotations = list(group(split_annotations, 2))

OK, that looks good! But we also need to feed the "clean" form of this data into our NER classifiers. Let's do that.
pure_tokens = split_annotations[::2]

This reads in the data, splits it on whitespace, and then takes everything in split_annotations in increments of two (starting from the zeroth element), i.e. the tokens alone. This produces a data set like the (smaller) example below:

['Founding', 'member', 'Kojima', 'Minoru', 'played', 'guitar', 'on', 'Good', 'Day', ',', 'and', 'Wardanceis', 'Cover', 'of', 'a', 'song', 'by', 'UK', 'post', 'punk', 'industrial', 'band', 'Killing', 'Joke', '.']

Let's go ahead and test the NLTK classifier:

tagged_words = nltk.pos_tag(pure_tokens)
nltk_unformatted_prediction = nltk.ne_chunk(tagged_words)

Since the NLTK NER classifier produces trees (including POS tags), we need some extra data manipulation to get it into a proper form for testing:

#Convert prediction to multiline string and then to list (includes pos tags)
multiline_string = nltk.chunk.tree2conllstr(nltk_unformatted_prediction)
listed_pos_and_ne = multiline_string.split()

# Delete pos tags and rename
del listed_pos_and_ne[1::3]
listed_ne = listed_pos_and_ne

# Amend class annotations for consistency with reference_annotations
for n, i in enumerate(listed_ne):
    if i == "B-PERSON":
        listed_ne[n] = "PERSON"
    if i == "I-PERSON":
        listed_ne[n] = "PERSON"
    if i == "B-ORGANIZATION":
        listed_ne[n] = "ORGANIZATION"
    if i == "I-ORGANIZATION":
        listed_ne[n] = "ORGANIZATION"
    if i == "B-LOCATION":
        listed_ne[n] = "LOCATION"
    if i == "I-LOCATION":
        listed_ne[n] = "LOCATION"
    if i == "B-GPE":
        listed_ne[n] = "LOCATION"
    if i == "I-GPE":
        listed_ne[n] = "LOCATION"

# Group prediction into tuples
nltk_formatted_prediction = list(group(listed_ne, 2))

Now we can test the accuracy of NLTK:

nltk_accuracy = accuracy(reference_annotations, nltk_formatted_prediction)
print(nltk_accuracy)

Wow, an accuracy of .8971! Now let's test the Stanford classifier. Since this classifier produces its output as tuples, testing requires no further data manipulation.

st = StanfordNERTagger('/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
                       '/usr/share/stanford-ner/stanford-ner.jar',
                       encoding='utf-8')
stanford_prediction = st.tag(pure_tokens)
stanford_accuracy = accuracy(reference_annotations, stanford_prediction)
print(stanford_accuracy)

An accuracy of .9223! Even better! If you want to graph this, here is some extra code. If you want to learn more about how it works, check out the matplotlib series:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style

style.use('fivethirtyeight')

N = 1
ind = np.arange(N)  # the x locations for the groups
width = 0.35        # the width of the bars

fig, ax = plt.subplots()

stanford_percentage = stanford_accuracy * 100
rects1 = ax.bar(ind, stanford_percentage, width, color='r')

nltk_percentage = nltk_accuracy * 100
rects2 = ax.bar(ind+width, nltk_percentage, width, color='y')

# add some text for labels, title and axes ticks
ax.set_xlabel('Classifier')
ax.set_ylabel('Accuracy (by percentage)')
ax.set_title('Accuracy by NER Classifier')
ax.set_xticks(ind+width)
ax.set_xticklabels((''))

ax.legend((rects1[0], rects2[0]), ('Stanford', 'NLTK'), bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)

def autolabel(rects):
    # attach some text labels
    for rect in rects:
        height = rect.get_height()
        ax.text(rect.get_x()+rect.get_width()/2., 1.02*height, '%10.2f' % float(height),
                ha='center', va='bottom')

autolabel(rects1)
autolabel(rects2)

plt.show()

XXIV. Testing the Speed of the NLTK and Stanford NER Taggers
We have tested the accuracy of our NER classifiers, but there is more to consider when deciding which classifier to use. Let's test the speed! To make sure we compare like with like, we test both on the same article. Take this piece from NBC News:

House Speaker John Boehner became animated Tuesday over the proposed Keystone Pipeline, castigating the Obama administration for not having approved the project yet.

Republican House Speaker John Boehner says there's "nothing complex about the Keystone Pipeline," and that it's time to build it.

"Complex? You think the Keystone Pipeline is complex?!" Boehner responded to a questioner. "It's been under study for five years! We build pipelines in America every day. Do you realize there are 200,000 miles of pipelines in the United States?"

The Speaker went on: "And the only reason the President is involved in the Keystone Pipeline is because it crosses an international boundary. Listen, we can build it. There's nothing complex about the Keystone Pipeline - it's time to build it."

Boehner said the President had no excuse at this point to not give the pipeline the go-ahead, after the State Department released a report on Friday indicating the project would have a minimal impact on the environment.

Republicans have long pushed for construction of the project, which enjoys some measure of Democratic support as well. The GOP is considering conditioning an extension of the debt limit on approval of the project by Obama.

The White House, though, has said that it has no timetable for a final decision on the project.

First, we do our imports and process the article by reading and tokenizing it:

# -*- coding: utf-8 -*-

import nltk
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
from nltk import pos_tag
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

style.use('fivethirtyeight')

# Process text
def process_text(txt_file):
    raw_text = open("/usr/share/news_article.txt").read()
    token_text = word_tokenize(raw_text)
    return token_text

Great! Now let's write some functions to split up our classification tasks. Because the NLTK NE chunker requires POS tags, we add POS tagging inside our NLTK function:

# Stanford NER tagger
def stanford_tagger(token_text):
    st = StanfordNERTagger('/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
                           '/usr/share/stanford-ner/stanford-ner.jar',
                           encoding='utf-8')
    ne_tagged = st.tag(token_text)
    return(ne_tagged)

# NLTK POS and NER taggers
def nltk_tagger(token_text):
    tagged_words = nltk.pos_tag(token_text)
    ne_tagged = nltk.ne_chunk(tagged_words)
    return(ne_tagged)

Each classifier needs to read the article and classify the named entities, so we wrap these functions in a larger function to make timing easy:

def stanford_main():
    print(stanford_tagger(process_text(txt_file)))

def nltk_main():
    print(nltk_tagger(process_text(txt_file)))

When we run our program, we call these functions. We wrap our stanford_main() and nltk_main() calls in os.times() calls, taking the fourth index, which is elapsed time. Then we plot our results.
if __name__ == '__main__':
    stanford_t0 = os.times()[4]
    stanford_main()
    stanford_t1 = os.times()[4]
    stanford_total_time = stanford_t1 - stanford_t0

    nltk_t0 = os.times()[4]
    nltk_main()
    nltk_t1 = os.times()[4]
    nltk_total_time = nltk_t1 - nltk_t0

    time_plot(stanford_total_time, nltk_total_time)

For our drawing, we use the time_plot() function (in the actual script, this definition goes above the __main__ block):

def time_plot(stanford_total_time, nltk_total_time):
    N = 1
    ind = np.arange(N)  # the x locations for the groups
    width = 0.35        # the width of the bars
    fig, ax = plt.subplots()
    rects1 = ax.bar(ind, stanford_total_time, width, color='r')
    rects2 = ax.bar(ind + width, nltk_total_time, width, color='y')

    # Add text for labels, title and axes ticks
    ax.set_xlabel('Classifier')
    ax.set_ylabel('Time (in seconds)')
    ax.set_title('Speed by NER Classifier')
    ax.set_xticks(ind + width)
    ax.set_xticklabels(('',))
    ax.legend((rects1[0], rects2[0]), ('Stanford', 'NLTK'),
              bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)

    def autolabel(rects):
        # attach some text labels
        for rect in rects:
            height = rect.get_height()
            ax.text(rect.get_x() + rect.get_width() / 2., 1.02 * height,
                    '%10.2f' % float(height),
                    ha='center', va='bottom')

    autolabel(rects1)
    autolabel(rects2)
    plt.show()

Wow, NLTK is as fast as lightning! It seems that Stanford is more accurate, but NLTK is faster. This is important information to know when balancing our preference for precision against the computing resources required. But wait, there is still a problem. Our output is ugly! Here is a small sample of the Stanford output:

[('House', 'ORGANIZATION'), ('Speaker', 'O'), ('John', 'PERSON'), ('Boehner', 'PERSON'), ('became', 'O'), ('animated', 'O'), ('Tuesday', 'O'), ('over', 'O'), ('the', 'O'), ('proposed', 'O'), ('Keystone', 'ORGANIZATION'), ('Pipeline', 'ORGANIZATION'), (',', 'O'), ('castigating', 'O'), ('the', 'O'), ('Obama', 'PERSON'), ('administration', 'O'), ('for', 'O'), ('not', 'O'), ('having', 'O'), ('approved', 'O'), ('the', 'O'), ('project', 'O'), ('yet', 'O'), ('.', 'O')]

And NLTK:

(S (ORGANIZATION House/NNP) Speaker/NNP (PERSON John/NNP Boehner/NNP) became/VBD animated/VBN Tuesday/NNP over/IN the/DT proposed/VBN (PERSON Keystone/NNP Pipeline/NNP) ,/, castigating/VBG the/DT (ORGANIZATION Obama/NNP) administration/NN for/IN not/RB having/VBG approved/VBN the/DT project/NN yet/RB ./.

Let's turn them into a readable form in the next tutorial.

XXV. Create a readable list of named entities using BIO tags

Now that we have completed the testing, let's turn our named entities into a nice readable format. Again, we will work with the same NBC News article quoted in the previous section.
" At The Speaker Wentworth ON : "And at The only reason at The President is's Involved in at The Keystone Pipeline IS Because IT Crosses AN International's boundary the Listen, WE CAN Build IT There's Nothing Complex the About at The Keystone Pipeline - IT's Time to Build IT..." Of Said Boehner at The President is NO excuse HAD AT the this Point to not give at The Pipeline at The Go-Ahead the After at The State Department Report Released A ON catalog on Friday Indicating, at The Project Impact Would have have A minimal ON at The Environment. Republicans have long pushed for construction of the project, which enjoys some measure of Democratic support as well. The GOP is considering conditioning an extension of the debt limit on approval of the project by Obama. The White House, though, has said that it has no timetable for a final decision on the project. Our NTLK output is already a tree (just the last step), so let’s take a look at our Stanford output. We will mark the tag with BIO, B for the beginning of the named entity, I for the internal, and O for the other. For example, if our sentence is yes Barack Obama went to Greece today , we should mark it as Barack-B Obama-I went-O to-O Greece-B today-O . To do this, we will write a series of conditions to check the labels of the current and previous O tags. # -*- coding: utf-8 -*- Import nltk import os Import numpy as np Import matplotlib.pyplot as plt from matplotlib import style from nltk import pos_tag from nltk.tag import StanfordNERTagger from nltk.tokenize import word_tokenize from nltk.chunk import conlltags2tree from nltk.tree import Tree Style.use( 'fivethirtyeight' ) # Process text def process_text (txt_file) : raw_text = open( "/usr/share/news_article.txt" ).read() Token_text = word_tokenize(raw_text) Return token_text # Stanford NER tagger def stanford_tagger (token_text) : st = StanfordNERTagger( '/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz' , '/usr/share/stanford-ner /stanford-ner.jar' , Encoding= 'utf-8' ) Ne_tagged = st.tag(token_text) Return (ne_tagged) # NLTK POS and NER taggers def nltk_tagger (token_text) : Tagged_words = nltk.pos_tag(token_text) Ne_tagged = nltk.ne_chunk(tagged_words) Return (ne_tagged) # Tag tokens with standard NLP BIO tags def bio_tagger (ne_tagged) : Bio_tagged = [] Prev_tag = "O" for token, tag in ne_tagged: if tag == "O" : #O Bio_tagged.append((token, tag)) Prev_tag = tag Continue if tag != "O" and prev_tag == "O" : # Begin NE bio_tagged.append((token, "B-" +tag)) Prev_tag = tag Elif prev_tag != "O" and prev_tag == tag: # Inside NE bio_tagged.append((token, "I-" +tag)) Prev_tag = tag Elif prev_tag != "O" and prev_tag != tag: # Adjacent NE bio_tagged.append((token, "B-" +tag)) Prev_tag = tag Return bio_tagged Now we write the BIO-tagged tags to the tree, so they are in the same format as the NLTK output. # Create tree def stanford_tree (bio_tagged) : Tokens, ne_tags = zip(*bio_tagged) Pos_tags = [pos for token, pos in pos_tag(tokens)] Conlltags = [(token, pos, ne) for token, pos, ne in zip(tokens, pos_tags, ne_tags)] Ne_tree = conlltags2tree(conlltags) Return ne_tree Traverse and parse out all named entities: # Parse named entities from tree def structure_ne (ne_tree) : Ne = [] For subtree in ne_tree: if type(subtree) == Tree: # If subtree is a noun chunk, ie NE != "O" Ne_label = subtree.label() Ne_string = " " .join([token for token, pos in subtree.leaves()]) Ne.append((ne_string, ne_label)) Return ne In our call, we put all the additional functions together. 
In our main calls, we chain all these functions together.

def stanford_main():
    print(structure_ne(stanford_tree(bio_tagger(stanford_tagger(process_text(txt_file))))))

def nltk_main():
    print(structure_ne(nltk_tagger(process_text(txt_file))))

Then we call those functions:

if __name__ == '__main__':
    stanford_main()
    nltk_main()

Here is the nice-looking output from Stanford:

[('House', 'ORGANIZATION'), ('John Boehner', 'PERSON'), ('Keystone Pipeline', 'ORGANIZATION'), ('Obama', 'PERSON'), ('Republican House', 'ORGANIZATION'), ('John Boehner', 'PERSON'), ('Keystone Pipeline', 'ORGANIZATION'), ('Keystone Pipeline', 'ORGANIZATION'), ('Boehner', 'PERSON'), ('America', 'LOCATION'), ('United States', 'LOCATION'), ('Keystone Pipeline', 'ORGANIZATION'), ('Keystone Pipeline', 'ORGANIZATION'), ('Boehner', 'PERSON'), ('State Department', 'ORGANIZATION'), ('Republicans', 'MISC'), ('Democratic', 'MISC'), ('GOP', 'MISC'), ('Obama', 'PERSON'), ('White House', 'LOCATION')]

And from NLTK:

[('House', 'ORGANIZATION'), ('John Boehner', 'PERSON'), ('Keystone Pipeline', 'PERSON'), ('Obama', 'ORGANIZATION'), ('Republican', 'ORGANIZATION'), ('House', 'ORGANIZATION'), ('John Boehner', 'PERSON'), ('Keystone Pipeline', 'ORGANIZATION'), ('Keystone Pipeline', 'ORGANIZATION'), ('Boehner', 'PERSON'), ('America', 'GPE'), ('United States', 'GPE'), ('Keystone Pipeline', 'ORGANIZATION'), ('Listen', 'PERSON'), ('Keystone', 'ORGANIZATION'), ('Boehner', 'PERSON'), ('State Department', 'ORGANIZATION'), ('Democratic', 'ORGANIZATION'), ('GOP', 'ORGANIZATION'), ('Obama', 'PERSON'), ('White House', 'FACILITY')]

Work Hard. God Bless.
https://medium.com/datadriveninvestor/python-data-science-getting-started-tutorial-nltk-2d8842fedfdd
['Swayam Mittal']
2019-03-07 10:33:31.755000+00:00
['Nltk', 'Sklearn', 'Machine Learning', 'Python', 'Data Science']
DECOLONIZING THE CHURCH
DECOLONIZING THE CHURCH Reflections on a month of events celebrating the 30th anniversary of Black Catholic History Month as part of our six-part series on “Decolonizing History” Lori Wysong Dec 6, 2020 · 9 min read As part of the Lepage Center’s six-part event series on “Decolonizing History,” in November we explored the theme of “Decolonizing the Church.” We began with a talk by Dr. Tia Noelle Pratt, sociologist and curator of the #BlackCatholicSyllabus. We continued with a lecture by Dr. Shannen Dee Williams, Assistant Professor of History at Villanova University, entitled “The Real Sister Act.” She discussed her research on segregation in religious orders and the resilience of Black sisters who found various means of trying to serve God, including forming their own orders and eventually desegregating white orders. We concluded with a roundtable on “Preserving Philadelphia’s Black Catholic History and Heritage in the 21st Century,” featuring Adrienne Harris, a third-generation parishioner of the church and executive director of the Advocates and Descendants of St. Peter Claver; Carolyn Jenkins, a religious educator, parishioner at St. Charles Borromeo Catholic Church and founding director of the Claver Center for Evangelization; and Jacqueline Wiggins, a longtime educator, founding member of Philadelphia’s Black Catholic Lay Caucus, and parishioner at St. Martin de Porres. What is Black Catholic History Month? November marks the 30th anniversary of Black Catholic History Month, established to encourage the study and amplify the histories of Black Catholics. Founded in 1990 at the meeting of the National Black Catholic Clergy Caucus, it came in response to Father Cyprian Davis’s book The History of Black Catholics in the United States and used November, the birth month of St. Augustine and the month of St. Martin de Porres’s feast day, to promote Black Catholic history. With Pope Francis recently appointing Wilton Gregory as Cardinal of the Archdiocese of Washington, this year is also a landmark due to his status as the first Black US conclave-voting member of the College of Cardinals, meaning Gregory will likely have a say in choosing the next pope and has the potential to become pontiff himself. In honor of Black Catholic History Month, the Lepage Center’s November events questioned what it would mean to decolonize the Church. Around ¼ of the global Church is of African descent, as, increasingly, are its clergy members and sisters. Yet Dr. Tia Noelle Pratt introduced the idea of what she sees as a pervasive myth: that being Black and being Catholic don’t go together; that most Black Catholics are thought of as missionary converts rather than coming from a religious tradition several hundred years old. She works to fight this myth through her scholarship in identity studies and the #BlackCatholicSyllabus. What is the purpose of this syllabus? “To combat erasure,” Pratt explained. “If we don’t tell our own stories… others will tell our stories for us, and we’ll get left out.” The United States Conference of Catholic Bishops (USCCB) creates curriculum at a K-12 level for Catholic schools, and Pratt believes that currently, “There is minimal exposure of the Black Catholic experience.” Though her own syllabus is geared toward scholars, she sees no reason why children’s learning can’t be decolonized as well. “We have to stop with this notion that everything has to be about white folks,” she said. “Let’s embrace the fact that St.
Augustine was an African man.” “The African roots of the U.S. Catholic Church are just as old as the European roots,” Williams emphasized, adding that slavery in the U.S. begins “not in 1619 but in 1565.” Her own research highlights the presence of Black Catholics in the Americas long before the formation of the United States, including enslaved people brought by the Spaniards (some by members of the Church, which was itself a corporate slaveholder) to the southern part of what is today the U.S. How have Black Catholics been erased from history? Despite official teachings against discrimination, in practice the Church has not consistently prohibited it in spaces of learning and worship. For example, many African Americans who felt a call to religious life faced challenges for centuries because of their skin color. Women were consistently denied admission by white sisterhoods, even by the very nuns who educated them in desegregated schools. The struggle to desegregate began in the nineteenth century, but Williams argues that most white communities “steadfastly” resisted this until the Civil Rights and Black Power era. Prior to that point, many women attempted to “pass” for white and relinquish their personal identity, or, if they could not, endured verbal and sometimes physical harassment because of their skin color, sometimes being relegated to separate living quarters. “Many of these women do die young due to stress-related diseases,” Williams added. Mother Theresa Maxis Duchemin, who could pass as white, founded the historically white Sister Servants of the Immaculate Heart of Mary (IHM) in Michigan (originally the Sisters of Providence), and when her identity was discovered, she was exiled to another religious community in Canada and erased from the records of the community she’d worked to start. Though the Sisters apologized and opened their archives in the 1990s, there are many such stories yet to be told due to instances like these. The solution to this issue is multi-pronged. In the first place, oral histories and historical scholarship are needed. Both Williams and Pratt described Black Catholic history as a heavily under-researched field, and one with frequently inaccessible archival materials. And because the context of Black parishes being established to avoid humiliation or segregation in other religious spaces is being lost as more and more Black churches close, many parishioners are the last remaining repositories of these histories. “The people who lived so much of this, you know, are now our elders, and if we don’t do this work soon, we won’t be able to hear directly from them,” Pratt warned. How does erasure continue today? Erasure from the historical record is not the only challenge Black Catholics face. Take St. Peter Claver church in Philadelphia, whose parish records for a long time obscured the contributions of prominent Black donors who helped found the parish, and which also faced physical demolition when the Archdiocese of Philadelphia shuttered the building in 2014. In this case, along with the historical erasure from the public record, there is the threat of erasure of a physical space where important life events and sacraments of parishioners took place. This struggle began in 1985, when the Archdiocese began slowly withdrawing support from the parish with intent to eventually close it. Harris detailed her struggle with the Archdiocese of Philadelphia to keep the church open as a sacred space or shrine.
In the late 19th century, Black Catholics paid for the church out of their own donations. Parishioner Adrienne Harris noted, “The Archdiocese didn’t pay one dime for this Church,” yet it intends to sell it. St. Peter Claver is considered the “Mother Church” as the first historically Black parish in Philadelphia and the root of Black Catholicism in that city. Harris observed that “if you kill the roots, nothing will grow.” While Carolyn Jenkins’s parish, St. Charles Borromeo, is not closing, its history and culture are being visibly erased. This erasure traces to the Archdiocese’s appointment of Father Esteban Granyak six years ago, who has instituted controversial policies like holding separate, secret masses for Neocatechumenal parishioners, selling off the parish buildings dedicated to social events, discontinuing the Parish Council, firing Black personnel, locking the Church sanctuary, and destroying the interior historic fabric of the Church when it was closed early in the pandemic. “It’s almost unbelievable to believe that the things that were happening then are happening now,” Jenkins said. “The Archdiocese which claimed to be antiracist is still allowing this to go on.” Though St. Charles Borromeo parishioners have complained through proper channels to no avail, Jenkins’s hope is that the recent USCCB meeting in DC will produce results. “That is our church. We have been loyal to the Archdiocese of Philadelphia for all these years…we feel as though that should be recognized.” Erasing a physical space can lead directly to the erasure and underrepresentation of Black Catholic identity. Pratt argues that as many as 30% of parishioners at a historically Black parish will not seek out a new Catholic church once theirs closes. Though the USCCB currently numbers around 3 million Black Catholics in the US, she believes this number would be much higher if fallen-away Catholics were accounted for, as well as other under-represented groups including Black Latinos. Jacqueline Wiggins shared how her own church attendance faltered once her home parish in Philadelphia, St. Elizabeth’s, closed in 1993. She cited a lack of transparency between the Archdiocese and the members of closing urban churches, saying “The planning process for these parishes was a sham,” and that the Diocese had already decided the end result without trying to support or save historically Black parishes. Wiggins eventually returned to church after learning that an African American priest was assigned to St. Martin de Porres. She claims the experience heightened her “level of awareness about racism in the Church.” What can we do to fight it? In order to understand the experiences and stories of Black Catholics (particularly of those who left the Church), more personal communication as well as statistical research is necessary. Pratt believes “the onus is on us as scholars.” Her own success in interviews comes from building trust. “I’m coming at this from a very specific set of experiences and not just a scholarly perspective.” Williams, similarly, conducted much of her research by personally interviewing Black nuns. She happened upon the topic by chance after reading an article about the first meeting of the National Black Sisters’ Conference (NBSC) in 1968. “They begin to counter the narrative of the Church in its ambivalence toward the Black freedom struggle,” she observed, acknowledging that Black sisterhoods declined in the late twentieth century.
Though this trend has since been countered with African-born nuns, Williams contends there is a lack of research into the stories of those who lived through this crucial time period. In order to “decolonize” the Church, our panelists believe acknowledgement of past wrongs is necessary. Williams argues that “It has to come from Rome,” but cites successful examples of making amends from dioceses within the United States. In Iowa, for example, credit for founding the diocese long went to a slaveholding Bishop, who actually paid for its establishment with wages earned by an enslaved woman named Mary Louise, whom he left behind in the South and rented out. Loras College in Iowa has since named a scholarship for Mary Louise and the college’s first Black students and removed a statue of the Bishop. Yet decolonization does not mean the rejection of flawed individuals in the Church’s history. Some Catholic saints at various points in their lives held slaves or were in favor of segregation. Pratt doesn’t believe these people should be de-canonized and sees it as okay to “lean in” to their imperfections, which makes their status feel less unattainable. “We have to get more comfortable with our saints being sinners.” To preserve the physical spaces of worship, Pratt called upon bishops to “Halt the hemorrhaging of parishes,” particularly traditionally Black parishes in cities. While she understands how changing demographics require the consolidation of churches, she urged dioceses to develop better, more transparent organizational processes, taking into account the fact that “the physical structure of a church building matters to people.” Several panel participants explicitly called upon preservationists to help with the present-day erasure of physical religious spaces, and even asked for assistance from Villanova University. Harris suggested that rather than continuing a fruitless dialogue with the Archdiocese of Philadelphia, Villanova take over St. Peter Claver to ensure its preservation as a sacred space. “I think Villanova University would be the perfect partner for Peter Claver,” she urged, noting that it would be a “win-win situation” that would provide a space for dialogue between students and Black members of the Catholic Church, in addition to a site for religious and social community. Regardless of whether such a partnership takes place, Villanova will certainly be focusing more on Black Catholic History in the future. In addition to holding Lepage Center events on decolonizing the Church this month, beginning next year the university will host the first annual Mother Mary Lange lecture in Black Catholic History, named for the founder of the Oblate Sisters of Providence, the first historically Black order of nuns in the US, and the first to teach in Philly and minister to the African American community. Resources: · #BlackCatholicSyllabus, Dr. Tia Noelle Pratt’s comprehensive list of websites, books and articles relating to Black Catholic History
https://medium.com/hindsights/decolonizing-the-church-on-the-30th-anniversary-of-b-lack-catholic-history-month-d552ca4fac29
['Lori Wysong']
2021-03-06 04:04:20.557000+00:00
['Decolonization', 'African American', 'History', 'Black Women', 'Catholic']
How Regulations Could Empower The Crypto Industry
Cryptocurrencies are renowned for their decentralized nature and the fact that there is no central authority in control. The financial freedom aspect is a huge draw card: knowing that no one can freeze your account or control your money. As cryptocurrencies gain popularity and increased adoption, many organizations have started working on regulations to gain some control over the system. Many view these regulations as a negative element, hindering the “freedom” aspect of our beloved crypto industry. But is this really the case? In this article we’re going to explore how regulations could empower the crypto industry. Who Is Behind Crypto Regulations? While no one entity is in control of cryptocurrencies in general, governments have taken things into their own hands and have begun creating and implementing crypto regulations. While many early adopters despise the regulations, there might be some long-term benefit to implementing them. For now, as governments implement new regulations, investors typically rush to buy or sell various digital assets to increase gains and reduce losses. Naturally, this takes its toll on the price of various assets, as a massive sell-off decreases an asset’s value while a large purchase pumps the price up. A classic example came in 2018, when news got out that the South Korean government was looking at implementing heavy controls, even bans, on the use of cryptocurrencies. Following this speculative news, the value of cryptocurrencies across the board dropped sharply in a matter of hours. Bitcoin dropped nearly 15%, while other cryptocurrencies like Ethereum, Ripple and Litecoin also saw losses of around that amount. The Upside To Crypto Regulation However, not all tales are negative. When South Korea eventually decided to take a more positive approach to the world of digital assets, the value of cryptocurrencies rose again. Japan’s heightened interest in and adoption of Bitcoin also contributed to the rise in value, as did India lifting its cryptocurrency ban as recently as March 2020. So, do regulations impact crypto prices? The answer is yes, and it’s not necessarily a bad thing. As the industry looks to increase adoption, regulations could, perhaps counterintuitively, be a positive step in the right direction. For a new user wanting to enter the market but uncertain over whether cryptocurrencies will be legal in years to come, or what the tax implications might be, regulations offer a safety net. Knowing that there are laws in place to structure and safeguard your investment is very reassuring to a first-time crypto trader. How Regulations Could Empower The Crypto Industry Often viewed as a hindrance to the market’s potential, regulations might actually empower the crypto industry. Here are a few ways regulations can benefit the market: Increased certainty for new investors, a draw card that eliminates doubts over the market’s long-term feasibility. Increased use cases, as with more regulation, more fiat-operating merchants can start accepting cryptocurrency as a payment option. The more governments are on board, the more the use cases grow. Accountable asset ownership, meaning that users employing cryptocurrencies for illicit activities can be better managed thanks to the rules and regulations in place to hinder them. Which leads to: A more positive perception of the market, owing to the way crypto is used — for accountable payments rather than dark web antics. Which, again, circles back to the first point.
Face Them As They Come While cryptocurrencies have been in existence for over 11 years, it is only in the past few that regulatory bodies have been thinking about and implementing basic levels of regulation. The industry has earned a name for itself as the “Wild West”; however, it also stood its ground through the pandemic. It’s anyone’s guess what might happen next; all we can do is remain vigilant to updates, steer clear of FUD-inducing media (Fear, Uncertainty, Doubt), and instead look at how regulations could empower the crypto industry.
https://medium.com/@oobit/how-regulations-could-empower-the-crypto-industry-858d53daee84
[]
2020-11-19 09:02:10.666000+00:00
['Oobit', 'Fintech', 'Regulation', 'Cryptocurrency', 'Finance']
Introduction to Affiliate Marketing: Everything You Need to Know
Every wise individual knows that the key to becoming richer and making more profit is having multiple incomes. This does not mean that you have to focus on more than one business to gain what you want: in the digital era, the possibilities for earning money are endless. When it comes to making money with minimal effort, affiliate marketing is one of the most popular methods. We must mention that affiliate marketing is not as easy as pie, but it is not rocket science either. And in the long run, it is more than satisfying. There are just so many things to say about affiliate marketing. Luckily, today we are here to share with you everything you need to know about it: what it is, why it is important, and the different methods of affiliate marketing. Without further ado, let’s get started. What is Affiliate Marketing? Affiliate marketing is the process of earning a commission as you promote a company’s brand, with the promotion resulting in a purchase. In other words, an affiliate earns money as someone purchases an item through the advertisement or the link they provide. The process typically involves an affiliate who has a proper platform to promote a product and promotes it by whichever means they want. For each person who purchases through the promotion done by the affiliate, the company gives a commission to the affiliate. You should take note that not all affiliate-marketing programs are alike. Some will pay you for each purchase, while some will pay for clicks or downloads. Commonly, affiliate programs are free to join, which makes them so popular, because there is nothing to lose. In the long run, affiliate marketing can mean one of two things: a decent side hustle or your primary income source. Obviously, both ends require time and effort; however, affiliate marketing will bring brilliant outcomes with a good strategy and knowledge. Why is Affiliate Marketing a Good Idea? We mentioned how most affiliate-marketing programs are free, and that it is riskless to join one. You may be wondering, is this all? Should you start an affiliate program just because there is nothing at stake and a chance of making money? Of course, “having everything to gain by simply trying” is not a sensible maxim on its own. As you may already know, there is no such thing as a free lunch; everything comes with a cost. What about the time and effort you will put into affiliate marketing? Is it worth it? To answer those questions and understand the idea of affiliate marketing better, let’s see the reasons why affiliate marketing is a smart idea. Flexibility The primary reason why affiliate marketing is a smart idea is its flexible nature. You don’t have to follow a relentless schedule, attend meetings, or have specialist knowledge to do affiliate marketing. Obviously, more effort means more profit, but you don’t have to do any of that. You can begin with what you want in regards to how long you want to work on affiliate marketing, what you want to learn, and so on, and go at your own pace. The flexible nature of affiliate marketing makes it possible to learn it as you do it. Of course, there are many sources on the web (like this one that you are reading right now) that can teach you numerous things.
Even so, there is a part of affiliate marketing that will be unique to your audience and partner companies, which you will need to learn while doing it. The flexibility of affiliate marketing creates an incredibly advantageous climate in which to learn and improve constantly. Would you like to play bigger? Certainly, why not? Are you worn out and need to dial back the pace a bit? Feel free to rest a little. You work for yourself in affiliate marketing. The flexibility goes beyond that. It doesn’t matter what time it is, it doesn’t matter where you are, and it doesn’t matter whether you are asleep or awake: you can always earn money with affiliate marketing. Evergreen Income Another reason why affiliate marketing is a smart idea is that it is evergreen. When you consider it, there is no reason why affiliate marketing should age badly. Companies make more sales, and in return, the affiliate earns an income. It is a mutually beneficial arrangement where no one gets hurt. Moreover, when you become excellent at affiliate marketing, there is no limit to the money you can make. You can begin with just 50 dollars per month, yet you can make many thousands of dollars as you improve. Given the flexible nature of the work, making thousands of dollars out of affiliate marketing isn’t bad at all, right? Thus, it is fair to say that affiliate marketing will keep its popularity as long as we have the web (which is potentially forever). Variety It can be challenging to find a job you are passionate about. With affiliate marketing, though, it isn’t like that. One thing is certain: you will promote a product. However, it is entirely up to you which product that will be. It can be anything from baby essentials to gardening products; you get to decide what it will be. Of course, when you are the decision-maker in this sense, you can choose something you are enthusiastic about. Everyone loves discussing the things they love, so why not promote them and earn money accordingly? The variety of affiliate marketing isn’t restricted to the products. There are lots of niches and techniques for doing affiliate marketing. You simply need to know them (which we will discuss in the following sections) and pick whichever fits you. And the flexible nature of affiliate marketing helps here as well: you can change your products or niche whenever you would like to. There are no binding contracts or anything of the sort in affiliate marketing. Switch whenever, to whatever! Stress-Free There are practically no income sources where you can say, “what I do is completely stress-free.” When you ponder the stress sources of most jobs, there are a few things we can list: other people (clients or colleagues), money issues, and time management. So, does affiliate marketing involve any of them? No. Let’s dig into that a little deeper. The customers you are promoting to are not yours. They belong to the companies with which you are an affiliate. No customers = no customer service required. You don’t need anyone else to do affiliate marketing, which means no colleagues can trouble you.
Affiliate marketing ordinarily begins as a side hustle, and if you are bringing in only a little money, that is fine, since you are not reliant upon it. Of course, if you begin to make thousands of dollars, you can quit your principal income source and be an affiliate-marketing queen (or king). What’s more, as we mentioned a few times before, everything in affiliate marketing is flexible as can be. Thus, no stress due to time-management issues. You don’t have any reports to deliver tomorrow! No Risks Maybe the best thing about affiliate marketing is that it doesn’t involve any risks. We mentioned that you don’t need to pay anything or sign anything in the majority of affiliate programs. What would happen if you were unable to make any sales? Literally nothing. The companies pay you for every sale (or click, or call) you make. They will do nothing if you don’t make any sales. All in all, what makes you scared of starting? Simply learn about the steps we will mention now and try them. You will see, there is nothing to be scared of. How to do Affiliate Marketing? The advantages of affiliate marketing, and the reasons it is a smart idea, generally sound so fulfilling and worth trying. But how do you start? Everything goes through the steps of how to begin affiliate marketing. Once you have grasped each of the steps, you can outline what your affiliate-marketing strategy will look like. As you get your framework and strategic means in place, you can begin your own as well. You should remember that not all affiliate-marketing programs are the same, and there can be differences. Prior to beginning, you will want to become familiar with everything about your affiliate-marketing program and set your steps accordingly. Nonetheless, there are some broad steps all affiliate-marketing programs share in common, and learning about those will give you incredible insight into your possible path in affiliate marketing. Let’s see those steps and how you can begin affiliate marketing! Pick Your Niche First of all: to whom are you promoting your products? The answer to this question will help you determine the answer to the subsequent step. In truth, the initial two steps are somewhat tangled, which means the two of them ought to be thought of together. In any case, it is best to begin with your niche. Picking your niche will narrow down all the unlimited options in your favor and help you during your course of picking a platform. Indeed, picking a niche is all about reducing your choices. While reducing your choices can seem like something terrible, it really isn’t. For instance, suppose you want to focus on products that are related to pets. The category of “pets” is enormous, and there are huge numbers of products related to pets. Are you going to promote clinical equipment to veterinarians? Are you going to promote cat products or parrot products? As you may already surmise, the target audiences of cat owners and parrot owners are totally different. Those two audiences have different interests and lifestyles. With a smaller niche, you can more readily understand what to focus on and what to leave out, and have a superior affiliate-marketing strategy.
Pick Your Platform The second thing you need to begin an affiliate-marketing program is to pick a platform. We described affiliate marketing as a process in which you promote the products of a particular company. But where? The flexible nature of affiliate marketing permits you to choose. So on which platform would you like to promote the products? There can be many determining factors when it comes to a good platform for promoting products. For instance, you may already have an audience on a social media channel like Instagram. If your niche is mainly on Instagram, it would be a good idea to begin there. If you don’t have a ready audience, that is fine. You simply need to pick a platform where the main target audience of the products is. The distribution of demographics across different platforms isn’t something you can neglect. Would anyone claim that people on TikTok and Facebook have the same demographics? In no way, shape, or form. Of course, there can be segments where demographics and areas of interest overlap among people on social media channels. However, you want to generate the maximum gain out of it, don’t you? So why not focus on the channels where you can make the most? With a little research, you can get a general idea of where your niche is. If you are feeling lost and have no idea where to begin, there are two safe choices: blogs and YouTube. For a long time, blogs and YouTube have been famous places for affiliate marketing. Since you can reach basically anybody with web access, your chances increase. With the optimization techniques available for YouTube and blogs, you can reach the large audience you desire for affiliate marketing. Of course, nobody says that it is all sparkle and rainbows until you reach an enormous audience. However, from that point onward, nothing is preventing it from being all sparkles and rainbows! Pick an Affiliate Program Since you know what to promote and where to promote it, it’s an ideal opportunity to pick an affiliate program. The objective here is straightforward: to track down the best option. But what does “the best” mean in this specific circumstance? All things considered, it means the highest price at the lowest possible volume, or the lowest price at the highest volume possible. A few programs offer a specific amount for a defined number of sales, and there are programs that offer a particular commission for every sale you make. Think about your choices, crunch the numbers, and pick the one that fits best for you. It would be best not to rush things here and to make the right decisions. Begin Creating Content After you settle on an affiliate program, the time has come to work on your content. If you don’t already have an existing audience ready to pay attention to you, you need to build one. While this journey may appear to be a difficult step, with the right moves, it truly isn’t. For instance, let’s consider blogs. We mentioned how blogs are a safe place to begin affiliate marketing. When your products and niche are ready, your only job is to create the content your audience will appreciate and find valuable.
For instance, assuming you chose to advance feline items, for example, feline litter, litter boxes, wet food, and so forth, you should make content that spotlights on feline wellbeing; what sort of a litter box should you use for your feline, etc. Keep in mind, your point is to observe individuals who may be keen on what you advance. An individual who is keen on observing a decent litter box is doubtlessly attempting to buy one. In this way, by offering them great substance which at last prompts the member interface, you can make your subsidiary showcasing system work. Making extraordinary, top caliber, and fascinating substance is the key here. Individuals who click on your substance ought to be persuaded that your connection will give them precisely what they need. You should zero in on SEO to both increments and deal with great substance. Keep in mind, the circumstance will change contingent upon which channel you are utilizing to advance your items. In case you are utilizing YouTube or Instagram, your way will be unique. Another thing to keep in mind, regardless of the channel you use, is to try the products before promoting them. If you have the chance, purchase the products yourself first, and then give. A genuine content regarding your experience, but of course in a way that will encourage others to shop. Market your Affiliate Marketing Strategy After you made your substance, there is only one thing left: contacting a crowd of people able to purchase the stuff you advance. This may be the most difficult aspect of partner advertising, however here is the most critical part. Assuming you don’t invest energy into individuals seeing and cherishing your substance, why and how might anybody purchase from it? As we referenced before, in case you are chipping away at a blog, you would need to zero in on SEO. Search engine optimization is an extraordinary apparatus to build your blog’s permeability, get more snaps, and thus, get more clients. Obviously, in some other channel, you would need to deal with those angles, as well. You should find out with regards to the online media stages’ calculations to see what to do and keep away from. In any case, to take your advertising technique somewhat further, there are a few things you should attempt. First and foremost, pay-per-click promotions can be your dearest companion in the associate advertising system. Considering the sums you can get from offshoot showcasing; it would be a little cost to pay for clicks. Also, assuming you are devoted to not paying anything for the member promoting, email postings can merit a shot. It is an extraordinary method of speaking with your current and future customers, and ideally, let’s stay in contact with them constantly. Make Sales! In the wake of getting individuals to tap on your substance, there is only another progression: make individuals buy it! Obviously, you can’t drive anybody into purchasing something. In any case, there is nothing off about convincing them. With the right words, web composition, and connection arrangements, you can establish an incredible climate that will drive individuals to buy. Keep in mind, it will take some time before you ace partner advertising. The cycle includes a stage where you attempt to comprehend your crowd’s conduct and contemplations. As you get them, all that will be a lot more straightforward since you will see what is working and what isn’t. Maybe the most significant part is to be available to learn and not expect supernatural outcomes short-term. 
Good things take time. Be patient, and try to improve! Affiliate marketing is one of the most famous side hustles and income sources. Its straightforwardness and flexibility make it evergreen, and more people are beginning to do it with every passing day. What follows are some of the most regularly posed questions about affiliate marketing, which may assist you with understanding it better.
https://medium.com/@g-c-srinisalem2021/introduction-to-affiliate-marketing-everything-you-need-to-know-3ebbada7ba2c
['G C Srinisalem']
2021-12-13 07:34:56.333000+00:00
['Affiliate Marketing Tips', 'Internet Marketing', 'Affiliate Marketing']
What is the Best Font for Subtitles?
Photo by Andrea Piacquadio from Pexels The joys of watching videos on the Web are greatly enhanced by the fact that the videos you watch often contain subtitles. In these cases, the captions will be in the same font as the video you’re watching, which is great for simplifying things but not for designing your own custom title sequences to your liking. Are subtitles important for movies as well? Almost every movie (except some blockbusters that probably wouldn’t include subtitles) does offer subtitled scenes, but many viewers make the mistake of keeping their eyes on the subtitles rather than on the characters. Below are some suggestions for choosing the best fonts for subtitles. Overall, I’d say the best font for video titles and captions is Roboto. I’ve done a lot of work with Roboto, and I think it looks nice on just about any background; too bad some designers haven’t been able to get this simple font to look great on video as well. Roboto is the default font on Android and most Google services, such as Google+ (RIP), Google Play, YouTube, Google Maps, and Google Images. The Arial font is also a contender, but again, unfortunately, it doesn’t look great on every video. If you have used a PC, then you are familiar with this font, since before 2007 Arial was the default font for the Windows OS. I didn’t know that before I wrote this post, either. In the end, there’s no right or wrong font for subtitles. You just need to consider how recognizable the font will be to all of your viewers. That means subtitle text may be viewed from far enough away that it becomes difficult to read, so it needs to be large enough and clear enough to read easily.
https://medium.com/@technology-ai-trends/what-is-the-best-font-for-subtitles-c10559c5342e
['Dominik Sievert']
2020-12-14 23:29:48.586000+00:00
['Design', 'Tech', 'Subtitles', 'Videos', 'Fonts']
Crypto Asset Management and Trading Strategies
The awareness around cryptocurrencies and blockchain technology has increased in the past few years, especially after a growing number of early investors made a killing from their Bitcoin investments. Take the Winklevoss twins, for example: the twin investors became the first Bitcoin billionaires after the 120,000 Bitcoins that they bought in 2012 at a value of $120 per coin became valued at $18,500 apiece in 2017. Although Bitcoin’s value has fallen since then, the two investors remain a perfect example of how cryptocurrency can make investors extremely wealthy. (Source: https://www.cbsnews.com/news/winklevoss-twins-bitcoin-billionaires-investment-price-surges/) However, today’s investors aren’t as lucky as the early adopters. In part, this is because the value of cryptocurrencies has become unpredictable in recent times. Bitcoin, the world’s most popular cryptocurrency, is one of the most volatile coins. Its price has fallen as low as $6,127.21 since the currency crossed the $20,000 mark. (Source: http://markets.businessinsider.com/currencies/btc-usd) This means that investors must adopt smart strategies if they want to reap the benefits of crypto investment. Investors can decide to trade or to invest in cryptocurrency. For the purposes of this article, investing means buying and holding cryptocurrency over a longer period of time with the aim of building wealth. Crypto trading, on the other hand, is the buying and selling of cryptocurrency in order to generate returns. Here are some of the smartest investment and trading strategies that can help investors gain some profit from cryptocurrency. Investment Diversification For those who want to invest in cryptocurrency, the best advice is to diversify. Putting all your eggs in one basket is risky. This is especially true in the cryptocurrency markets, which are characterized by volatility and a lack of regulation. Diversification helps to reduce risk by preventing the total loss that could occur if a single coin’s price falls below one’s initial investment. For example, Bitcoin’s value went as high as $19,500 in 2017, yet since the beginning of 2018, Bitcoin’s price has at times fallen below $7,000. This means that some people have experienced severe losses, especially if they bought the coin when it was valued at its highest. At the same time, there are those who have earned from this fluctuation: those who bought it at its lowest and sold it after its recovery. Source: (https://www.statista.com/statistics/326707/bitcoin-price-index/) Therefore, choosing another altcoin to invest in is a good idea, since other currencies may not be as volatile as Bitcoin. One of the best altcoins to invest in at this moment is Ethereum. Although the currency has also experienced some fluctuations in the past few months, it is not as volatile as Bitcoin. There is also a good chance that the price of Ethereum will rise in the long run and generate some healthy profits. Be a Wise Trader or Investor Apart from diversification, any crypto trader or investor needs good information and education about Bitcoin and the entire crypto market before investing or trading. In fact, this information is necessary before making the decision to be a trader or a long-term investor in the first place. An investor should be more interested in long-term market performance, while a trader should be interested in short-term market performance.
This is because an investor believes that the investment will grow over time, while a trader is interested in maximizing their gains from short-term market performance. Whether their gains rest on long- or short-term performance, both investors and traders should be keen observers of the overall market. However, the investor shouldn’t care much about short-term price fluctuations, and should instead look at other market characteristics such as a growing user base. In contrast, a trader must be vigilant in her observation of short-term trends. “Buying the Dips” Another strategy that is working for many crypto investors is the “buying the dip” strategy. This involves buying a currency when the price goes below the average, and later selling it at a profit when the price rises again. When taking such steps, it is good to be aware of short-term as well as long-term market trends so as to avoid losses. Source: (https://cryptocurrencyfacts.com/crypto-investing-strategy-buying-dips/) Invest in an Initial Coin Offering (ICO) Instead of buying and selling Bitcoin or already established coins such as Ethereum and Ripple, one can decide to invest in an Initial Coin Offering (ICO). This involves contributing a certain amount of cryptocurrency in exchange for an amount of a new token, and it has often proven to be a good investment strategy. For example, Ethereum started below $10 and has since gone above $600. This means that those who invested in the Ethereum ICO have gained much more than they gave. However, even though this has worked with some currencies, it is always good to take care, since not all ICOs deliver returns like Ethereum’s. There are also many examples of fraudulent ICOs that steal their participants’ funds. Reallocate Your Investment to Match the Crypto Market The crypto market has been changing, and will continue to be subject to change in the near future. Any currency can therefore take a negative or positive turn depending on a number of different factors, including buyer and seller behavior. An investor should thus be ready to re-allocate his or her investments based on new developments in the crypto market. Just like the other strategies, this one needs some serious study of the market before making any decisions. Other Strategies Apart from these main strategies, there are others that can help in earning profits and avoiding losses in this new market. One of them is coming up with clear and solid investment goals before investing or trading. Another piece of advice that a trader or investor should take home is that it is good to put some of your investment in the most stable coins in the market, such as Bitcoin and Ethereum. Lastly, it is always good to start small and learn while growing. All of these decisions, however, depend on the risk that one is ready to take when investing. About CINDX CINDX is an investment platform that allows individuals to combine several crypto exchange accounts into one trading terminal, and gives them the option to connect to the best managers without having to transfer their funds. Moreover, the implementation of blockchain-based transactions will allow the trading history to be saved, and a rating system will be used to differentiate the successful managers from the less successful ones. Join our Telegram group or other social media to stay updated. Website • Telegram • Facebook • Twitter • LinkedIn • BitcoinTalk • Reddit • Instagram • YouTube
https://medium.com/cindx/crypto-asset-management-and-trading-strategies-e67ffc4e060a
[]
2018-08-03 18:54:27.764000+00:00
['Cryptocurrency', 'Crypto', 'Bitcoin', 'Cindx']
The Best 124 Moments In Taskmaster History (As A Way Of Escaping The Despair of 2020 [And By Extension 2021]) [60–41]
The Best 124 Moments In Taskmaster History (As A Way Of Escaping The Despair of 2020 [And By Extension 2021]) [60–41] Jack Bernhardt Jan 7 · 35 min read [Part 1] [Part 2] [Part 3] It’s 2021! Hooray! I was able to distract myself so much writing these that I barely registered that it’s not 2020 any more! Take that, 2020, you terrible year — hello 2021, a beautiful, pure and brilliant year where absolutely nothing terrible happens! I haven’t checked the news yet this year but I guess I don’t need to do the rest of this countdown because everything’s fixed now, right? Let me just quickly turn on CNN to check and- OK NOPE BACK TO TASKMASTER LOVELY TASKMASTER WHERE THERE’S NO VIOLENT COUPS OR ANGRY BEAN DADS Last time out we had Jon Richardson failing Spanish 101, Roisin Conaty trying to take a boulder out for a daytrip and Phil Wang’s hypnotic penis. But we’re into the top 60 now, so get ready for some real heavyweight pairings. I’m talking Rhod Gilbert and rope. Sally Phillips and water. And first up, Kerry Godliman and DRAMA. The incestuous season finale of Cul de Sac. 60: Jessica Knappett and Kerry Godliman Star In “Cul De Sac” (Series 7, Ep 6) Task: Write and perform the most suspenseful soap opera cliffhanger, lasting at most one minute. What happened: While the team of Wang, Acaster and Gilbert plumped for a soap opera called “Feelings” (including the hit theme song Everybody’s Got Feelings/Feelings Hurt and Feel Good), Team Knappett and Godliman came up with the EastEnders-inspired Cul de Sac, where every character is called Donna and every conversation ends in a punch-up. In this week’s episode, Donna (Godliman) suspects Donna (Knappett) of stealing her man and she confronts her in a caravan. After a surprisingly violent bit of fighting (culminating in a moment where Donna kicks Donna right in the stomach and sends her flying into a bath), Knappett reveals the horrible truth to Godliman — she can’t be with the man…because she’s his mum. Cue pantsless Alex Horne coming out of the caravan and Godliman throwing up. Cul de Sac could run and run. Why is it so good: It’s all about the Donna acting from Knappett and Godliman — there’s a moment where Jessica Knappett stares daggers at Godliman through the net curtains that should be nominated for a “Take A Break” soap opera award. Combined with the brilliantly weak dialogue (“How’d you find me?” “GOOGLE MAPS”) and the fact that Knappett’s character is asthmatic and needs to keep puffing from an inhaler every thirty seconds (even after, as I say, kicking a woman into a bath), it makes for surprisingly compelling viewing. I worry that if Knappett and Godliman made a full spin-off of that show I’d spend the rest of my days writing 70,000-word mediocre Medium essays dedicated to every single episode. Position in task: Second. True artists are never fully appreciated in their own lifetime. Watch out for: James Acaster not fully understanding why Kerry threw up at the end (“I thought it was just because she’d had sex with Alex Horne.”) 59: Mawaan Rizwan Makes A Really Good Lighthouse Out Of Cards (Series 10, Ep 9) Task: Make the biggest beermat house on this table. While building your house, you must ring the doorbell after exactly one minute, then again after 58 seconds, then another 56 seconds and so on. If you make more than two mistakes with the doorbell timing, you will be disqualified. Your time ends when you ring the doorbell for the last time.
What happened: While other contestants struggled with the concept of the doorbell — either sprinting out of the room so quickly that it destroyed their house of cards (Vegas) or forgetting entirely and quietly fuming at the pointlessness of this whole endeavour (Parkinson) — Mawaan Rizwan had a moment of unbridled Osmanesque clarity. There was nothing in the task that said the doorbell couldn't be moved from its place by the door — and that's what Mawaan did, pulling it off the wall and placing it down next to his beer mats. From there it was a simple matter of building his house of beer mats, casually leaning over and ringing the doorbell at the appropriate time — allowing him to create a superb, looming beer mat lighthouse with a working rotating light. As Mawaan says, not to brag — but it's a work of art.

Why is it so good: It's the way Mawaan makes it look so easy — while walking to the door for the second time he goes "oh, you know what?" and…just takes the doorbell. Intercut with Richard Herring (who is galloping about like a crazed horse from the front door to the room), it's just so effortless on Mawaan's part — "I think having the doorbell's just alleviated a lot of stress," he murmurs, transfixed by the beer mat house, almost as if he's not participating in a task at all, but engaging in an ancient meditative process. But it's not just the cleverness of removing the doorbell. It would have been so easy for Mawaan to just build the tallest beermat house, safe in the knowledge that it would probably be enough to win. But no, Mawaan is not interested in playing it safe. He craves perfection — and the moment that he grabs the torch from out of a nearby lamp and asks Alex if he has something that turns, you know that he will achieve it, the most glorious completion of a task. As the lamp spins around at the end, you're aware you've witnessed something special — a flawless performance, the Taskmaster equivalent of a maximum break in snooker. Meanwhile Richard Herring has somehow managed to get blood all over the door.

Position in task: First, perhaps the greatest single task performance in Taskmaster history.

Watch out for: The way that Katherine Parkinson marvels at the house at the end and says "Oh wow, it's a windmill." Never change, Katherine.

The rage of Joe Thomas simmers underneath.

58: Joe Thomas Finally Loses His Cool (Series 8, Ep 10)

Task: Completely erase this eraser.

What happened: When the contestants were tasked with "erasing" an eraser as quickly as possible, Iain Stirling and Paul Sinha immediately threw their rubbers in the toilet, much to the mild annoyance of Joe Thomas, who had spent several minutes rubbing his eraser down into nothingness using sandpaper and a lifetime of repression. As Iain and Paul argued that the eraser had been "erased" from society, Joe Thomas finally snapped — "I'm so fed up of putting loads and loads of genuine physical effort into these tasks and these other guys coming up with some smartarse workaround, just put some fucking effort in!" A beat and then… "I don't know where that's come from, I've been really nice so far." The crowd goes wild, Greg leads the contestants in a standing ovation. They've done it. They've finally broken Joe.

Why is it so good: When Joe starts his tirade, Iain Stirling looks shocked — "Where the fuck has this come from?" But in reality, it's been there the whole time. You do feel like Series 8 is sometimes a social experiment — how far can you push a lovely sweet man like Joe Thomas before he breaks?
This is the pent-up rage of an entire series of effort with no reward — Joe looks as if he's trying so, so hard throughout the show and just having an absolutely horrible time no matter what he does. Even in victory, he's miserable — he wins an episode with a near-perfect score (24 out of 25) and still finds time to berate himself for a weak pun. This, though, is the first moment when he turns the self-loathing energy out onto the people who deserve it — his fellow contestants. This is the war cry of the tryhard, the boy who is just a bit extra. He has worked until his eraser is a nub and yet here he sits, last in the task because he didn't just lazily throw it into the toilet? No. He will make a stand and he will make NO apologies for it. Except immediately afterwards, when he apologises. A cathartic moment.

Position in task: Last, technically, but he got some bonus points for his erasing (although at least one of those was surely for his rant).

Watch out for: The way Joe looks vaguely embarrassed by the standing ovation. You'd like to think that underneath that melancholy expression of shame, there's a little nugget of pride deep within him for standing up. But it's probably just a nugget of more shame. Poor Joe.

A moving piece of art.

57: Paul Chowdry Puts A Slushy On A Bunny Rabbit (Series 3, Ep 1)

Task: Make the best snowman.

What happened: For this task (set on a very unsnowy day, obviously), some contestants used typical ingredients — Sara Pascoe went for ice cream (cute!), Dave Gorman plumped for instant mash (upsettingly beige!) and Al Murray filled a shelf of a freezer with ice and then stuck a carrot in (what a serial killer would do!). But Paul Chowdry would not use "typical ingredients". No, instead, he took a toy rabbit, put several blocks of ice on its head and then methodically poured a slush puppy on top. "The expression and the tears," he said, watching the deep royal blue colour drip down the rabbit's face, "reflects what I'm going through on the inside." Melancholic piano music soars as Paul takes a step back from his creation. He has visualised his pain, his sorrow, in this one simple piece. And it looks like a rabbit with a slush puppy on its head.

Why is it so good: You can see Greg's rising excitement as we get closer to Paul's attempt. He doesn't know exactly what it will be, but it will be something transcendent, something awe-inspiring, and most importantly, something utterly shit. Of course, Paul doesn't disappoint. Of all the contestants throughout the show, Paul is perhaps the most unknowable — often it feels like he doesn't really want to be there, staring back at Alex during tasks with a kind of miserable, dead-behind-the-eyes expression. And yet below that there's also this sad yearning to succeed, to be understood — you can see a look of confusion on his face as everyone is crying with baffled laughter at his "snowbear" creation (he insists on calling it a snowbear, even though it's very obviously a rabbit). This sense of unease comes across to the rest of the comedians — Sara Pascoe says, "I'm starting to think we shouldn't be laughing at Paul", while Greg's first question (of which he has many) is "is everything alright, mate?" Perhaps we're all just not quite on Paul's wavelength yet — perhaps in years to come we will look back at his work here and realise that in truth, he had created the most accurate depiction of what a snowman feels. You're just ahead of your time, Paul.
Position in task: First for artistry, last for actually looking like a snowman.

Watch out for: Paul's final, glorious line in the VT as the blue liquid drips down the rabbit's head, set to echo throughout the ages in Taskmaster history — "The bastard's crying, innit."

The best Tommy Vercetti impression in the show's history.

56: James Acaster Recreates Grand Theft Auto (Series 7, Ep 10)

Task: Physically recreate a classic video game.

What happened: All of the video game recreations were fantastic in their own way (Rhod Gilbert hitting young people with boxes on their heads with tennis balls, Jessica Knappett's previously mentioned moustached golf-karting Mario, Phil Wang's wonderful Goldeneye and Kerry Godliman's Tetris meltdown), but James Acaster's stood out for the sheer effort that went into it — from the way he swaggered, jumped and ran sideways like Tommy Vercetti out of Vice City, to the rocket launcher made out of cardboard, to the reference to the soundtrack (as soon as James got into his "car" he started fiddling with the radio and saying 'music, lots of music'), to the glorious "Wasted" end. A wonderful, nostalgic thrill ride.

Why is it so good: James didn't have the best time on Taskmaster (from hulahooping disasters to many, many quite real temper tantrums), but this must have made it all worth it. Most tasks exist to torture comedians, but this felt like dessert after the main course — the show giving James the chance to do something fun after so many hours of pain and cruelty. James had the time of his life — mostly because he got to beat up Alex Horne (first with kicking, then with a cricket bat, then with a rocket launcher), but also because you felt like he was living out his adolescence, putting so many wasted hours to good use. And the accuracy! I cannot stress to you how pitch-perfect his impression of the jumping is. This is a man who has studied the GTA jump over and over, who could guide you from Prawn Island to Escobar International Airport like a seasoned taxi driver. A wonderful thing to watch.

Position in task: Mission passed [smooth piano riff]: first place.

Watch out for: James getting on a bike and shouting "stunt bonus! Stunt bonus!" while he tried to do an extremely unsatisfying bunnyhop. Artistry.

Katherine Parkinson, pictured here with an item better suited for carrying water than a net (ie, anything)

55: Katherine Parkinson Pours Water Into A Net (Series 10, Ep 8)

Task: Fill this cup until it overflows. The cup must remain atop the pole. Only liquids may touch the green.

What happened: In a task where contestants had to fill up a cup on a very tall stick without touching the surrounding green — which literally none of them managed to do as they each dropped non-liquid onto the green — Katherine Parkinson still managed to stand out for baffling, brilliant nonsense. First she filled up a very heavy bucket with water and attempted to throw it onto the cup (it didn't work; she just managed to throw a bucket of water on herself). But that was just the warm-up — the most wonderful brainfart was still to come. She took a net and tried to fill it using a hose. "I don't think this is going to be the fastest time," she said accurately, not registering that the water was entirely falling out of the net (due to how nets work), "but I might achieve the task," she added, inaccurately. After a few seconds she looked down at the net and realised what she was doing.
An exasperated little tut, and it's impossible to know if she was more furious with herself, or with the net for betraying her. "I…should have expected that."

Why is it so good: I could watch this moment on a loop and never tire of it. It sums up everything that is brilliant about Katherine Parkinson — a quiet, hopeful dignity even while doing the stupidest things. Her hopeful face, looking up at the cup, while the water that she's hoping to transfer into it runs through the net and sloshes about her feet. There is something so pure and sweet about it all — the look in her eyes, the earnest belief that this, somehow, will work. But of course it won't. It's a net. It is literally designed to contain everything but water. And the dialogue, oh the most perfect dialogue. "I don't think this is going to be the fastest time, but I might achieve the task." If you wrote this scene in a film, the script editor would tell you to take it out — it's too much, too self-knowing, too inherently ridiculous. And yet Katherine, quiet, dignified, utterly baffling Katherine does believe it. She might achieve the task. It just might happen. A net might be able to carry water. And who are we to tell her otherwise?

Position in task: Fortunately for Katherine, this moment was overshadowed by everyone's failure in this task, so she escaped mockery in the studio like water through a net.

Watch out for: Katherine at the start of the task, looking at Alex and asking "where's the liquid?" indignantly, like there's some kind of official branded Taskmaster liquid that she's supposed to be using.

54: Joe Thomas Sneaks Up On Alex (Series 8, Ep 1)

Task: Sneak up on Alex. He will duck down and up every 10 seconds until you are spotted.

What happened: In the first task of Series 8 set in the exciting new trainyard, contestants had to creep across the landscape, ducking behind objects without being spotted by Alex, who is standing on a railway bridge. It's basically a giant game of Grandmother's Footsteps, but Alex is your grandmother and he has a siren (a horrifying thought). While Lou Sanders hid in a red bin and ran about like a confused Dalek, Joe Thomas used brilliant stealth tactics and perfect timing and actually managed to make it to the bridge and get within a metre of Alex. A champion performance.

Why is it so good: After a fairly innocuous start to the show (coming last in two of the first three tasks), this was Joe Thomas announcing his arrival — by being as quiet as a ghost. On the one hand, it's a genuinely thrilling task — Joe sprinting from station wall to piles of wood, dodging Alex's sight by mere milliseconds as Alex whispers "come on, where are you" like he's Tommy Lee Jones in The Fugitive. There's one moment where he cuts it so fine the audience are literally egged on by their own gasps into giving Joe a round of applause, an amazing feat. And yet on the other hand, there's something so beautifully pathetic about Joe at the end — after an absolutely magnificent performance of stealth and skill, he declares that he "made it within cuddle distance"… and then just walks around at the top of the bridge. Alex looks at him blankly — "are you going to stay?" "If I'm allowed to," Joe replies hopefully, as if he's finally found a kindred spirit, a fellow awkward person who likes to walk around in abandoned train stations. "It's quite nice up here." "One of us has to go," says Alex, ominously, like a line out of Waiting For Godot, and suddenly Joe's face falls.
"Oh." The rare chance of friendship, brutally taken away, and Joe walks off sadly. Oh well.

Position in task: First in task, but in happiness? Who knows.

Watch out for: Joe's little face staring up at Alex when he's underneath the bridge, and the way it slowly creeps out behind the stairwell as Alex is busy klaxoning — he's so desperate to get as close as he can to Alex. And yet he will be rejected. As always.

"Oug! Oug! Oug! A gauche!"

53: Hugh Dennis Guides A French Mel Giedroyc Around An Obstacle Course In A Wheelie Bin (Series 4, Ep 7)

Task: Navigate a wheelie bin across an obstacle course whilst blindfolded and without communicating in English. (Team task)

What happened: Hugh immediately put Mel in the bin (and attempted to close the lid), while Mel instructed him in flawless French (speaking as someone whose French is seriously flawed) through the obstacle course. The problem being, of course, that Hugh doesn't speak French — cue a moment where Mel explained in detail that he had to turn around, reverse through a gate and then move on, and Hugh just standing there blankly, like a student who forgot to bring in his Tricolore book. Other highlights include Hugh waiting on a carpet for 11 seconds and then immediately crashing into a sign, and Mel telling Hugh to pop a balloon by saying "tu dois faire de BOOF". Like watching a combination driving exam and GCSE French aural.

Why is it so good: It's mostly the contrast between the levels of enthusiasm of Britain's favourite double act, Mel and Hugh (clearly the joke Alex Horne was proudest of in the show) — Mel, like a substitute French teacher who's had seven coffees, and Hugh, like a Geography teacher who's waiting until 5pm so he can start drinking. Hugh stumbles about while Mel spends most of the task yelling "a gauche! A gauche! A gauche!" — it really had the vibe of a tired dad taking his child around the supermarket in the trolley. Mel's instructions are consistently wonderful — at one point she struggles to describe a high five in French, and just says "we do the sign of basketball! We do the sign of cool!" Thankfully she gives Hugh a few context clues by grabbing his hand and clapping it, otherwise they could have been there a while (and gone through various cringeworthy variants). The sign of a woman having the absolute time of her life, and a man just about hanging on.

Good old Desky.

Position in task: In comparison to Mel's detailed French instructions, the other team used Lolly shouting "SCHNELL SCHNELL" a lot — and won comfortably.

Watch out for: Mel calling Hugh "Oug", what she perceives to be the French for Hugh — but as Greg points out, the French for Hugh is just Hugh. You can't fault her commitment.

52: Jessica Knappett Pretends To Be An Airhorn (Series 7, Ep 3)

Task: Make the best noise. You must begin with 'This is my best noise'. You must remain silent for 10 seconds after your noise. You have one attempt.

What happened: Jessica, having the Most Amount Of Fun A Contestant Has Ever Had On The Show, goes to the shed and finds a bunch of stuff — including a bike, a guitar, a loudhailer and a (completely silent) top hat. She then wonders "how will I put these all together?" before deciding to just… use all of them at the same time with varying degrees of success. "THIS IS MY BEST NOISE," she yells, before weakly strumming the guitar that she's jammed into the bike and yelling "WOOO-OOOOO" into the loudhailer. A few more strums of the guitar, then she finishes it off with the best bit — her air horn impression. "Hyuuuh!
Hyuhh hyuhh hyuuuuuuh!". One ring of the bell and finished. Mozart himself would have been proud.

Why is it so good: We've already seen how much fun Jess has in this show, but this is basically her at her most Excitable Toddler. She mashes as many things together as she can to make a noise and she isn't dissuaded when they all sound a little bit shit — instead she just pulls out the guaranteed hit, the full-throated airhorn. Intercut with James Acaster's attempt (hitting a pair of rubber pants filled with water with a guitar with a bell strapped to it), it's a demonstration of the full range of the Taskmaster experience — James' is methodical, planned and disastrous, and he's immediately furious, while Jessica's is chaotic, nonsensical and almost as disastrous, but she couldn't be happier. It's all about perspective.

Position in task: Second, deservedly, behind the creepy mutterings of Phil Wang.

Watch out for: The awkward ten seconds at the end where she has to remain silent. You can see just the tiniest hint of regret flicker across her face, a moment of "no, actually, what was that."

A coconut businessman, or Morgan the anti-breastfeeding grump, depending on who you ask.

51: Bob Mortimer Creates A Coconut Businessman (Series 5, Ep 3)

Task: Make a coconut look like a businessman.

What happened: The task involved opening a briefcase (a challenge which took Mark Watson six minutes to achieve) and making the coconut inside look like a businessman. Naturally, Bob Mortimer asked for more fruit and some pens and immediately retreated to his dressing room, where he directed a short film on his mobile phone: an orange named Mary Downbyourside introduced herself as a lathe operator, an apple named Slow Peter told us he painted prison gates, and then a sudden smash zoom on a furious looking coconut, who informed us he was "a FUCKING BUSINESSMAN". A baffling piece of art from a unique mind.

Why is it so good: Literally any task involving Bob Mortimer is a contender for this list — the bizarre, brilliant leaps in logic are perfectly suited to being on Taskmaster, and it's really no surprise he won his series at a canter. Dreaming up three separate fruit characters in ten minutes is actually pretty bog standard for Bob — I desperately want to hear more of Slow Peter's lilting Scottish accent, and to learn exactly how long he's been in the prison gate painting game. Arguably though the best bit of this task came from Greg himself, who made the point that the (unnamed) coconut character himself doesn't look like a businessman, but rather just tells us he is one (he's fallen into one of the classic pitfalls of the coconut character creating world — show don't tell…). Greg revoices the attempt — the orange becomes Barbara who likes horses, the apple becomes Quentin who likes ballet, and the coconut is now Morgan, who doesn't think women should be allowed to breastfeed in public. He's got a point. Greg, not Morgan the coconut, who sounds like a right dickhead.

Position in task: A rare misfire from Bob — last place.

Watch out for: Bob remarking upon seeing the coconut — "Ah, the coconut, the largest of all the nuts." No time for an explanation.

50: Alice Levine Puts Alex Horne's PIN into a chocolate egg (Series 6, Ep 8)

Task: Put something genuinely surprising inside a chocolate egg.

What happened: While Tim Vine managed to trap a fly in his chocolate egg (see earlier in the countdown) and Asim Chaudhry put the ground version of a fly, the worm, in his chocolate egg, Alice Levine took a different route.
Her egg was wrapped in tin foil with a note that read "This egg contains the key to your fortune" — and when Alex opened it up, there was a tiny little Taskmaster style task. "Your personal identification number is [redacted]", read Alex, with a look of disturbed confusion on his face. Alice had managed to get his PIN code — which also acts as the number for his phone and his burglar alarm. Alice Levine had effectively told Alex that she could steal his identity, through the medium of a chocolate egg.

Why is it so good: It's the creepy slow build that makes this one — Alex slowly unwraps the egg as Alice Levine looks on with a kind of demonic glee. "It's unsettling, Alice," he murmurs as he looks at the tiny task, proof that Alice had broken into his life and had access to his money, his private messages and his home. "That's a bonus emotion!" says Alice, always one to look on the bright side of these things. Primarily though you can tell that Alice is really, really enjoying her newfound role as the Evil One — you'd have expected this kind of creepy privacy-invading nonsense from Russell Howard, maybe Asim, but Alice? There's a real Gone Girl vibe to her here — her sweet exterior hiding an ice-cool operator who could potentially rob Alex of everything in his life and frame him for her murder. Very off-brand, but then that's the joy of Taskmaster — people who you thought were just bubbly Radio 1 DJs turn out to be capable of truly scary things.

Position in task: First. Although that might be blackmail because Alice has access to his phone.

Watch out for: the way that Alice cackles back in the studio when Alex reveals that it was his wife that gave away his PIN, and the way he sadly says "there's very little left now." Poor, poor Alex.

49: Rhod Gilbert Ties Up Alex Horne (Series 7, Ep 10)

Task: Tie yourself up as securely as possible.

What happened: In one of the all-time greatest two-part tasks, where contestants had to tie themselves up and then put on a boiler suit when they heard the siren (see earlier in the countdown, with James Acaster having a full-blown tantrum), Rhod Gilbert's lust for humiliating Alex gained him maximum points here. Instead of focusing on tying himself up as tightly as possible, Rhod tied Alex up instead — because the winner is the person who takes Alex the longest to untie. This was not only a great strategy, it completely negated the rugpull of the siren: when the siren went off, Rhod was still busy tying up Alex, so once he remembered what the siren actually meant (which, admittedly, took a long time), he just had to calmly walk over to the boiler suit, pop it on, take it off, and then get back to tying up Alex. Compared to the other contestants' efforts — hopping and swearing and furiously untying themselves — it was very, very easy. In the end, Alex couldn't free himself to "untie" Rhod (despite his best efforts), so his score in this task is infinite. A performance which earned him the respect and admiration of his peers — as Phil Wang said, "We'd love to hate that, but that was fucking great."

Why is it so good: It's the fact that this is such a fiendishly cruel task which Rhod pre-empts: by being fiendishly cruel himself. Make no mistake, Rhod isn't just tying up Alex to restrain him — he's doing it to destroy him. He ties every limb to the chair, he ties around his chest ("that's a bit tight", squirms Alex, with little response from Rhod), he even puts Alex's shoes on his hands and puts a bucket on his head.
Whereas usually that kind of treatment might elicit sympathy from the other contestants, you can feel instead that the tide has turned — they were humiliated by Alex and his tying up task, and now they will enjoy his destruction. It's a perfect performance by Rhod — only slightly marred by the way that he's so entranced by his job of tying Alex up that he entirely forgets what the siren means. "You… you were supposed to do something when you heard the siren," groans Alex, mournfully, hoping for just a little respite from this very strange bondage session. But then that's Rhod. Clinical, professional, focussed on the task at hand. When Alex asks him if he's done this before as he ties him up, he replies with a quick "yep". The stories he could tell. Terrifying.

Position in task: First. Where else?

Watch out for: The way Alex tries to get out at the end (which involves an attempt at slouching his way out of the ropes). Also the way Rhod offers to get Alex something for lunch — "what would you like?" "Something sharp."

Like a brand new Smack the Pony sketch.

48: Sally Phillips Gets Intimate With A Water Cooler (Series 5, Ep 5)

Task: Create the most remarkable water cooler moment using a water cooler.

What happened: While Bob Mortimer did his party trick of pulling an apple in two, and Nish very slowly kicked his water cooler after a very long run up, Sally Phillips went above and beyond for the show, with an extremely X-rated short film where she, well, what she did was, well, she… in Greg's words, "she fucked a watercooler". Her red heels and screams rocked the caravan as we watched like disgusting watercooler voyeurs from the outside, a bobbing watercooler head visible through the window. One of the most horrible, brilliant and arousing things on the show.

Why is it so good: It's Sally's commitment to the bit that sells this — the sexy music, the extremely vivid moans, the Titanic hand on the glass, the way that the water gushes (oh god) out of the caravan door at the end, which raises a lot of questions about the anatomy of a watercooler and quite how much they, um, you know what, never mind. Sally seems almost embarrassed by it — not because it's smutty, but because "it's episode five and it's old hat by now, you're probably bored with it." This is the great thing about Sally Phillips — she's always looking for the next weird thing, always looking to push the envelope further, to the extent that she'll find her old ideas dull after just a few minutes. Plus there's something brilliant about how dismissive she is about the whole thing — her reason for doing it was that she "didn't have a better idea", so I guess why not have sex with a water cooler? Also, extra points for making Alex squirm as she recounts that midway through the task she said "I probably shouldn't do this, should I?", and was greeted by a "ring of men" (her words) egging her on. Alex has a real "oh no, we're going to get letters" face on him at that.

Position in task: She came first. Ahem.

Watch out for: Alex's voice breaking as he introduces Sally's attempt — he really doesn't like sexy things.

47: Johnny Vegas Whispers Sweet Nothings To A Dying Mannequin (Series 10, Ep 10)

Task: Neatly hang all of Bernard Mannequin's clothes on the clothes rail. You may not step over the line.

What happened: This was one of the most violent tasks in the show's history, where contestants had to undress a mannequin and then use his upsettingly lifelike corpse to pull a clothes rail towards them.
And Johnny Vegas, a man who feels all the emotions all the time, struggled with the dichotomy inherent in the task — both caring for Bernard by undressing him, and yet also using him violently as a tool. He veered from the paternal ("Let's get you undressed,") to the vengeful (flinging Bernard's body onto the floor in an attempt to get a missing item of clothing and yelling "GET YOUR SOCK!"). Eventually, Bernard's battered body was flung over the top of the clothing rail as Vegas whispered "my boy! My beautiful boy," into his shattered ear, like the lead in the fifth act of a Shakespeare play. "You're absolutely fine, just…sleep on the rail," he murmured, hanging Bernard by the neck on the clothing line, before turning to Alex with a haunted look in his eyes. "Finished." A haunting, horrible task.

Why is it so good: Johnny doesn't do anything by halves — he is a man of passion and fits of anger. If you present him with a mannequin, he will love that mannequin like his son. If he thinks that mannequin has betrayed him by hiding the hangers from him until the last few minutes of the task, he will scream "WHY DID YOU HIDE THE HANGERS UNDER THE CHAIR?!" and throw his cursed son to the ground. Love and hate are two sides of the same coin for Johnny, and both his joy in caring for this mannequin and the pain he experiences in hurting him are keenly felt. There is a moment when Johnny has to rip an item of clothing off of Bernard, and he sighs like a parent teaching a child — "this is going to hurt me a lot more than it's going to hurt you" — and you genuinely believe him. Not just because, you know, his son is a mannequin. Johnny has invested in this. He has chosen to love, he has chosen to feel pain — and that makes the final image, Bernard's lifeless head lolling back on the clothing rail as Johnny comes to terms with what he has done, all the more poignant. In search of Taskmaster glory, what has he become?

Position in task: Second. But at what mannequin cost?

Watch out for: Johnny first noticing the giant hole in Bernard's head — that he put there — and gasping and sobbing. He tries to cover it with a bobble hat, but it's no use. Bernard will never model headwear again.

Greg's reaction to his befezzed mother.

46: Rhod Gets A Picture of A Befezzed Greg's Mum In The Bath (Series 7, Ep 9)

Task: Take a photo of yourself wearing this fez in the most unusual situation.

What happened: Any task that involves Rhod Gilbert and "photo" and you're fairly sure you know which way it's going to go — with that picture of Greg in his speedos photoshopped into some background of the Welsh coast or whatever. Not so this time, when the contestants had to take a photo of themselves wearing a fez in a weird situation — Rhod managed to get a photo of Greg's mum, wearing nothing but the fez and a grin, in the bath.

Why is it so good: It's all about Greg's reaction. The photo is revealed, the audience laughs, but Greg is so sure it's going to be the Usual Photo that he doesn't turn around and instead addresses the audience: "If this is a fez superimposed on a picture of me looking fat…" When he does turn, his face falls faster than a hulahoop from James Acaster's hips. He does a literal Scooby Doo double take, barely able to comprehend what he's seeing. The microexpressions on his face are a thing to behold — disgust, anger, admiration, confusion, laughter — before he lets his head fall into his hands. "Jesus Christ.
You traitorous old woman." Every time Rhod pranks Greg it feels like getting a little secret insight into their friendship — and this is one of the many times in the show that you feel like their friendship is going to come under considerable strain as a result. As far as I know, Rhod is still waiting for the promised repercussions.

Position in task: Last on a technicality (because he wasn't actually wearing the fez.)

Watch out for: When Greg reveals that that's his mother, Rhod jokingly says "And she's here tonight!"… before Greg looks at him seriously and says "Yeah, she is actually." Rhod quickly shuts up.

Joe Lycett, in the bath. Not pictured: Lolly, stealing Joe's clingfilm.

45: Noel Fielding, Lolly Adefope and Joe Lycett Fail To Put Stuff In, Put Clingfilm Over and Fill A Bath With Water

Task: One member must put as many different things as possible into a bathtub, another member must seal the top of the bathtub with clingfilm, and another member must fill the bathtub with water.

What happened: In this disguised team task, the contestants were given separate tasks but had to work together to ensure that all of the tasks were completed — you fill the bathtub with items and water, and then cover it in clingfilm. The other team (Team Happy French Teacher and Sad Geography Teacher, Mel and Hugh) worked this out relatively quickly and completed it in a fair amount of time. Team Kidz (Noel, Lolly and Joe) did not. Instead of communicating with each other, Joe started clingfilming the bath and Lolly started throwing tables (literal tables) into the bath. Every time Lolly put something in, Joe threw it out angrily, and every time Joe clingfilmed, Lolly destroyed it with a table (a literal table). Meanwhile Noel Fielding runs in with a hose and starts dousing everything in water, to no effect. By the end of the task, Lolly has stolen Joe's clingfilm and hidden it in a hedge, while Joe has thrown most of her tables (she has so many tables) onto the ground. As they're walking away at the end of the disaster, Joe has a brainwave: "we should have told each other our tasks."

Why is it so good: There's an episode of Frasier where Niles and Frasier try to run a restaurant together, but because they don't communicate with each other, they each think the other hasn't added the liquor to the flambé, so they add far too much and accidentally blow up half the restaurant. If Noel, Lolly and Joe were in that episode of Frasier, every single one of the people in that restaurant would have died from flambé explosion, immediately. What makes this so fantastic is firstly Noel's attitude — at the start he whispers to Alex "Are we supposed to work together, or…" but then entirely forgets that concept as soon as the task begins, instead "pissing water into a bath while Rome burned" (to quote Greg). Secondly is the complete lack of talking any of them do — Lolly starts throwing things in, Joe throws them out and starts clingfilming, Lolly starts stealing the clingfilm, Joe starts throwing things out — at no point does anyone stop and say "wait, what exactly are we supposed to be doing here?" The closest anyone gets to talking in this task is Joe screaming "where's my fucking clingfilm" — cut to Lolly throwing it into a hedge. It's simultaneously one of the most chaotic and yet also one of the most silent battles in Taskmaster history — a French farce without any of the talking, just the slapstick.

Position in task: Last. Team Grown-ups wins again.
Watch out for: The reveal in the studio that Joe managed to break two of Lolly's tables, and also that Noel managed to pour quite a lot of water into the bath — only for the plug to be dislodged by one of the many table kerfuffles.

44: Asim Chaudhry Mimes Made-Up Animals To His Team (Series 6, Ep 7)

Task: Write down as many obscure animals as possible in 3 minutes — and then convey the animals on the list to your teammate through mime.

What happened: Asim's task — delivered by a whirring inflatable shark — was to write down as many obscure animals as he could think of. Of course, Asim being Asim, he wasn't going to go for the obviously obscure ones (duck-billed platypus, etc) — instead, he came up with some entirely new ones. "Blue dog. Three-eyed raven. Anorexic elephant. Eight bollock cat…" The list went on and on, and Asim thought he had done a great job. Until he went into the next room and discovered that he would have to act them out to his teammates, Tim Vine and Liza Tarbuck. What followed was a ludicrous, terrible game of charades where Asim had to act out a series of stupid made-up animals, which ended with Asim lying on the floor, trying to be a laser-beam tortoise. I have no idea either.

Why is it so good: It's really all in the split second when Asim is listening to Liza Tarbuck reading out the second task: his face drops, his eyes widen, he slowly strokes his beard in a way that betrays his utter panic. He realises the terrible task ahead of him is entirely of his own making. It's another example of very clever editing by the Taskmaster team — by showing us Russell's orthodox take on the game first, and revealing what the second task is with him, we build up anticipation for how Asim is going to tackle it. "Strap yourself in," says Greg gleefully before we see Asim coming up with his made-up animals. It means that each one of Asim's ridiculous animals has two "jokes" — there's the ridiculousness of the animal, but there's also the dramatic irony — we know something Asim doesn't know, that he'll be forced to act these out as well. Simple, but very effective. Plus there's a moment where Tim Vine guesses very confidently "a shy badger" which is so wonderfully wrong.

Position in task: Last. Orthodox wins out, sadly.

Watch out for: Liza Tarbuck proudly yelling "an eight bollocked cat" when she accurately guesses Asim's animal, like it's her favourite breed.

A look of quite serious regret.

43: Katherine Ryan Makes A Mess (Champion of Champions, Ep 2)

Task: Make the biggest mess and completely clear it up.

What happened: While the rest of the competitors in this special decided to take the task literally (doing the very male thing of making a big old mess in the Taskmaster house and then desperately vacuuming it up), Katherine Ryan went slightly more abstract — the mess she made was more emotional, as she called up her sister and told her that she'd caught her husband cheating on her. As if that wasn't enough, she then called up her Dad to tell him that her sister was pregnant with a man who wasn't her husband. And OK, her (lovely Canadian) dad saw through that pretty quickly, but when Katherine went to "clean up" her mess by calling her sister again and telling her that she'd gotten confused, her sister revealed that she'd just told her husband that she was cheating on him!
Like a Canadian version of Eastenders (Eh-stenders?), it snowballed from there — Katherine called up her sister's husband to tell him that her sister was joking, but couldn't get through and had to leave a message with his colleague. The whistle blew and…that was the end of the task. But you felt like the damage done to the family was just beginning…

Why is it so good: It's Katherine's dedication to making a mess that you really have to admire — there's a steel to her from the get-go where she just casually tells her sister that her husband is cheating on her in a less-than-three-minute phone call. The way that she then immediately moves on to her dad, dropping another fake bombshell, is so breathtaking that Josh Widdicombe genuinely looks like he's about to throw up with second-hand embarrassment. What's great (and a massive relief) is that Katherine's sister gives as good as she gets — it turns out that she wasn't cheating on her husband and that she was pranking Katherine back. Although, just typing this out now, I realise that could also have just been something they said on the show to tidy everything up after Katherine accidentally revealed a huge family secret on national TV (well, Dave). Regardless, it was a wonderfully messy performance that wasn't totally tidied up by the end, but as Greg says, her commitment to entertainment is impressive. Plus, as Katherine says, "people shouldn't be getting married anyway." Merry Christmas 2017!

Position in task: First, where else?

Watch out for: Katherine's adorably Canadian Dad laughing his way through a story about his married daughter being pregnant with another man and saying two lovely phrases — "Is it April 1st?" and, more marvellously, "I didn't come over here on the last banana boat, you know." Where's THAT parent-comedian travel show?

42: Mel Giedroyc Makes An Exotic Sandwich (Series 4, Ep 8)

Task: Make the most exotic sandwich — and then, eat your exotic sandwich.

What happened: One of the first two-part tasks and arguably still the best. The contestants had to create the most exotic sandwich they could dream up, and most went for entirely disgusting or inedible flavours (Noel Fielding went one step further and put two slices of bread around Alex Horne's head). Mel, on the other hand, had a brainwave. "Let's go all sweet! Heck," she said, for emphasis, as if "heck" were the foulest swearword in the English language, "everyone else is going to go savoury, aren't they?" She decided that she was going to "blow the Taskmaster's tiny mind", presumably by overdosing him on sugar: her sandwich went bread, chocolate spread, chocolate orange, bread (or breadski, as she insisted on calling it), chocolate spread, Double Decker, bread, chocolate spread, Crunchie (at this point watching the task my teeth started to physically hurt), bread, chocolate spread, M&Ms (typing this out is making me feel quite ill), bread, chocolate spread, Maltesers, bread, chocolate spread, marshmallows (lightly bronzed by a blowtorch). Then, when the whistle blew and she stood back from her creation ("it looks like some kind of Japanese pagoda", she said proudly), Alex presented her with a lunchbox…and the second task. Mel's face crumples. "Eat your exotic sandwich, fastest time wins." A moment for the ages.

Why is it so good: It's another one of those tasks where the contestants are hoisted by their own petard — in this case a food pagoda full of so much sugar it could make you see through time.
The way that Mel can’t even get the final line of the second task out (“Your time starts now”) because she’s just thinking about what she’s done, and what she now has to do — she softly murmurs, “Oh, gang…”, in the manner of a scout leader who has just led her troupe into the cave of a ravenous bear. Credit to Mel, she doesn’t shy from the task — while she doesn’t try to Scoobysnack it (eating it vertically like a normal sandwich, surely a guaranteed five points right there), she makes a good go of it by smushing the sandwich down and holding it sideways, taking “four good sized bites” as Alex says. Mel was already a firm favourite in the show, but the way she, one of Britain’s best loved comedians for over 20 years, sits there, chewing this unholy, cloying nonsense of a sandwich, her hand a mess of marshmallow, her face covered in chocolate spread and icing sugar, crying with laughter — with this task, her place in Taskmaster history was secure. Presumably held down by chocolate spread. Position in task: Second last in exoticness, second in eating. Fun (but predictable) fact about this task — the exoticness of the sandwich was the direct inverse of how much of the sandwich competitors were able to eat. So (bonus points aside) everyone ended up with 6 points, making this task functionally useless. Hooray! Watch out for: the reason Mel managed to get a bonus point — she accidentally got a blue M&M stuck up her nose during the task (after a particularly big bite) and had to enlist Alex’s help getting it out. To quote Greg, “those showbiz nights, snorting M&Ms…” “You’ve been a bad boy, Alex.” 41: Rob Beckett Dresses As A Grandma And Douses Alex With A Power House (Series 3, Ep 2) Task: Surprise Alex when he emerges from the shed after an hour. What happened: We’ve already seen how Al Murray surprised Alex when he came out of the shed (gong, dong, etc), and there were a lot of great candidates for winners of this task (Sara Pascoe framing Alex for murder, Dave Gorman making the crew take their clothes off and then jumping out of bush), but for sheer baffling nonsense, the winner had to be Rob Beckett. When Alex opened the door he was greeted by Rob Beckett, dressed as a little old lady, sitting on a brown ‘70s sofa next to a bookcase, who shouted “Alex, you’ve been a bad boy!”, pulled off the blanket on his lap to reveal an industrial power hose, and then sprayed him in the face for 30 seconds while laughing maniacally. To quote Alex, “there’s a lot going on here.” Why is it so good: obviously it’s blissfully, beautifully bonkers from the get-go, but what probably sells this task is how funny Rob Beckett finds it. There are times on Taskmaster when playing it straight is the best option (I’m thinking Rhod Gilbert’s clinical attitude to torturing Alex, or Rose Matafeo mourning a chickpea). But there’s a joy with Rob here — he starts laughing as soon as the water sprays into Alex’s face. He’s so pleased with himself that he abandons his grandma character almost immediately. There’s also a lack of context that gives the whole thing a surreal, dreamlike vibe — who is this grandma character he’s playing? Is she Alex’s grandma? Why has Alex been a bad boy? Why is her preferred mode of punishment to spray him in the face with a powerhose? Why does she say “bad boy” in such a sexy way? There are no answers to these questions — they are just non-sequiturs designed to confuse Alex into submission. And it works. Position in task: First, deserved. No amount of gonging and donging could have beaten this. 
Watch out for: Rob at the end, choking with laughter, turning to a visibly dripping Alex and saying "You alright? Are you wet?", as if he hadn't just sprayed him in the face with a powerhose.

So just two more parts to go of this never-ending distraction from the never-ending horror that is life in 2020 and 2021! Stay tuned for more genuine anger, more heartbreak and more comedians eating terrifying amounts of sugar just to placate your lust for their suffering! Thanks for reading and see you hopefully before the end of the world…

If you're enjoying these, please give me a follow/wave on Twitter: @jackbern23
https://medium.com/@jackbern23/the-best-124-moments-in-taskmaster-history-as-a-way-of-escaping-the-despair-of-2020-and-by-f13a52ec5057
['Jack Bernhardt']
2020-09-18 00:00:00
['Task Master', 'Regret', 'Eggs', 'Sexy Time', 'Premium Chocolate Bars']
Gift Guide 2020
Photo by Annie Spratt on Unsplash

It's no secret that 2020 is a completely different year than any other. For many families, it may not be realistic to give gifts or to splurge on anything due to the plethora of unforeseen changes. Yet, that doesn't mean this year has to go without doing something special for those that we love. Here's how.

Think back to your own experience of receiving a gift, whether that be from a friend or family member. Which of these gifts did you feel was most special? Most memorable? Something you wish you could relive getting again? Now put yourself into the shoes of the individual you are getting a gift for.

If it is in your means, give one bigger gift, rather than many small gifts. Narrowing down to just one thing that the person wants can be a great way to get them exactly what they've been asking for, and the rest can come at a later time. Everyone understands the struggles of getting gifts when work can feel unstable and many have less money than before. But having that one gift that the person has been wanting can be so exciting and exhilarating. And you will know it is not a gift that's going to get forgotten quickly or go unappreciated.

Secret Santa, or a variation of it. Spending limits, shopping for only one person, and staying anonymous make a combination that's hard to pass up given the current circumstances. And chances are, it's going to make the gift more creative and special rather than basic, which is a huge win. All you have to do is know the person you're shopping for and pay attention to what they say they wish they had.

DIY the gift. If you happen to be more comfortable with the crafty side of gift giving, this may be perfect for you. There is nothing like the one special gift that you make for the other person. Whether it be because of their unique preferences or because you want it to feel more special, DIY shows that you put time and care into the gift, not just pressed "buy now" on Amazon. Not only that, but it can be so rewarding making something with your hands and watching it transform into something another person can enjoy.

Gift experiences. Especially for those who are older, physical gifts become less meaningful as opposed to experiences. Whether the experience is a nice dinner with all of the family together or a trip for when all are allowed to travel, whatever it may be, it is worth more than any physical gift one can receive. Memories last a lifetime, especially now that we have a camera in our pockets and the ability to capture all of it.
https://medium.com/illumination/thoughtful-meaningful-gifts-for-christmas-2020-2cfdb0bc25d8
[]
2020-12-21 22:23:41.417000+00:00
['Life', 'Christmas', 'Holidays', 'Happiness', 'Gifts']
Embed NLQ and Augmented search into your applications, with Db2 Augmented Data Explorer
IBM Db2 Augmented Data Explorer (ADE) is a search and discovery tool that understands your natural language search terms and returns data from your database, along with suggestions for subsequent searches. Db2 ADE understands your metadata, infers relationships between your tables and uses statistical detectors and NLG to present the results of its analyses in natural language and in the form of visualizations, telling you a story with your data. With Db2 ADE V0.7 we bring REST APIs that allow you to embed NLQ and Augmented search in your applications.

Before you begin: If you do not have Db2 Augmented Data Explorer, download the beta version here, and use the resources to help you get started. After you install ADE on a server, you can access it in two ways:

By typing https://hostname:5000 into a browser, to perform all of these operations through the UI

Using ADE's REST APIs

Here, we will use the REST APIs in a Jupyter notebook to connect to a Db2 database and perform a natural language search to get insights about the 'Average money spent' by customers. We will then use the data that was retrieved to plot a graph and also read the insights that ADE generates for us. You can access the original notebook that has more detailed analysis here.

The API Workflow:

1. Authenticate the user with the POST /api/v1/sessions endpoint

2. Connect to a data source with the POST /api/v1/connections endpoint

3. Crawl and index the data in your connected data sources with the PUT /api/v1/status endpoint

4. Send natural language search text to get recommendations using the POST /api/v1/suggestions endpoint

5. Retrieve the data and natural language insights using the POST /api/v1/results endpoint

Setting up

We are going to import the required libraries and create a variable that stores the base URL for the REST API. Update this variable to the host name of the server that has Augmented Data Explorer.

1. Getting authenticated

Use the POST /api/v1/sessions endpoint to authenticate a user and create a session; it returns an auth token, which will be used by the subsequent API calls. You must be authorized as a user to access the endpoints. The level of access a user has depends on the permission level the user was assigned when the Admin user added them. Here we are authenticating with a user who has the privileges to create connections and conduct searches. Three permissions/roles are available to assign to users:

Search, which is read-only access to search and exploration

Setup/edit data, which allows for modification of connections and crawls

System administration, which allows the creation and deletion of users

To make it easier to issue subsequent calls with the proper authorization, we will create a header variable with the token.

2. Setting up connections

The POST /api/v1/connections endpoint creates a new connection to a Db2 data source, using the hostname, username and password for the database. You can specify the schema and tables that you want to make available. All the columns in the selected tables will be available for crawling. If the connection was successful, you will see a 'Successfully created connection.' message in the response.

3. Crawling the connections

Using the PUT /api/v1/status endpoint. Before you can search or get results from Db2 Augmented Data Explorer, you need to crawl the connection that you set up previously.
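Since the original notebook is not reproduced here, below is a minimal sketch of steps 1 to 3 in Python with the requests library. Only the endpoint paths and the https://hostname:5000 base URL pattern come from this article; every payload and response key (the login fields, the token key, the Bearer-style header, the crawl payload) is an assumed name for illustration, so check the ADE beta documentation for the actual schema.

```python
# Minimal sketch of the first half of the ADE REST workflow (steps 1-3).
# Endpoint paths are taken from this article; all JSON key names below
# are assumptions for illustration, not documented ADE API.
import requests

BASE_URL = "https://ade-host:5000/api/v1"  # update to your ADE server

# 1. Authenticate: POST /api/v1/sessions creates a session and returns
#    an auth token ("username"/"password"/"token" keys are assumed).
session = requests.post(
    f"{BASE_URL}/sessions",
    json={"username": "ade_user", "password": "passw0rd"},
    verify=False,  # beta installs often use self-signed certificates
)
token = session.json()["token"]
headers = {"Authorization": f"Bearer {token}"}  # assumed header format

# 2. Connect to the Db2 data source: POST /api/v1/connections.
#    The schema and tables match the article's demo; payload keys are assumed.
connection = requests.post(
    f"{BASE_URL}/connections",
    json={
        "hostname": "db2-host",
        "port": 50000,
        "database": "SAMPLE",
        "username": "db2inst1",
        "password": "passw0rd",
        "schema": "ADEDEMO",
        "tables": ["CUSTOMERS", "TRANSACTIONS"],
    },
    headers=headers,
    verify=False,
)
print(connection.json())  # expect a 'Successfully created connection.' message

# 3. Kick off crawling and indexing: PUT /api/v1/status (payload assumed).
requests.put(f"{BASE_URL}/status", json={"crawling": True},
             headers=headers, verify=False)
```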
The crawling process connects to the database and stores information about the data in an Elasticsearch index. You can use the GET /api/v1/status endpoint to check the progress of the crawling process. When crawling is False, crawling is complete and we can start searching.

4. Searching

Using the POST /api/v1/suggestions endpoint. Searching in Db2 Augmented Data Explorer involves sending search text and getting back query suggestions for this search text. These suggestions are valid query objects, which we can use to get data.

Objective of our search: Get insights for 'Average Money Spent' by our customers, from the Customers table in the ADEDEMO schema. The schema has a "Customers" table and a "Transactions" table.

Tables in the ADEDEMO schema

We want to find the relationships between 'Average Money Spent' and Gender, Education, Age or Marital Status. We first want to get some suggestions from Augmented Data Explorer about the analyses we can do regarding the Avg Money Spent. We get all the search responses as suggestions from ADE in JSON format. Suggestions are composed of queries. Each query object has details of the column names, aggregate functions or groupings it used to retrieve the result. We then use the query object to get data and natural language insights using the /results endpoint.

5. Getting data and natural language insights

The POST /api/v1/results endpoint uses the query object that is generated by the suggestions endpoint to retrieve the data, a recommended visualization type that can be used to represent the data, and natural language insights. We use the query object that had Avg_Money_Spent and Region:

Displaying the data fetched from Db2

Reading the insights generated

The insights from Augmented Data Explorer tell us there is a weak relationship with Gender.

Visualization

ADE recommends a chart type that would best suit the data fetched. To get the recommended visualization, the user types…

Plot a graph with the data retrieved: We use the recommended chart type to plot a chart using matplotlib.

We followed the same steps with the other query objects and, using statistical detectors and NLG, we quickly learn that Average Money Spent has:

Weak relationship with Gender and Region

Strong relationship with Customer Value

No meaningful relationship with Month

Moderate relationship with Education

Conclusion

We have conducted a bivariate analysis of Avg_Money_Spent without writing any complex queries or conducting correlation and exploratory analysis by hand. With ADE, you do not have to run SQL queries to retrieve data from your database through your application, and you do not need to know the table names, their structure or the exact column names. ADE provides suggestions for searches that can be performed, based on the search text, and will summarize the results of the statistical analyses it performs in natural language.
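To make the search half of the workflow concrete as well, here is a matching sketch of steps 4 and 5, continuing from the BASE_URL and headers variables defined in the earlier snippet. As before, only the endpoint paths, the search text and the Avg_Money_Spent/Region columns come from the article; every JSON key and the bar chart type are assumptions for illustration.

```python
# Sketch of steps 4-5, continuing from the earlier snippet (BASE_URL, headers).
# Endpoint paths, the search text and the column names come from the article;
# all JSON keys below are assumed names for illustration.
import matplotlib.pyplot as plt
import requests

# 4. Send natural language search text and get query suggestions back.
suggestions = requests.post(
    f"{BASE_URL}/suggestions",
    json={"search_text": "Average money spent"},  # assumed key
    headers=headers,
    verify=False,
).json()
query = suggestions["suggestions"][0]["query"]  # assumed shape: take one query object

# 5. Use the query object to retrieve the data, the recommended chart type
#    and the natural language insights.
result = requests.post(
    f"{BASE_URL}/results",
    json={"query": query},
    headers=headers,
    verify=False,
).json()
print(result["insights"])  # the natural language summary (assumed key)

# Plot Avg_Money_Spent by Region using the recommended chart type.
rows = result["data"]  # assumed: list of {"Region": ..., "Avg_Money_Spent": ...}
regions = [row["Region"] for row in rows]
spent = [row["Avg_Money_Spent"] for row in rows]
if result.get("chart_type") == "bar":  # assumed key and value
    plt.bar(regions, spent)
    plt.xlabel("Region")
    plt.ylabel("Avg_Money_Spent")
    plt.title("Average money spent by region")
    plt.show()
```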
https://medium.com/ibm-data-ai/embed-nlq-and-augmented-search-into-your-applications-with-db2-augmented-data-explorer-c21abdfb796c
['Shruthi Machimada']
2019-10-31 20:38:19.772000+00:00
['IBM', 'Augmented Data Explorer', 'Natural Language', 'API', 'Db2']
You Can Be Straight and Give a Guy a Blowjob
Photo by Jakob Owens on Unsplash

Before Tinder, There Was AOL

I was 16 years old in 1998. Before the days of online dating. No Tinder, Grindr, or OkCupid. If Craigslist was around, I certainly didn't know about it. For me, that meant getting my jollies out the old-fashioned way: via AOL chatrooms. Virtual meeting rooms where folks came together to chat about a variety of topics. There were chatrooms for just about everything, but the most popular ones had to do with sex and porn. People would meet up online, engage in cybersex, organize sex dates, or trade pornographic pictures and videos. I was there for the porn. I'd go to a chatroom, chat with some strangers, and end up trading pictures with them. Every time I logged on to AOL, I'd hear the iconic "You've got mail" and open up my inbox to a plethora of porn. Lesbian porn, blowjob porn, teen porn. You know, your typical straight dude porn.

Then one day, I opened up a file named hotstud.jpg. And there he was. A hot stud indeed, letting it all hang out. Big muscles, nice tan, and a huge hard cock. I quickly went to close the file, but then I hesitated. My eyes wandered to his cock, and I got turned on. Fuck. The naked girls were a no-brainer. The couples fucking I could understand. But getting turned on by a picture of a naked dude? I had a hard time swallowing that (pun very much intended).

Terrified I Might Be Gay

I lost my virginity to a girl at a party when I was 15 years old. My dad picked me up after, and I was sure he could smell sex on me. I'd had two girlfriends by the time I saw hotstud.jpg, so in my mind, I was straight. I didn't have a concept of sexuality at that point, but I knew that being gay was going to fuck things up for me. I didn't know how people close to me would react. Making friends at school was already challenging for me. I was the kid getting pushed into lockers, made fun of, and someone once threw me in a trashcan, ass first. To say I didn't entirely fit in is an understatement. Would I be tormented more than I already was? I came from a loving family but would my parents still accept me? Was it time to kiss my dream of having a wife and kids goodbye? I didn't know what to think or how to proceed. Mostly I was confused. Women always turned me on, and up until now, I'd only ever wanted a girlfriend. I didn't want to be turned on by images of naked men. Remember, this is 16-year-old me talking.

But I Never Deleted Those Images

I started spending more time in chat rooms talking to guys. I solicited more pictures of naked guys. My collection grew until I had a pretty decent folder and was routinely masturbating to these images. I wasn't using gay porn exclusively, but it was in heavy rotation. These photos were turning me on. Was I just a horny teenager, or was I attracted to guys as well? Could I be straight but also into masturbating to photos of cocks? I'd spend hours on AOL chatting and trading pictures with strangers. One day I received a private message from a guy claiming to be 18 years old. We started talking, and he eventually asked me if I wanted to meet up. "What for?" I asked him. "I'll give you a blowjob, and you can give me one too if you want," he replied. Fuuuuuuuuuck. I went to close the browser as fast as I'd wanted to close hotstud.jpg. But I hesitated. I got turned on. The idea of getting a blowjob from a stranger sounded hot. I'm not sure I wanted to blow him, but he seemed to indicate that part would be optional.
So I Said Yes to a Blowjob Before I Could Say No
I said yes before I could think of all the reasons I should have said no. What if he wasn’t 18? What if he had an STI? What if he forced me to do things I didn’t want to do? But of course, I didn’t think about any of those things. Sexual arousal took the wheel and started driving. The rational me took a backseat, and I wasn’t in control anymore. Any shame I might have felt about being turned on was drowned out by my desire to have sex. We quickly made plans to meet in a community center bathroom close by that I knew would be reasonably private.
He Was About 18 Years Old and Not Bad Looking
I wasn’t there to go on a date with the guy; we were going to blow each other in a public restroom. I don’t exactly remember what he looked like. Tall. White. Glasses. Somewhat dorky. And I’d already made up my mind; I was going to blow him. I’d somehow mustered up the courage to get this far, so I figured I might as well go all the way. There might not be a next time (spoiler alert: there were several next times). You don’t quickly forget giving your first blowjob. What can I say? It was huge, and it hung to the left. I had no idea what to do with it. I’d seen enough porn to get a general idea, but it’s like learning to drive stick by watching Top Gear. You understand the principle, but you’re gonna suck at it for a while. So that’s what I did. And I was bad. I tried to put as much of it in my mouth as I could. I learned a lot that day about giving blowjobs, and how incredibly awkward they can be — more work than going down on a woman, that’s for sure. I had a newfound appreciation for the job at hand. After 5 or 10 minutes of fumbling around, he finally put me out of my misery. He told me he’d jerked off earlier that day, so he probably wasn’t going to come. He gave me an out, and I took it. I was over it anyway. My jaw was sore, and my neck hurt.
“You Want a Blowjob?” He Asked
I couldn’t say yes fast enough, and he made short work of me. Two minutes tops. I thought I’d have ‘feelings’ about having a guy give me a blowjob. The only feelings I had were of someone’s mouth on my cock. Something that never gets old, in my honest opinion. And that was it. That awkward post-blowjob goodbye, and I never saw him again.
So Now What?
I had come from a guy giving me a blowjob. I’d also given my first blowjob. Was this proof of repressed homosexuality, or was I discovering the benefits of bisexuality? Or was it simply more evidence that teenagers are horny and can get turned on by the slightest attention paid to their cocks? I don’t know. I still mainly thought about women. I wanted to date them, have sex with them, and eventually marry one and start a family. There was just the small issue of being turned on by naked guys.
More Confused Than Ever but OK With It
The thought of fooling around with guys turned me on, but I couldn’t picture myself kissing them. The idea of dating a man didn’t do it for me. I could see myself being sexual with them, but anything more than that turned me off. I didn’t want a romantic relationship with them, and I didn’t want their scruffy beards rubbing against my face. I still looked at photos of naked guys but didn’t have another experience with a guy for another ten years, maybe out of fear or perhaps just because I didn’t like it. I learned that I didn’t really like giving blowjobs (or at least not to guys with big dicks) and that I could get off by having a guy blow me. I learned that I could be scared of something but still go through with it.
In a way, I was proud that I’d done it. I didn’t tell anyone about it for years. I didn’t feel shame about what I’d done, but I wasn’t ready to share it. It was my little secret. No one at school found out, and I never told my parents. Nothing changed in my daily life. I still flirted with girls and started dating a few. I didn’t make any plans to blow more guys. And I didn’t dwell too much on the significance of that one blowjob. I wasn’t sure if it made me gay, bisexual, or just experimental. In a way, it doesn’t matter. My sex life is my sex life. I get to choose what I do with it and who I share it with.
And your sex life is your sex life. You get to do with it what you want. Have fun with it, and make mistakes. Try new things and do them more than once. Find people you can explore your sexuality with. Be safe and learn to talk about sexual health. Walk through fear and come out the other end with rich experiences. And then tell us all about them.
Originally posted 2017. Updated Oct 2020.
https://medium.com/sexography/you-can-be-straight-and-give-a-guy-a-blowjob-f14ae4f2ae84
['Shaun Galanos', 'Love Coach']
2021-01-04 18:29:52.089000+00:00
['LGBTQ', 'Pornography', 'Sexuality', 'Sex']
Nature is Crazy Good Help
And I have some for you
You’ve heard about how important it is to be in nature; it is calming, energizing, and a great stress reducer. Of course, nature’s impact on our lives and health goes far beyond that. Academic studies and the relatively new ecopsychology movement are good sources of information about the many benefits of being in nature. If you want to improve your life, knowing the value of nature is essential, so find out all you can.
But what if you want to be in nature and you’re stuck somewhere with no chance? On the 23rd floor in a small apartment? Many blocks from any park? Tired of walking the perimeter of your yard? Well, there are videos, slide shows, photos. So look through all that and find what helps you. Meanwhile, I can put you on the shore of my lake and give you the mental images of what I experience in a real place. You will probably find more value in experiencing a short piece more than once than a long piece once. So come with me in your head.
This morning at seven I went down to the lake and stood on the shore of our little bay in the dark. The temperature was 30 degrees, with no wind. Right there, on that little bit of level ground, I’m surrounded by white and red pine trees trying to be taller than each other. A large cedar right next to me is just being a tree — round and still. Across the bay I can see the haphazard lineup of old cabins on a hill. The hill means some of them have steps or stairs down to lake level, where right now there is a little cover of clean snow. No one is there. The rocks on the shore are an icy barrier in the dark, so I stayed on my level spot, in the leaves and pine needles, not trying to get on the ice, but close. No wind, no people, no light. Just me and the ice. The big, wide, four-inch-thick layer of frozen lake water. And I could hear that smooth, dark layer of ice making noises.
Yesterday it was popping and burping. It sounded musical. This morning in the dark I heard low, soft, watery growls — almost an animal sound. These sounds aren’t constant; you have to wait for them, but not long. It was dark, and there were no other sounds. Just the melody of the ice. Like a concert. Later in the winter there will be different sounds from the ice. It will creak and groan and snap, especially at night. But for now it’s my favorite song. I listened, standing still. Then I watched it get light.
https://medium.com/@lynnschommer/nature-is-crazy-good-help-7c003a02577c
['Lynn Schommer']
2021-03-12 17:12:34.579000+00:00
['Nature', 'Stress Management', 'Health', 'Lake Life', 'Guided Imagery']
Help a Woman Artist Today?
Women’s Gallery group, 1982
Tēnā koutou katoa (greetings to you all). The gender pay gap in the arts is an old story. From 1980 to 1984, the determined women pictured tried to change that, and they made a difference, thanks to government employment programmes run by the Labour Department. But forty years later, the latest figures show that women artists earn 21% less overall than men artists. That means they earn 79 cents for every dollar a man artist earns. If only arts-related income is counted, the gap is an extraordinary 45%: 55 cents for every dollar (Creative New Zealand/New Zealand on Air 2019). I can’t find information about the intersections of gender and ethnicity for artists, but I imagine they’re similar to those for the whole population’s average hourly earnings. (1)
Anecdotally, Covid has made this worse. And not one of the MCH relief packages for artists references the gender pay gap or attempts to address it. So there are now more women artists than ever who can’t afford to repair or replace their computers. They have to manage large projects on their phones. If they don’t live within walking distance of a public library with computers, or if libraries aren’t open at the only time they have available for their art work, they’re stuck. If library computers don’t have the programmes they need, they’re stuck. If they haven’t got smartphones, they’re truly stuck.
Have you upgraded your computer recently? Are you undecided about what to do with your still-in-excellent-condition old desktop or laptop computer or smartphone? Or do you know someone else with surplus equipment? If so, please consider contacting Spiral at spiralcollectives76 [at] gmail.com. We’ll find a welcoming home for that surplus. And you’ll have done a beautiful thing. (And if you’re a woman artist in need of some gear, we’d love to hear from you, too.)
Ngā mihi mō te tau hou! (Best wishes for the new year!)
(1) … Poet, activist, and lesbian feminist Heather McPherson founded Spiral in 1976. Anne Else’s Women Together project for the Ministry of Culture & Heritage provides key information and references on Spiral and/or the Women’s Gallery, and there’s more here at Spiral Collectives.
https://medium.com/spiral-collectives/help-a-woman-artist-today-bf85c6574712
[]
2021-01-01 20:23:28.580000+00:00
['Aotearoa', 'Poverty', 'Women Artists', 'Computers']
How to produce and consume Kafka data streams directly via Cypher with Streams Procedures
Leveraging Neo4j Streams — Part 3
This article is the third part of the Leveraging Neo4j Streams series (Part 1 is here, Part 2 is here). In it, I’ll show you how to bring Neo4j into your Apache Kafka flow by using the streams procedures available with Neo4j Streams. To demonstrate the integration, simplify it, and let you test the whole project by hand, I’ll use Apache Zeppelin, a notebook runner that lets you interact natively with Neo4j.
What Is a Neo4j Stored Procedure?
Neo4j 3.x introduced the concept of user-defined procedures and functions: custom implementations of certain functionality and/or business rules that can’t be (easily) expressed in Cypher itself. Neo4j provides a number of built-in procedures, and the APOC library adds another 450, covering all kinds of uses from data integration to graph refactoring.
What Are the Streams Procedures?
The Neo4j Streams project ships with two procedures:
streams.publish: allows custom message streaming from Neo4j to the configured environment by using the underlying configured Producer.
streams.consume: allows consuming messages from a given topic.
Set Up the Environment
In the following GitHub repo, you’ll find everything necessary to replicate what I’m presenting in this article. All you need to start is Docker; then you can spin up the stack by entering the directory and executing the following command from the terminal:
$ docker-compose up
This will start up the whole environment, which comprises:
Neo4j + Neo4j Streams module + APOC procedures
Apache Kafka
Apache Spark (not necessary for this article, but used in the previous two)
Apache Zeppelin
Going to Apache Zeppelin at http://localhost:8080, you’ll find in the directory Medium/Part 3 one notebook called “Streams Procedures”, which is the subject of this article.
streams.publish
This procedure allows custom message streaming from Neo4j to the configured environment by using the underlying configured Producer. It takes two arguments as input and returns nothing (it sends its payload asynchronously to the stream):
topic, type String: where the data will be published.
payload, type Object: what you want to stream.
Example:
CALL streams.publish('my-topic', 'Hello World from Neo4j!')
The message retrieved by the consumer is the following:
{"payload": "Hello World from Neo4j!"}
You can send any kind of data in the payload: nodes, relationships, paths, lists, maps, scalar values, and nested versions thereof. In the case of nodes and/or relationships, if the topic is defined in the patterns provided by the Change Data Capture (CDC) configuration, their properties will be filtered according to that configuration. (A short video showing the procedure in action appears at this point in the original article.)
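Since streams.consume is listed above but not demonstrated, here is a minimal Cypher sketch of the two procedures working together, assuming the stack above is running. Treat it as an illustration rather than code from the article: the Person node is hypothetical, and the {timeout: 5000} config map and the event record yielded by streams.consume follow the Neo4j Streams documentation as I recall it, so verify them against the version you run.
// Publish a (hypothetical) node to a Kafka topic; if 'my-topic' matches
// a CDC pattern, the node's properties are filtered as described above.
MATCH (p:Person {name: 'John'})
CALL streams.publish('my-topic', p)
RETURN p
// Read messages back from the same topic inside a Cypher query.
// The timeout (milliseconds, assumed from the Streams docs) bounds how
// long the consumer polls before the procedure returns what it received.
CALL streams.consume('my-topic', {timeout: 5000})
YIELD event
RETURN event.data
Because streams.publish returns nothing, it can sit in the middle of a larger query, while streams.consume yields one record per message, so the results can be filtered or aggregated with ordinary Cypher.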
https://medium.com/free-code-camp/how-to-produce-and-consume-data-streams-directly-via-cypher-with-streams-procedures-52cbc5f543f1
['Andrea Santurbano']
2019-05-13 08:30:52.831000+00:00
['Neo4j', 'Apache Kafka', 'Data', 'Tech', 'Streaming']