ELK deployment in kubernetes
This article describes the steps to deploy ELK components such as Elasticsearch, Kibana, APM Server, and Beats agents onto a Kubernetes cluster. Though this whole setup was tested successfully, necessary tuning must be done in order to make it production-ready.
Elastic Cloud on Kubernetes (ECK), built on the Kubernetes operator pattern, extends the basic Kubernetes orchestration capabilities to support the setup and management of Elasticsearch, Kibana, APM Server, Enterprise Search, and Beats on Kubernetes. With Elastic Cloud on Kubernetes we can manage critical operations related to ELK, such as:
· Managing and monitoring multiple clusters
· Scaling cluster capacity and storage
· Performing safe configuration changes
· Securing clusters and agents with TLS certificates
· Setting up hot-warm-cold architectures with availability zone/environment awareness
Deploying ECK on the Kubernetes cluster
Steps to install:
· Install the custom resource definitions and the operator with its RBAC rules (the exact kubectl commands depend on the ECK version; see the Elastic documentation).
· Monitor the operator logs:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
The above steps will create all the resources needed for the ECK operator. A dedicated namespace called elastic-system will be created with one ECK pod running in it. For further details, refer to the documentation.
Deploying Elasticsearch cluster nodes as StatefulSets
Elasticsearch needs persistent storage volumes to store its workloads. This feature is provided by Kubernetes StatefulSets. Another feature that justifies deploying Elasticsearch nodes as a StatefulSet is that pods created as StatefulSets are not interchangeable, and each maintains a persistent identifier across any rescheduling.
Pre-requisites for deploying the Elasticsearch cluster:
1. A load balancer should be configured so that the Elasticsearch HTTP service can use it, as it is going to be a LoadBalancer type of service.
2. A storage class which allows for volume expansion must be available.
Manifest file (a sketch follows below):
Deployment specifications:
1. This is an Elasticsearch kind of deployment, which will be managed by the ECK operator.
2. The name of the Elasticsearch cluster is "quickstart".
3. The Elasticsearch cluster version is 7.13.4.
4. The Elasticsearch service which will be receiving data from Beats and the APM server is of LoadBalancer type. Docs
5. It will be a 2-node Elasticsearch cluster, with each node having the master, data, ingest, and ml roles.
6. 4 GB of heap memory is allocated.
7. At least 16 GB of RAM is made available to the Elasticsearch cluster.
8. 500 GB of disk space is allocated to the Elasticsearch cluster using the storage class "essc", which allows volume expansion.
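The original manifest was embedded and did not survive extraction, so here is a minimal sketch of what a manifest matching specifications 1-8 could look like, assuming the standard ECK Elasticsearch custom resource fields; treat it as an illustration rather than the author's exact file:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.13.4
  http:
    service:
      spec:
        type: LoadBalancer        # receives data from Beats and the APM server
  nodeSets:
  - name: default
    count: 2                      # 2-node cluster
    config:
      node.roles: ["master", "data", "ingest", "ml"]
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms4g -Xmx4g"   # 4 GB heap
          resources:
            requests:
              memory: 16Gi           # at least 16 GB RAM
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 500Gi           # 500 GB of disk space
        storageClassName: essc       # storage class must allow volume expansion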
Docs
Steps to get cluster health and secrets:
· To get an overview of the Elasticsearch cluster's health, version and number of nodes:
kubectl get elasticsearch
· To see the pods running:
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
· To get the Elasticsearch HTTP service:
kubectl get service quickstart-es-http
· To get the Elasticsearch password:
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
· To see if the cluster is running:
curl -u "elastic:$PASSWORD" -k https://<load-balancer-externalIP>:9200
· To get the TLS certificate from the secret defined by ECK for accessing the Elasticsearch HTTP service (this certificate is useful for Beats agents accessing the Elasticsearch cluster from outside the Kubernetes cluster) Docs:
kubectl get secret quickstart-es-http-certs-public -o go-template='{{index .data "tls.crt" | base64decode }}'
Deploying a Kibana instance
Kibana can be deployed as a custom resource managed by ECK. Its upgrades, and its connectivity with Elasticsearch using basic authentication and SSL, will be taken care of by ECK.
Manifest file (a sketch follows below):
Deployment specifications:
1. The name of the Kibana instance is "quickstartkb".
2. The Kibana version is 7.13.4, which should be the same as the Elasticsearch version.
3. The name of the Elasticsearch cluster with which Kibana will connect, with the help of the ECK operator.
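Again, the embedded manifest was lost, so here is a minimal sketch consistent with specifications 1-3, assuming the standard ECK Kibana custom resource and the LoadBalancer service implied by the access steps below; the author's exact file may have differed:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstartkb
spec:
  version: 7.13.4                 # must match the Elasticsearch version
  count: 1                        # a single Kibana instance (assumed)
  elasticsearchRef:
    name: quickstart              # ECK wires up authentication and TLS to this cluster
  http:
    service:
      spec:
        type: LoadBalancer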
Steps to access Kibana:
· Get the service load balancer external IP using:
kubectl get service quickstartkb-kb-http
· Get the Kibana password, which is the same as the Elasticsearch password, using:
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
· Open https://<load-balancerIp>:5601 in the local browser and give "elastic" as the username and the password from above to access Kibana.
Beats Agents Configuration
The configuration described here is for Beats agents deployed in another Kubernetes cluster, so they do not come under ECK. The official manifest file for Kubernetes is used for each Beats agent. The Metricbeat configuration is described below; Heartbeat and Filebeat can be configured similarly. The autodiscover section needs to be uncommented in filebeat-kubernetes.yaml.
Metricbeat
Metricbeat will collect data related to the Kubernetes nodes, pods, containers, persistent volumes, services, stateful sets, daemonsets and replicasets when the kubernetes module is activated. The system module is enabled to get data related to CPU/memory/disk usage of the Kubernetes nodes. It will also load dashboards into Kibana if Kibana access credentials are provided.
Below are the steps to configure and deploy Metricbeat:
· Download metricbeat-kubernetes.yaml from this link.
· Create the "quickstart-es-http-certs-public" secret in the same namespace where the Metricbeat daemonset will be running, with the following command (tls.crt is extracted as described in the Elasticsearch deployment section):
kubectl create secret generic quickstart-es-http-certs-public --from-file=ca.crt=tls.crt --from-file=tls.crt=tls.crt
· Add the following configuration to the "volumes" specification of the daemonset to create a "certificate" volume, where the TLS certificate extracted from the "quickstart-es-http-certs-public" secret will be stored:
- name: certificate
  secret:
    secretName: quickstart-es-http-certs-public
· Mount the above created volume in the "volumeMounts" specification using:
- name: certificate
  mountPath: /home/<user>/certificate/ca.crt
  readOnly: true
  subPath: ca.crt
· Put the below configuration in the "metricbeat-daemonset-config" configmap, under the password setting of output.elasticsearch:
ssl.certificate_authorities:
  - /home/<user>/certificate/ca.crt
· Add the appropriate values to the "metricbeat" daemonset env:
- name: ELASTICSEARCH_HOST
  value: https://<IP>
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: <password>
· Deploy Metricbeat using the following command:
kubectl apply -f metricbeat-kubernetes.yaml
APM Server Configuration
The APM server collects the metrics gathered by APM agents attached to the applications to be monitored and ingests this data into Elasticsearch, where it can be visualized in Kibana's APM app. In this case ECK will operate the APM server and manage its connection with Kibana and Elasticsearch.
Manifest file (a sketch follows below):
Specifications:
1. The name of the APM server.
2. The APM HTTP service type is LoadBalancer, to make it accessible outside the cluster.
3. TLS security for the APM HTTP service is disabled, as a token can be used for authenticating the APM agent.
4. The name of the Elasticsearch cluster to connect to.
5. The name of the Kibana instance to connect to.
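The APM server manifest was also embedded and lost; a minimal sketch matching specifications 1-5, assuming the standard ECK ApmServer custom resource, could look like this:

apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: quickstartapm
spec:
  version: 7.13.4
  count: 1
  http:
    service:
      spec:
        type: LoadBalancer        # reachable by agents outside the cluster
    tls:
      selfSignedCertificate:
        disabled: true            # the secret token authenticates agents instead
  elasticsearchRef:
    name: quickstart
  kibanaRef:
    name: quickstartkb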
Important Secrets and Service
· quickstartapm-apm-token: this secret stores the token an APM agent needs to connect to the APM server. The token can be extracted with the following command:
kubectl get secret quickstartapm-apm-token -o go-template='{{index .data "secret-token" | base64decode }}'
· The following command shows the APM server HTTP service, which runs on port 8200:
kubectl get svc
APM agent configuration
The APM agent can be deployed as part of the application deployment without affecting the application's image. Based on the service name, the APM agent can be configured from the Kibana APM app. Following are the steps to configure the application's deployment file to attach the APM agent to it. A sample manifest for an application with the APM agent attached can be found here.
Specifications:
1. The APM server URL to send the APM agent data to:
- name: ELASTIC_APM_SERVER_URL
  value: "http://<apm-server-apm-http>:8200"
2. The name of the application to which the APM agent is attached:
- name: ELASTIC_APM_SERVICE_NAME
  value: "petclinic"
3. The application packages:
- name: ELASTIC_APM_APPLICATION_PACKAGES
  value: "org.springframework.samples.petclinic"
4. The name of the environment where the application is deployed:
- name: ELASTIC_APM_ENVIRONMENT
  value: test
5. The secret in which the APM server token is embedded:
- name: ELASTIC_APM_SECRET_TOKEN
  valueFrom:
    secretKeyRef:
      name: quickstartapm-apm-token
      key: secret-token
This token can be created using the following command:
kubectl create secret generic quickstartapm-apm-token --from-literal=secret-token=<token> -n <namespace>
6. The APM agent version:
- name: elastic-java-agent
  image: docker.elastic.co/observability/apm-agent-java:1.25.0
Conclusion
Deploying the ECK operator, provided by Elastic and supported by Kubernetes, is one of the simplest ways to deploy ELK components. It also ensures TLS-layered security between the ELK components, and between the Elasticsearch nodes, thereby enhancing data security. Storage requirement changes for the Elasticsearch cluster can be made easily by changing a few values in the deployment file and redeploying it. Similarly, the necessary modules can be activated just by changing the manifest files of the ELK components.
https://medium.com/@ak20102763/elk-deployment-in-kubernetes-a7a41fb5acbd
['Akash Tangde']
2021-09-08 06:21:25.191000+00:00
['Elasticsearch', 'Eck', 'Beats', 'Kubernetes', 'Kibana']
An Unemployment Chart that Practices Data Feminism
An Unemployment Chart that Practices Data Feminism
Breaking down a "data visceralization" with principles from Data Feminism, a book by Catherine D'Ignazio and Lauren Klein.
A screenshot from ProPublica's story, "What Coronavirus Job Losses Reveal About Racism in America"
As articulated by authors Catherine D'Ignazio and Lauren Klein, Data Feminism is "a way of thinking about data, both their uses and their limits, that is informed by direct experience, by a commitment to action, and by intersectional feminist thought." It has seven core principles:
Examine power
Challenge power
Elevate emotion and embodiment
Rethink binaries and hierarchies
Embrace pluralism
Consider context
Make labor visible
In this post, I will illustrate some principles from Data Feminism by breaking down this unemployment chart recently published by ProPublica.
Challenging Power Knowledge about Unemployment
To apply the first two core principles from Data Feminism (examine power and challenge power), we first need to understand what people in power have to say about unemployment right now. While the US is currently in its worst economic downturn since the Great Depression, with the worst still yet to come, recent coverage of US unemployment has been suspiciously optimistic and short-sighted. Stories tend to emphasize slight decreases in the overall unemployment rate, newly added jobs, and hiring activity. These narratives obscure the economic crises currently experienced by the most vulnerable in society. The unemployment rate is notoriously uninformative, in large part because the denominator only includes people who filed for unemployment. That is why almost half of the US population is without a job, even though the unemployment rate is only around 11%. The overall unemployment rate poses as what D'Ignazio and Klein call a "rational, scientific, objective viewpoint from a mythical, imaginary, impossible standpoint." A number like 11% sounds empirical, when in reality the choice of metric is subjective. Or, as explained in chapter 3 of Data Feminism, "the view from nowhere is always a view from somewhere." So where is this view that highlights the 11% metric, and who does it benefit? A view that highlights the unemployment rate is primarily a view from power. It seeks to reassure the wealthy that everything is recovering, stabilizing, and under control, driving the kind of optimism needed to fuel the stock market (half of which is owned by the richest 1 percent of the US, as Robert Reich points out). Here are some numbers that are ignored and obscured in the overall unemployment rate:
How many people without jobs do not know how to file for unemployment?
How does the unemployment rate differ across intersectional demographic groups, for example, white men versus black women?
How many people without jobs are affected by mental illness and unable to file for unemployment?
How many people are "employed" by unstable temporary jobs?
How many people are "employed" but still rely on food stamps, like the essential workers at Mountaire (one of the country's largest chicken packing plants)?
"Counting is power," as the authors put it, and the principles of Data Feminism encourage us to ask: why are crucial numbers like these not being counted?
What are the politics of what is being counted? Why are people in power so quick to focus on the overall unemployment rate? The subjective decision to focus on overall unemployment represents a form of "power knowledge," which is similar to propaganda. Power knowledge supports narratives that help institutions and people in power, rather than pursuing truth and liberation. As a counterexample, let's look at a "data visceralization" that conveys more truth about unemployment in the US.
Data Visceralization for Unemployment
Just yesterday, ProPublica published a story by Lena V. Groeger, titled "What Coronavirus Job Losses Reveal About Racism in America." Here is what the centerpiece chart looks like:
A chart from ProPublica that practices data feminism
I noticed three main aspects of this graph that practice Data Feminism: (1) the inclusion of many lines to convey pluralism, (2) the use of inverted axes to leverage embodied, intuitive perception, and (3) an interactive component that captures intersectionality.
Pluralism
It is clear even from a brief viewing that this chart does not claim a single "rational, scientific, objective viewpoint," but rather includes unemployment trends from a variety of different groups. Plotting all these lines on the same graph shows the range and variety of different groups experiencing unemployment. This design choice relates closely to one of the core principles from Data Feminism: embrace pluralism. Pluralism simply means that "when people make knowledge, they do so from a particular standpoint," and that "all knowledge is partial." Thus, the 11% overall unemployment rate, which has been emphasized in other stories, takes on a whole new meaning in this chart: the chart situates the 11% as merely one number in a wide range of unemployment metrics, showing that the national rate tells a very small part of the story. It does not ignore or exclude the overall rate; instead, it simply frames it within the bigger picture. Presenting each line as a different group of people helps contextualize — and to some extent subvert — the meaning of the overall unemployment rate. It shows that the 11% metric only represents partial knowledge.
Embodiment
I noticed the inverted y-axis as a subtle but important embodiment aspect of this chart. The author was inspired by a similar 2009 New York Times article about unemployment during the recession, which has been archived, but looked like this based on a screenshot from the Times:
In the New York Times chart that inspired the ProPublica chart, the y-axis was not inverted, forcing readers to make a cognitive switch that higher lines mean worse (higher) unemployment.
In the Times' chart, high unemployment is higher, but in ProPublica's chart, high unemployment is depicted as lower. This is a simple way of leveraging our embodied intuition that "up is good" and "down is bad," which is helpful for the same reason that ceremonies at the Olympics feature the gold medal winner highest and the bronze medalist lowest. This "high ground" symbolism is almost universal, which may be one reason why Bong Joon-ho uses it as an important symbol in Parasite, associating wealth with literal physical elevation that protects from flooding and other hardships. Flipping the y-axis places white workers "above" black and latinx workers, which is an accurate depiction of their higher prosperity.
While some readers could make the cognitive switch on the Times’ chart (“white workers are lower, which means lower unemployment, which is better”), Groeger ensures that we intuitively feel which groups are privileged and which are oppressed. This utilization of intuition and embodiment also relates to the intersectional aspect of the chart. Intersectionality One of the most important aspects of the chart is its depiction of intersectionality. If you look at this Wall Street Journal article about unemployment, you will find a failure to account for intersectionality: The Wall Street Journal charts do not account for intersectionality, and thus obscure important disparities. We can compare black workers versus white workers, or men versus women, but what about white men versus black women? This is what’s meant by intersectionality. The term has been somewhat popularized, but here is a helpful definition from the authors: “The idea that we must take into account not only gender but also race, class, sexuality, and other aspects of identity in order to fully understand and resist how power operates to maintain an unjust status quo.” While I was previously familiar with the concept, I learned from the book that it came from a court case called DeGraffenreid v. General Motors. In the case, a black woman sued GM for hiring discrimination, but the judge threw out the case because GM employed women and black people. But as critical race theorist Kimberlé Williams Crenshaw noticed and articulated, the employed women were all white, and the employed black people were all men. In other words, the discrimination was specifically toward DeGraffenreid’s compound (i.e. intersectional) identity of being a black woman. ProPublica’s chart captures intersectionality very well, and the interactive component makes it even more clear. For example, I can see that the unemployment rate for white men with a college degree (a group I belong to) sits at the top of the chart, as one of the most privileged, least affected groups. This is the kind of important observation that is impossible with the charts in the WSJ article. White men with a college degree enjoy an even greater employment advantage over other demographic groups, compared to pre-pandemic levels. In summary, the ProPublica chart practices Data Feminism in at least three ways: (1) embracing pluralism through multiple lines, (2) leveraging embodied intuition through the inverted y-axis, and (3) highlighting intersectionality through the interactive components. Through these three aspects and more, the chart challenges the power knowledge narrative of a stabilizing economy, and instead tells a more true story of oppressed groups who are reeling from the economic effects of the pandemic.
https://medium.com/an-injustice/an-unemployment-chart-that-practices-data-feminism-978dd3519b56
['Jack Bandy']
2020-07-22 14:07:19.643000+00:00
['Feminism', 'Data Visualization', 'Data Science', 'Intersectionality', 'Unemployment']
How to encrypt thumb drive or hard drives for protection?
A USB stick, flash drive, thumb drive, flash stick, storage USB and many more names: we know what it is. That little thing you stick into the side of your computer for an extra eight to thirty-two GB of space. Something you can keep with you at all times on your key chain to keep it safe. Others use portable hard drives the same way, to store something so vital that they do not trust the same data to be physically safe in their computer. In many cases, we have crucial documents stored within, or a list of all the passwords we have. Important pictures and work documents. Legal documents in some cases. Sure we have a backup somewhere, but you need something more secure and convenient. Something physical that you can plug in and access with ease. Option: BitLocker BitLocker is an excellent option. The only catch: you need to have Windows 10 Professional or Enterprise to encrypt your storage device with BitLocker. The good thing is, you do not need Windows 10 to open and use it afterwards. You can still mount and open it on a Mac! It is, after all, a straightforward piece of software, but with decent encryption. We always recommend you keep the secret recovery key somewhere safe in case you forget your password. And yes, once you protect a drive with BitLocker, you need the password to access the files inside. So at least you know that if someone else has your device, they cannot use it, in most cases of course. Bad practices Many online resources recommend creating a partition within your storage device, thumb or hard drive. They insist that you should partition the drive and encrypt only one partition, which lets you install your encryption software on the primary non-encrypted partition so you can run it from your storage device and decrypt the important information in the encrypted partition. That is not much different from leaving your house key under the doormat. We always recommend keeping the master copy of your relevant documents somewhere safe, in the cloud or on a computer with no internet access. Your portable USB drive should only house critical information and should always be expendable. Meaning that if you lose the stick, you lose it; you buy another one and encrypt it all over again. That is good practice. Please adopt this measure if the information and data are that important to you. Free Open Source: VeraCrypt Now, of course, we never leave you with just one option. This is a fantastic piece of French software. It works on Windows, macOS and Linux. So why even bother recommending BitLocker? Because it is easy, much more comfortable and straightforward to use, if we are honest. So if you can use BitLocker, do use it. VeraCrypt is still a much better option and much more secure. In short, it is the best out there, in our opinion. It is tough to break the encryption here, and many options are given, such as damage protection, recovery options, and multiple encryption algorithms to choose from. VeraCrypt is a godsend if you are geeks like us. This is what we use. We swear by it. In fact, in the hands of a novice, this would be much better encryption than BitLocker can ever hope to be. Is there an easier way? Of course, there is! Would we let you down? We understand the laziness; it is real, and it is true. It sticks to our soul like a sore thumb, and sometimes we want something physical. Have a look at this fantastic solution by Toshiba.
It is a USB stick with a combination lock, which is unlocked after keying in the correct combination on the USB stick itself. Another point to take note of: as marvellous an idea as your birthday sounds, your mobile number is much better. I was saying. Right? We love this new combination USB drive so much; we use it. The only downside is the price. They are not cheap, but they work, and they are secure. And the great thing is, there is a good chance no one is able to break the code on these drives. So put all the information you want on it and relax. Yes, this is a way better option than all those other options we gave you. Thank you again for reading, and we hope you enjoyed yourselves today. Until next time, this is IT Block saying we love you all.
https://medium.com/@itblocksg/how-to-encrypt-thumb-drive-or-hard-drives-for-protection-6ab41c4b2ac5
['It Block Pte. Ltd.']
2020-03-12 00:58:03.910000+00:00
['Tech', 'Encryption', 'DIY', 'Learning', 'Storage']
A Complete Beginners Guide to Installing a Lightning Node on Linux (2021 Edition)
Configuring The lnd.conf File
We're going to create a lnd.conf file and copy everything below into it. Before we do that though, let's cover what each setting is doing, and the changes we'll need to make:

## LND Settings
# Lets LND know to run on top of Bitcoin (as opposed to Litecoin)
bitcoin.active=true
bitcoin.mainnet=true
# Lets LND know you are running Bitcoin Core (not btcd or Neutrino)
bitcoin.node=bitcoind

## Bitcoind Settings
# Tells LND what User/Pass to use to RPC to the Bitcoin node
bitcoind.rpcuser=PICK-A-USERNAME
bitcoind.rpcpass=PICK-A-PASSWORD
# Allows LND & Bitcoin Core to communicate via ZeroMQ
bitcoind.zmqpubrawblock=tcp://127.0.0.1:28332
bitcoind.zmqpubrawtx=tcp://127.0.0.1:28333

## Zap Settings
# Tells LND to listen on all of your computer's interfaces
# This could alternatively be set to your router's subnet IP
tlsextraip=0.0.0.0
# Tells LND where to listen for RPC messages
# This could also be set to your router's subnet IP
rpclisten=0.0.0.0:10009

LND Settings: These are fairly straightforward settings to explain. LND needs to know a few things before launching: Whether it's going to run on top of Bitcoin or Litecoin: bitcoin.active=true Whether to use the main network, or the test network: bitcoin.mainnet=true What kind of Bitcoin client it's going to connect to: bitcoin.node=bitcoind
Bitcoind Settings: LND will communicate with our Bitcoin Core node via RPC, and via ZeroMQ. Both the lnd.conf & bitcoin.conf will need to be configured for this, but we will cover editing the bitcoin.conf file after this (a sketch of the matching entries follows below). The only configuration RPC needs is setting a username & password. This is unrelated to your operating system's username & password. LND needs to know what that username & password is. Replace the text next to bitcoind.rpcuser= & bitcoind.rpcpass= with whatever you want, just make sure it's reasonably secure.
To configure ZeroMQ, we just need to specify where to send and listen for those messages, and over what ports. No changes to the config text we're going to copy are necessary for this.
Zap Settings: If you're going to install the Zap mobile wallet, you'll need to configure LND so you can connect via LND's gRPC interface. (This will not be the same for the Tor guide. Over Tor, Zap will use LND's REST interface. Do not worry about this right now.) tlsextraip= allows you to set your router's subnet IP address so LND listens for connections coming from your router. LND uses a TLS certificate to manage these connections. By default LND will create a TLS certificate that only allows connections from the same computer that LND is running on, so we need to specify where else LND should listen for connections before the TLS certificate is created. Connections from Zap on your phone will be over the Internet, so your phone will reach your router first. Then your router will have to forward that connection to your computer running LND. Everyone's router subnet varies, so using 0.0.0.0 lets LND listen on all possible subnets and helps keep things simple for this tutorial. It works and you don't need to change it, but you could.
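The guide covers bitcoin.conf in a later section that is not included here; for orientation, a minimal sketch of the matching Bitcoin Core settings would look something like this, assuming the same credentials and ZeroMQ ports as the lnd.conf above:

# bitcoin.conf (sketch; values must mirror the lnd.conf)
# Accept RPC connections so LND can talk to the node
server=1
# Same credentials as bitcoind.rpcuser / bitcoind.rpcpass above
rpcuser=PICK-A-USERNAME
rpcpassword=PICK-A-PASSWORD
# ZeroMQ endpoints LND subscribes to (ports match the lnd.conf)
zmqpubrawblock=tcp://127.0.0.1:28332
zmqpubrawtx=tcp://127.0.0.1:28333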
https://medium.com/@stopanddecrypt/a-complete-beginners-guide-to-installing-a-lightning-node-on-linux-2021-edition-ece227cfc35d
[]
2021-02-23 17:21:44.700000+00:00
['Full Node', 'Linux', 'Lightning Network', 'Bitcoin', 'Cryptocurrency']
Intro — PIZZA Vote System. PIZZA vote system is a secondary voting…
Intro — PIZZA Vote System
PIZZA Vote System
Recently, the Qiuhe project of DFS has drawn a lot of attention from the community. It consists of 3 different functions: delegated voting, donation and mining. It aims to create a self-reform of the EOS ecology and make EOS great again! The DFS delegated voting system is the part we are going to talk about, in order to introduce you to our PIZZA vote system.
DFS Vote
What is the DFS delegated voting system? It is an enhanced version of the EOS node voting delegation system. EOS holders delegate the voting powers of their EOS to the voting proxy account of DFS, to obtain mining qualifications and mining weights corresponding to the number of delegated votes. DFS holders, through the DFS Vote voting interface, decide by referendum which nodes the DFS voting proxy account will vote for. Some DFS tokens flow to other platforms through users' daily operations, such as exchanges and lending platforms. If tokens on these platforms are deposited into the DSS system (DFS Bank), then they carry the same voting powers. Currently, the total amount of DFS tokens on the exchanges that list DFS is less than 10K, while the amount on PIZZALEND — the decentralized lending platform developed by PIZZA — is quite large.
DFS lending status on PIZZALEND at article publish time
Not long ago, PIZZA connected to the DSS system, automatically depositing the DFS tokens that lenders deposited on PIZZALEND and activated as collateral, to obtain DSS earnings for lenders. Thus these DSS-deposited DFS tokens have voting powers. From the DSS interface, we can see that PIZZA is the top 1 DFS holder, with 77,423 DFS tokens deposited, which means that PIZZA holds a major voting power over BPs as well as over mining pools in the DFS voting system.
DFS Bank
You may think that PIZZA will feather its own nest. It will NOT! Instead, PIZZA has developed a vote system which benefits all PIZZA holders! PIZZA holders will decide how PIZZA's voting powers on DFS are used to vote for BPs and mining pools. It means that "PIZZA holders will decide how DFS votes, and DFS will decide how EOS votes". PIZZA holders become the secondary proxy electoral college. Honestly, it is a new and meaningful experiment!
PIZZA Vote System
PIZZA Vote Rule
DFS gives different voting weights based on liquidity provision or DSS deposits, and PIZZA has the same voting design:
Each "PIZZA+EOS" LP Token = 10 Voting Powers
Each PIZZA Token = 1 Voting Power
LP tokens include those of two platforms, DFS (defis.network) and BOX (defibox.io). DFS LP tokens will be mapped directly, while BOX LP tokens need to be deposited into PIZZALEND and activated as collateral to gain voting rights. PIZZA tokens deposited in PIZZALEND have the same rights. Unlike the DFS voting system, in the PIZZA vote system each voter has 3 votes at most. Finally, PIZZA will vote for the top 5 BPs in the DFS system according to the ranking.
PIZZA Vote Rule
How to participate in PIZZA vote?
https://medium.com/@pizza-finance/intro-pizza-vote-system-3a9a041428c9
['Pizza.Live - Eos Defi']
2020-11-20 09:50:34.447000+00:00
['Pizza', 'Eos', 'Defi', 'Lending Platform', 'Vote']
How E-Commerce Giants Battle It Out for Your Purchase
Source: Oxylabs' design team
There is an invisible war taking place in the e-commerce world. Made up of numerous battles fought by soldiers, it is waged by major players competing for dominance in the highly competitive e-commerce environment. The purpose is clear: to post the lowest price and make the sale. While people don't realize that this war is taking place, it's still there, and it is getting more brutal as time goes on. My company — Oxylabs — provides the proxies or "soldiers", plus the strategic tools that help businesses win the war. This article is going to give you an inside view of the battles taking place, along with techniques to overcome some of the common challenges.
Web Scraping: The Battle for Data
Spies are valuable players in any war, as they provide inside information on the opponent's activities. When it comes to e-commerce, the "spies" come in the form of bots that aim to obtain data on an opponent's prices and inventory. This intelligence is critical to forming an overall successful sales strategy. That data is extracted through web scraping activities that aim to obtain as much quality data as possible from all opponents. Data, however, is valuable intelligence, and most sites do not want to give it up easily. Below are some of the most common challenges faced by scrapers in the battle for high-quality data:
Challenge 1: IP Blocking (Defense Wall)
Since ancient times, walls were built around cities to block out invaders. Websites use the same tactic today by blocking out web scrapers through IP "blocks". Many online stores that use web scraping attempt to extract pricing and additional product information from hundreds (if not thousands) of products at once. These information requests are often recognized by the server as an "attack" and result in bans on the IP addresses (unique identification numbers assigned to each device) as a defense measure. This is a type of "wall" a target site can put up to block scraping activity. Another battle tactic is to allow the IP address access to the site but to display inaccurate data. The solution for all scenarios is to prevent the target site from seeing the IP address in the first place. This requires the use of proxies — or "soldiers" — that mimic "human" behaviour. Each proxy has its own IP address, and the server cannot trace them back to the source organization doing the data extraction.
Source: Oxylabs' design team
There are two types of proxies — residential and data center proxies. The choice of proxy type depends on the complexity of the website and the strategy being used; a rough sketch of the basic idea follows below.
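To make the proxy idea concrete, here is a minimal sketch of rotating requests through a proxy pool in Python. The pool endpoints and target URLs are hypothetical placeholders, not Oxylabs products or a real configuration:

import random
import requests

# Hypothetical pool; each entry is a distinct exit IP address.
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
    "http://user:pass@proxy3.example.com:8080",
]

def fetch(url: str) -> requests.Response:
    # Pick a random proxy so consecutive requests do not
    # arrive at the target site from the same IP address.
    proxy = random.choice(PROXY_POOL)
    return requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "Mozilla/5.0"},  # look like a normal browser
        timeout=10,
    )

for product_url in ["https://shop.example.com/item/1",
                    "https://shop.example.com/item/2"]:
    print(product_url, fetch(product_url).status_code)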
Challenge 2: Complex/Changing Website Structure (Foreign Battle Terrain)
Fighting on enemy territory is not an easy task, due to the home advantage leveraged by the defensive army. The challenges faced by an invading army are especially difficult because it is simultaneously discovering the territory while engaged in the battle. This is analogous to the terrain faced by web scrapers. Each website has a different terrain in the form of its HTML structure. Every script must adapt itself to each new site in order to find and extract the information required. For the physical wars of the past, the wisdom of the generals proved invaluable when advancing on enemy territory. In the same way, the skills and knowledge of scripting experts are invaluable when targeting sites for data extraction. Digital terrain, unlike physical terrain on earth, can also change on a moment's notice. Oxylabs' adaptive parser, currently in beta phase, is one of the newest features of our Next-Gen Residential Proxies solution. Soon to become a weapon of choice, this AI- and ML-enhanced HTML parser can extract intelligence from rapidly changing dynamic layouts, including the title, regular price, sale price, description, image URLs, product IDs, page URLs, and much more.
Challenge 3: Extracting Data in Real Time (Battle Timing)
Quick timing is essential to many types of battle strategy, and waiting too long often results in defeat. This holds true in the lightning-fast e-commerce world, where a small amount of time makes a big difference in winning or losing a sale. The fastest mover most often wins. Since prices can change on a minute-by-minute basis, businesses must stay on top of their competitors' moves. An effective strategy involves strategic maneuvers using tools to extract data quickly in real time, along with the use of multiple proxy solutions so data requests appear organic. Oxylabs' Real-Time Crawler is customized to access data from e-commerce sites, along with empowering businesses to get structured data in real time from leading search engines.
Source: Oxylabs' design team
Ethical Web Scraping
It is crucial to understand that web scraping can be used positively. There are transparent ways to gather the required public data and drive businesses forward. Here are some guidelines to follow to keep the playing field fair for those who gather data and the websites that provide it:
Only scrape publicly-available web pages.
Ensure that the data is requested at a fair rate and doesn't compromise the webserver.
Respect the data obtained and any privacy issues relevant to the source website.
Study the target website's legal documents to determine whether you can legally accept their terms of service and, if you do, whether you will be able to comply with them.
A Final Word
Few people realize the war taking place behind the low price they see on their screen. That war is composed of multiple scraping battles for product intelligence, fought by proxies circumventing server security measures for access to information. Strategies for winning the battles come in the form of sophisticated data extraction techniques that use proxies along with scraping tools. As the invisible war for data continues to accelerate, it appears that the biggest winners of all are the consumers who benefit from the low prices they see on their screens.
https://medium.com/swlh/how-e-commerce-giants-battle-it-out-for-your-purchase-6e0e2bd92d7e
['Julius Cerniauskas']
2020-11-30 13:34:18.256000+00:00
['Pricing Strategy', 'Ecommerce', 'Entrepreneurship', 'Proxy Service', 'Web Scraping']
How I put together a team as an A&R
J Dvnl on performing at DROM in New York City
When J Dvniel approached me about his vision for his very first performance ever, the first thing I asked him was: "What is it that you would like to see on stage?" A lot of times, as producers working with an artist, we get into the habit of sending out beats or demos based on our vision for the artist. We should first be asking the artist what it is that they would like to accomplish with the record. The reason being, a lot of people already have an idea of what they would like when they are performing, either on stage or recording a song. Put the artist's vision first before sending anything out. Now that I've gotten that out of the way, let's get back to J Dvniel's performance. When he told me what it was that he wanted, I instantly knew who to call to help bring his vision to life. I called two female vocalists, an alto and a soprano, and J called his very own vocal coach to be his background singers. During the very first rehearsal, the only thing that I was focused on was how well everyone worked together. The reason being, it doesn't matter how well everyone can sing if they can't get along. You can have the greatest singers of all time, but if there is bad energy amongst them, it will translate on stage. So the whole point of this post is to get everyone to think a bit bigger than what's happening at the moment. See how well the people that you are bringing in, whether it's the engineer, vocalist, or even the producer, can work together. Listen more, watch more, receive feedback more, and most importantly, adjust.
https://medium.com/@hanznobe/how-i-put-together-a-team-as-an-a-r-ccf4b0d617f
['Hänz Nobe']
2020-11-20 02:38:46.219000+00:00
['Music Business', 'Rnb', 'Music Industry', 'AR', 'Music Producer']
<>CALL+2349137205755<>JOIN YOUNGSTARS BROTHERHOOD TO MAKE WEALTH IN 5 DAYS WITHOUT HUMAN SACRIFICE.
<>CALL+2349137205755<>JOIN YOUNGSTARS BROTHERHOOD TO MAKE WEALTH IN 5 DAYS WITHOUT HUMAN SACRIFICE. The youngstars brotherhood is an elite organization of world leaders that operates above geographical and political restrictions for the benefit of the human species. While our daily operations remain confidential for the safety of our members, we strive to create a better understanding between us and those we have been entrusted to protect. I WANT TO BE A MILLIONAIRE IS THE QUESTION PEOPLE ALWAYS ASK, BUT WHY WASTE ALL YOUR LIFE COMPLAINING WHILE THE OPPORTUNITY OF BEING AMONG THE RICH, FAMOUS, GREAT, SUCCESSFUL, WEALTHY, POWERFUL AND STRONG PEOPLE IN THE SOCIETY IS HERE AT YOUR DOOR STEP. CALL THE WISE ONE TODAY ON THIS NUMBER +2349137205755 OR EMAIL US [email protected]. AND HAVE THE TESTIMONY TO SHARE ALL YOUR LIFE.
https://medium.com/@youngstarsoccult/call-2349137205755-join-youngstars-brotherhood-to-make-wealth-in-5-days-without-human-sacrifice-fd662b10d277
['Master Victor']
2020-12-27 22:36:09.535000+00:00
['Fame', 'Money', 'Power', 'Connection', 'Protection']
The Bullwhip Effect
Let's take an example to understand the bullwhip effect. Suppose you, as an end customer, go to a retail store near your house to buy a packet of detergent. As an end customer you may buy one pack, but to fulfill the demand of customers like you, the store has to keep a stock of around 1,000 packs. Similarly, the retail store will order stock from a wholesaler, who may have to keep a stock of 50,000 packs, and as we keep going up the supply chain, the manufacturer may ultimately have to keep an inventory of more than 5,000,000 packs readily available to be shipped. Now suppose you decide to buy two packs instead of one. This creates a spike in demand from the end customer, and this demand spike gets magnified at every stage of the supply chain, so that by the time it reaches the manufacturer, the spike is so huge that the manufacturer may end up producing more than the actual demand required by the end customer. This creates a problem, as the manufacturer does not have clear visibility of end-customer demand and cannot forecast how much production is required to meet the demand without overfilling the warehouses. Because of this, planning as well as operations management for manufacturing becomes really difficult. This will at the very least reduce profit margins, if not incur losses, for non-perishable products. But just replace detergent in the above example with fruit juice boxes: now at each level of the supply chain you cannot hold a huge stock of the product, as we have a smaller time window between manufacturing and consumption by the end customer. Suddenly this magnifies the problem, and optimization becomes really critical for the manufacturer. The short simulation below illustrates how a single spike grows at each tier.
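This amplification is easy to see in a toy simulation. Below is a deliberately simplified Python sketch (not from the original article): each tier orders what it just observed plus a safety buffer proportional to the latest change in demand, and a single extra-pack spike grows at every step up the chain.

def tier_orders(observed, buffer=0.5):
    # Each tier orders what it saw, plus a safety buffer
    # proportional to the latest change in demand.
    orders, prev = [], observed[0]
    for d in observed:
        orders.append(max(0.0, d + buffer * (d - prev)))
        prev = d
    return orders

consumer = [100.0] * 10
consumer[5] = 120.0  # one period in which customers buy two packs instead of one

demand = consumer
for tier in ["store", "wholesaler", "manufacturer"]:
    demand = tier_orders(demand)
    print(tier, "peak order:", round(max(demand)))
# The peak grows at every tier: 130 -> 145 -> 168, all from a 20-pack spike.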
https://medium.com/@dipankarsonwane/the-bullwhip-effect-980ab777f72c
['Dipankar Sonwane']
2021-01-21 06:31:29.730000+00:00
['Supply Chain Solutions', 'Operations Management', 'Forecasting', 'Optimization', 'Bullwhip Effect']
I Want to Be a Dog
I Want to Be a Dog Everyone has something they want to be when they grow up. Maybe you want to be a doctor or a pilot. Maybe, you want to be a pop star, or just be pretty on the Internet! I was thinking about this recently because what else am I supposed to do when I’m stuck inside all the time and woefully unemployed without any career trajectory? Swaddled in a new comforter, I finally realized the answer — I want to be a dog. Photo by Nick Fewings on Unsplash You might be thinking — Stella, baby… what the hell is this? Or, you might even agree with me. If you’re trans, you might understand what I’m getting at when I say this. Even though I’m lucky that most of my family respects my pronouns or just doesn’t refer to me in the third person, I have to take care of an elderly relative who expressly refers to me as “he,” “him,” and all the fixins, even after both I and my mom constantly correct her. What connection do these two things have, you might ask? Well, let me enlighten you (I promise this isn’t some otherkin situation). For my old, old, old relative, dogs are boys and cats are girls thanks to some weird, pseudo-sexual connection between cats’ “sensuality” and dogs’… I don’t even know what the other reason is (the mid-1900s were a trip). I have one female dog and one male dog, and she often calls them “guys” or “boys” by mistake. She sometimes doesn’t catch herself, but when she does, she says something like “Boys, I said, how silly- boy and girl,” or just uses their names. Even though my mom respects my identity, she still slips up occasionally, but never with the dogs. She corrects other people’s misgendering of them almost as vehemently as she does when I get misgendered. This is when it really hit me: Cisgender people care more about correctly gendering a dog than a trans person. If you’re trans, you’ve probably seen, felt, and joked about this plenty of times. I do it myself all the time and have even aired it as a frustration when my relatives used to mess up my gender. It’s so odd — whenever someone needs to adjust to new pronouns, they huff and haw or get flustered and apologize, tripping over their words. They say “Oh, I just knew you as he, and it’s so hard for me to get used to it,” or “This is just so new to me, sorry, sorry, sorry-” Yet, lo and behold, if someone accidentally calls a dog the wrong pronoun, it’s like the person just spat on their mother’s grave. Cis people will, quite literally, get more upset over someone misgendering their dog than a trans person. Photo by Icons8 Team on Unsplash For the dogs, it’s something indisputable. They are what they are. Penis? Boy. Vagina? Girl. As for my gender? Not so much. Obviously, to us, sex and gender are two completely different things. For a lot of people that don’t understand trans things, they’re completely inextricable. That’s why there are so many people, even those with good intentions, who will ask trans women “Oh, so you’re, like, a man?” or “You were a man?? I can’t even tell!” Unfortunately, while cis people get the right and luxury of having a gender that’s completely non-debatable, there will always be some amount of “original gender” left in us, according to some less savvy cis people. We will never really be what we say we are, because our “biology” is the truth. I’m not even going to get into all the reasons why that’s wrong here. To me, it feels like I’m constantly having to assert my gender to “prove” to people that I am what I am. That mental exertion is exhausting. 
Why should I need to prove that I deserve basic human respect just because I know that I’m something that isn’t reflected in basic understandings of biology and psychology? So, that’s why I want to be a dog. I don’t have to worry about getting a job (because it’s much more difficult for an LGBTQ person to get a job and be respected at it). I get to be super cute and nobody stays mad if I piss them off for some reason. Most importantly, though, I’ll have my basic rights respected and people will not only think of me as what I really am, but will fight other people tooth and nail to respect and validate my identity. They’d say I’m nonbinary as if they said 1+1=2. They’d say I’m a woman and it would feel the same to them if they said “the Earth is round.” Even though dogs have literally no concept of gender, people will rush to point out that their set of genitals doesn’t match what someone called them. However, cis people will literally do Olympic gold medal mental gymnastics to think of a bunch of excuses for misgendering someone/slipping up even after years of knowing what pronouns they should be using. Photo by Andre Ouellet on Unsplash Even before coming out, it was obvious that cis people were entrenched in and obsessed with gender. Instead of thinking of the obvious reason for why I acted more effeminately, grew my hair out, and even put on makeup and never referred to myself as a man, they’d literally do spy movie laser avoidance around the simple answer of me being trans. People would call me a “rockstar” or say I looked like a metalhead. They’d just say I was “sensitive” (which is true, but there’s always an implied “for a guy” after it), or basically would imply that I was at most cis and gay. I have literally said that I hate being tall, and when people responded that it’s good to be a tall guy, I would say “I know, that’s why I hate being tall.” I swear, I could have hit some of these people with a neon sign that said “I’m trans” and they’d say, “Ow, man! That’s not cool dude. Man up and apologize.” Instead of thinking, “Wow, this effeminate person who never refers to themself as a man and has literally said they hate being tall for its association with manhood probably isn’t a man,” people would just not understand that the possibility of my transness even existed. Photo by Daniel Clay on Unsplash So, where exactly am I going with this? Basically, because we’re so wired to assume cis-ness, trans people are often disregarded or just seen as a fake version of their gender. Everyone says trans people are so obsessed with gender, but are we the ones correcting people who call a dog with a penis “she”? Are we the ones burning down forests just to show people what genitals our baby is going to have? Are we the ones saying that a little blubbering male baby has a “girlfriend” and is a “lady-killer” when it doesn’t even have the ability of metacognition yet? Basically, I’m just pissed off. At what, exactly, I’m not sure. Gender? Sex? The cultural understanding of gender? Dogs?? Not dogs, per se, but the frustration surrounding them is what really makes me angry. Nonbinary people will say, “Hey, I don’t feel comfortable with binary pronouns. My pronouns are they/them.” and their (well-meaning) cis friend will say “This is so-and-so, she uses they/them pronouns.” Whether it’s cis people’s fault or not, trans people are often seen subconsciously and consciously as the gender that their appearance falls into the societal idea of. 
I can say I’m a woman all I want, or nonbinary or whatever, but if someone sees me as a man, they’ll most likely think I’m a man, even if they use the right pronouns. Having this subconscious idea in their head instead of trying to unlearn it as best as possible (I’m aware that it’s partially biological) causes slip-ups, and “slip-ups” can ruin our days, weeks, or even years if it’s really traumatic for some reason. All I’m saying is that if I were a dog, people would just take a peek at the old undercarriage and say “oh, that’s a girl” or “that’s a boy” and I wouldn’t give a fuck! I’d just keep on wagging my tail, free from the social and mental consequences of society imposing a gender on me that I didn’t want.
https://medium.com/prismnpen/i-want-to-be-a-dog-2142f91587ac
['Stella Luna', 'They She']
2020-11-17 09:17:10.501000+00:00
['Gender', 'LGBTQ', 'Transgender', 'Dogs', 'Creative Non Fiction']
Be Aware of the Quiet Ones like Keanu Reeves — They Are the Ones That Actually Make You Think
The world doesn't need more loud guys full of too many words, with buff arms, in tight shirts, and huge egos to match. The world needs quiet people. Why? Quiet people make you think. Thinking brings clarity. Thinking can lead to change. I've always been intrigued by Keanu. He is a quiet person who keeps to himself and still hasn't figured out how to be famous after twenty-nine years of being one of the most iconic Hollywood actors of all time. Keanu doesn't get fame, attention or noise. Instead, he prefers to be quiet and insert silence into his speeches and TV interviews. When he does choose to speak, he drops short sentence bombs, like in this interview with Stephen Colbert:
Stephen: "What do you think happens when we die, Keanu Reeves?"
Keanu: "I know that the ones who love us will miss us."
In eleven words, Keanu summed up the entire meaning of life. It was a moment of sheer brilliance.
https://medium.com/mind-cafe/be-aware-of-the-quiet-ones-like-keanu-reeves-they-are-the-ones-that-actually-make-you-think-de7c8f814d04
['Tim Denning']
2020-06-02 18:04:38.328000+00:00
['Relationships', 'Leadership', 'Education', 'Life Lessons', 'Work']
Do you want to send your friend or a loved one a last-minute gift?
Do you want to send your friend or a loved one a last-minute gift? Now that we have digital platforms like Venmo and PayPal, the process is so easy. It also works if you want to give out holiday tips to people on your list: your doorman, babysitter, or your super. To some, tipping digitally may seem tacky, but this year it's a better option than handing out gifts and cards to people personally. As Elaine Swann, an etiquette expert, points out: "Right now this is something that many of us should be thinking about doing. It keeps us safe, and we don't put the other person in an uncomfortable predicament."
https://medium.com/the-shortform/do-you-want-to-send-your-friend-or-a-loved-one-a-last-minute-gift-325283533eb0
['Kristina Segarra']
2020-12-24 17:57:04.864000+00:00
['Holiday Ideas', 'Short Form', 'Advice', 'Holidays', 'Gifts']
Alchemy is not Chemistry
When you take a step back from the Emperor's New Clothes practice of permuting words and make an insight jump as Lambur clearly has, you inevitably ask simple, fundamental questions like, "Why is it that store coupons worth $100 do not require KYC/AML, but Ethereum tokens to manage them do?" The answer is that KYC/AML doesn't apply to coupons because coupons are not money. Neither are Amazon Gift Cards or iTunes Gift Cards or any of the thousands of other tools like them on sale all over the world, all ordered and managed on databases. The next thing people say is "Ethereum is not a database". I'm sorry to break the news to you, but Ethereum (and Bitcoin) are nothing more than databases that store strings of numbers. There is no intrinsic difference between a MySQL database and Bitcoin. Is this a hexdump from one of Bitcoin's database files, or a hexdump of something else? You can't tell, because you're not a machine. If you were a machine, you would know that there is no difference between strings; all strings are of exactly the same nature. http://it.tuxie.eu/?p=316 Immediately when I make this statement of fact, they run into a problem; these people don't know what MySQL is. In order to know why the statement "Bitcoin is no different to MySQL" is true, you need to know what MySQL is, and if you don't, then these words go in one ear and out the other. Bitcoin and all derivatives of it are nothing more than databases with different underlying rules. Once you understand this, then statements like these clearly make no sense. Coupons and Gift Cards do not attract regulation; they are all controlled by databases, are privately issued, and are redeemable for cash, discounts or services. It is not relevant that the coupon has a value ascribed to it by the issuer; it also isn't relevant that people "see value", whatever that means. It is also irrelevant what sort of database is used to keep track of which coupon has been redeemed. Coupons are all printed with a unique number on them, so that they can't be redeemed twice. Once again, these gentlemen are not citing any law. They're simply permuting hearsay, and then building application rules and making business decisions based purely on hearsay. Using the term "bearer instrument" makes no sense when talking about store coupons, and all the people in these threads would concede that; but why does "bearer instrument" suddenly make sense if the purpose of the string, or the database used to keep track of them, is different? They have no answer for this that is not prosaic or that takes into account what's going on under the hood. The term "bearer instrument" is a clue to how people are thinking. "Coupon Cutters" store hundreds of coupons in books like this, worth thousands of dollars. None of this activity is regulated or monitored in any way. Why? If this cutter lost this book, she would lose thousands of dollars. What if the issuer goes out of business? No one asks this about coupons. Why not? You should ask these questions yourself, and not gloss over them with a shrug of your shoulders, a prosaic waving of the hands, idioglossia or run-on sentences. What these people are claiming is that coupon cutters are engaging in financial activity, and if so, that they should be paying taxes on their "financial gains" made by using coupons, which are a form of money.
The next argument people make is that because the process runs as software on a computer, the nature of the activity suddenly changes into something else, simply because an app is used to manage it.
https://medium.com/@beautyon_/alchemy-is-not-chemistry-980c72fd94d6
[]
2019-07-22 14:02:11.889000+00:00
['Hackernoon Top Story', 'Coupon', 'Ethereum', 'Bitcoin', 'Law']
What are the Elements of Occupational Health and Safety Programs?
Approximately 1.4 million workers in the US are affected by a serious work-related injury or illness every year. Creating a safe and worker-friendly environment can reduce this number as well as the seriousness of workplace injuries. Workplace safety programs are tools that can effectively develop a risk-free and productive work environment for employees through threat assessment, awareness, training, and safety program evaluation. Because workplace injuries are costly to employees and their families at the physical, financial, and psychological levels, designing and implementing an effective occupational health and safety (OH&S) program is essential to avoid costly and deadly consequences.
Basic Elements of an OH&S Program
Every OH&S program should be designed to meet the specific requirements of an organization, as well as any legislated requirements. However, there are some basic elements that every OH&S program should consider:
· Worksite analysis to identify all processes and activities. This should be a continuous process to identify all existing and potential hazards.
· Ensure management commitment and employee involvement. The manager or management team leads the way by setting up the policy, assigning and supporting responsibility, setting a positive example, and involving employees.
· Check, with the help of the crew themselves, whether any activity has significant associated hazards that could cause harm.
· Reduce risks that can lead to serious injury (accidents or long-term sickness) by removing the hazard, modifying the work process, protecting the crew, etc.
· Verify that the measures you have in place to protect the crew are working properly and that rules are being followed.
· Provide training for employees, supervisors, and managers. Ensure they are trained to understand and deal with worksite hazards.
· Improve by always looking out for what could be done better and more safely.
Evaluating the Effectiveness of OH&S
The number of injuries and illnesses at work should not be the only indicator used in evaluating the effectiveness of an OH&S program; not all incidents are reported and documented. An additional audit, which uses a checklist and a series of questions as well as interviews, questionnaires, and observations with corresponding weighting factors, can be used to evaluate the efficacy of an OH&S program. This should be followed by corrective actions with target dates and checks for their completion. Occupational health and safety programs are an extremely important activity in the workplace. They help in reducing the risk of injuries that affect the health and wellness of workers. Since workplace accidents are costly in nature, reducing their occurrence by improving workplace safety also improves a company's bottom line.
Have You Tried Safety Assure?
If you are planning to design an occupational health and safety program for your workplace, you should check out Safety Assure. Safety Assure helps companies establish a safer workplace without the administrative burden. Our easy-to-use mobile app for OSHA recordkeeping for injury and illness enables your employees to log incidents, accidents, near misses, and observations with ease, wherever these events are noticed. Complying with safety standards to avoid penalties and lower operational costs has never been easier!
Originally posted on CloudApper on 4 December 2020. Author Shaon Shahnewaz.
https://medium.com/shayurmaharaj/what-are-the-elements-of-occupational-health-and-safety-programs-6e7f28d2dc7a
['Shayur Maharaj']
2020-12-21 07:21:27.412000+00:00
['Mobile Application', 'Health And Safety', 'Management', 'Workplace', 'Software']
DAA Manager Insights: Cain Ransbottyn
There is an old investment saying that goes perfectly with the theme of Easter and the current crypto crash: Don’t put all your eggs in one basket. Of course, going against the crowd is always hard, which is why these insights from our DAA Manager Cain Ransbottyn of TRADE might give you a good reason to explore the world beyond Bitcoin. You believe there is another way to live the #cryptolife. What does that mean? Almost everyone who believes in crypto believes in “Bitcoin.” We haven’t believed in Bitcoin as a currency for a long time; in fact, we think Bitcoin is for losers! But we do believe in blockchain technology — or, rather, crypto technology. Blockchain will be a big part of the future economy. We’d like to set ourselves apart not by jumping on the bandwagon, but by minimizing our stake in the “fool’s gold” otherwise known as Bitcoin. Together, our team has analyzed alternative coins with significant potential to form our investment strategy. Lamborghinis won’t be bought and sold with bitcoin, but with its offspring: the so-called “altcoins.” This is where you get to separate the men from the boys. People think I’m arrogant; maybe they are right… ;-) As a Belgian, are you satisfied with the development of crypto in your country? Are people aware of crypto/blockchain’s potential and meaning? Not really. Even people who understand cryptocurrency (particularly BTC) through the press and TV are still skeptical. The media has painted an ugly picture of crypto (especially BTC), which has created unnecessary fear — and plenty of incompetence. Traditional financial advisers claim knowledge, but their experience comes from old-school foreign-exchange markets. Our goal is to make stepping into cryptocurrency easy through our website, where we explain from A to Z how to go down the crypto path correctly. Through our website, trade.be, we offer valuable information on topics such as new, emerging ICOs, information about existing coins, news from the crypto world, and explanations about cryptocurrency in general in understandable, everyday human language. At trade.be, we dispel the fear, uncertainty, and doubt (FUD) created here in Belgium to help people understand the true possibilities crypto offers. Companies are aware of the potential of blockchain technology, but in Belgium, it’s still a fluffy concept, since the traditional banking system is so firmly entrenched. The government is also not fully aware of how crypto investment actually drives technology forward and stimulates innovative ideas. How do you see the correction that started this past January? Positively. In 2017, the market experienced exponential growth. At the beginning of 2018, a healthy correction occurred as a counter-reaction to FUD, which resulted in a snowball effect and panic selling. The market has since stabilized somewhat. The overall cryptocurrency market cap is still more than ten times greater than it was a year ago. Five reasons we think 2018 will be a good year: 1. A huge correction has already taken place. Cryptocurrencies (usually bitcoin) grow in bubbles, which means big corrections happen from time to time; the longer we go without a correction, the more nervous the market becomes, because people expect a correction soon. The recent correction was one of the largest in history, meaning people can remain optimistic for longer. 2. Companies are entering the ICO market. It takes a long time to prepare a good ICO, given the planning and groundwork required.
Since ICOs became mainstream last year, many existing and larger companies laid the groundwork over the past couple of months for new initial coin offerings. This could mean that more large companies will offer new ICOs this year; we already have Telegram and Kodak, but this is only the beginning. These ICOs should channel more money into the cryptocurrency market. 3. ICOs should start delivering real uses. According to their roadmaps, many ICOs launched in 2017 are expected to deliver in 2018 on the commitments they made. This means that we should have working platforms and projects that make use of cryptocurrencies. Companies delivering on their promises will make plenty of FUD disappear. Most people currently see ICOs as scams, but if the projects actually offer working platforms, the picture should change. As a result, skeptics could invest more money. 4. Rules may be introduced. Although crypto is always about free markets and decentralization, scammers will always try to profit from others (examples include DavorCoin and Bitconnect). Customer protection rules are beneficial to cryptocurrencies because they bring more people in. Governments also need to clarify tax legislation and make it easier for people to pay taxes on cryptocurrencies. Governments may also try to benefit by adapting crypto legislation to serve investors, for example by reducing taxes. 5. Crypto should become more user-friendly. Cryptocurrencies are still difficult to buy, use, and sell. Without making these things easier, it will be difficult to attract users with a lower level of technical skill. We need more services, such as Coinbase, Mistertango, Revolut, and Robinhood, as well as apps such as Bread and Ethos. More shops need to accept crypto; crypto debit cards should also be accepted in shops that handle normal bank cards. One of the biggest obstacles at the moment is convincing people that crypto is viable and not a scam. Making crypto more user-friendly and readily available will help. You state that BTC has not been proven to be a long-term, highly profitable investment. Please elaborate. BTC is slow and useless, especially now, because the Lightning Network has not been implemented. If there are many requests to send the currency, wait times increase. Not only is BTC slow, but transaction costs are also quite high. This is especially noticeable when a lot of transactions have to be completed on the blockchain. As we said, “Bitcoin is for losers.” :-) BTC still follows a “proof-of-work” protocol. As the world pays increasing attention to saving energy, proof-of-work is known for its high energy consumption. This needs to be addressed in the future; otherwise, currencies that continue to use this technology will die out automatically. The need for faster transactions, lower transaction costs, and more energy-efficient techniques remains. We have no doubt that these problems will be tackled systematically and that cryptocurrency still has a very bright future ahead of it. We believe that bitcoin will remain the market leader because it is the original crypto coin, making it the market’s reserve currency through which all other alts are partially connected. But if you invest more in altcoins, you have a greater chance of increased profit than if you only have bitcoin in your portfolio. Bitcoin has limited usability, but it is the other technologies that give it value: Using cards such as TenX, Monaco, Revolut, TokenCard, and Xapo, among others, allows people to make purchases with bitcoin.
Bitcoin ATMs and services such as Coinbase, Kraken, and Bitstamp enable people to cash out bitcoin quite easily. Bitcoin has a market value that can be exchanged for other crypto coins with real use. In many countries with a useless currency (such as Zimbabwe or Venezuela), people prefer to store bitcoin because it is more secure than fiat. Follow our official channels for more updates and news: Facebook / Twitter / Reddit / Medium or log into our platform to explore more DAA strategies.
https://medium.com/iconominet/daa-manager-insights-cain-ransbottyn-aba4a758529c
['Matej Tomazin']
2018-03-30 15:31:00.877000+00:00
['Iconomi', 'Interview', 'Insights', 'Ethereum', 'Bitcoin']
I am a self-absorbed, mediocre workaholic.
I am a self-absorbed, mediocre workaholic. I think that’s why I have always wanted to become a writer; it seems like a hobby to most, but it’s actually the most excruciating, time-consuming work that you can do. I love it. And I love what I write, I’ll make no bones about it. It is perhaps my first experience of unconditional love. I know what I write is not perfect, it doesn’t win awards or change lives. It is not over-achieving or particularly profound, but I love it still. I am fascinated with pain. How it makes us better people. It is always the people who have never been hurt that do the hurting. How the more pain I endure, the more grateful I am for this ridiculous life. And oh boy, is wanting to be a writer painful. Especially if you are merely average. The actual writing part is (and this is the only word I can think of to describe the experience) sensual. You pour yourself into a screen, a page, the world disappears around you apart from your Fleetwood Mac’s Greatest Hits playlist, and you honestly enter into a different state of mind. I can only compare it to psychedelics, and even then, writing is much more confusing, intense and transformative. But that is why we must write, and it is the easy part. The hard part is thinking of something interesting to say, pestering strangers to read your work, trying to get “noticed”. It is a sick irony that writers – society’s most insecure and introverted – must beg for attention and criticism. Some people run 10k at 6 a.m., some partake in winter swims or go to retreats in India and not speak for four months. In our disgusting, privileged Western world, where our insecure parents gave us everything we ever wanted, we long to feel uncomfortable. I want to write because it makes me feel insignificant and untalented. It pushes me and consumes me and gives me a reason to… (to what? I have not yet figured out). I study harder, drink more coffee, write more poems, dye my hair more colours, smoke more cigarettes and read more books than most. And yet I am nothing more than depressingly average. I am addicted to becoming a proper “writer”, a desire that will haunt me until I die, I am sure. But that is the way it should be. It is how I like it. Insecure, determined and ambitious. To set unattainable goals is to live truly. It is why we go to war. We all just want to fight for an impossible life. And so I will write these barely half-thought-out articles and mediocre poems. I will go to a mid-tier university and graduate with a 1:2 in English and I will fail miserably at being a journalist, possibly have two terribly normal children and probably get divorced, before settling in a housing estate that looks exactly like the one I live in right now and becoming apathetic about politics. And I am okay with that, it’s the natural order. Because I know that I will give my blood, sweat and tears to everything I love. And if it fails, I can be proud of it anyways. I will try to be a writer, a student, a girlfriend, a mother, a communist. And I will try my damn best to be the most exceptional one out there. And I will fail. And I will try once more.
https://medium.com/@cmarchrun/i-am-a-self-absorbed-mediocre-workaholic-f1d777e7da2d
[]
2020-12-24 21:12:44.153000+00:00
['Workaholic', 'Writing', 'Future', 'Work Life Balance', 'Mediocracy']
Conning Us Into Civilization
Have We Been Conned Into Civilization? Image by Ted McDonnell, from Pexels Here are two well-established facts about the origin of civilization that call for an explanation: First, the Stone Age, in which protohumans lived as small bands of nomadic, egalitarian hunter-gatherers, dwarfs the age of civilization. The Stone Age of prehistory, when primates in the genus Homo used stone to make tools, lasted for roughly 3.4 million years. By contrast, civilizations, in which specialized social functions developed in large, sedentary, hierarchical societies, have been around for 12 thousand years. Second, and contrary to our modern myths of inevitable progress, the transition to agriculture and to large societies through the domestication of plants and animals wasn’t overwhelmingly beneficial to the early revolutionaries. There were some advantages, especially over the long term, such as protection from predators, increases in birth rate and life span, and technological advances. But in the early millennia of the agricultural revolution, there were severe drawbacks, as Jared Diamond explained in “The Worst Mistake in the History of the Human Race.” Judging from the state of ancient skeletons and from other indicators, archeologists have discovered that the early civilized, sedentary people suffered from shorter life spans, shorter statures, and other signs of malnutrition. Diamond points out that hunter-gatherers had a more varied diet, didn’t run the risk of starvation due to a failed crop, and were spread out in the wild, so they weren’t overrun by outbreaks of infectious diseases. There were other drawbacks too. We think of social specialization as progressive because large societies enable us to use our leisure time to pursue our individual interests, whereas hunter-gatherers all had a similar job by necessity, namely the obtaining of food for the day. But the idea of specialization masks the downside of inequality between the emerging social classes. Most early kingdoms and civilizations eventually became patriarchal, so women often labored harder than men and were fed more poorly, judging again from the state of their remains. The upper classes that formed around the kings lived in comparative luxury, while the slaves and the farmers (whose labor only became less valued when they managed to secure food surpluses) enjoyed fewer fruits of civilization. Image by Patrick, from Unsplash The Mystery: What Sustained Early Kingdoms and Civilizations? Jared Diamond sums this up by saying: “with the advent of agriculture the elite became better off, but most people became worse off. Instead of swallowing the progressivist party line that we chose agriculture because it was good for us, we must ask how we got trapped by it despite its pitfalls.” The mystery, you see, is that for unimaginably long ages, we lived as relatively healthy, free, albeit poor, ignorant nomads. This was the vast period remembered in Western religious traditions as Edenic paradise. We switched over to agriculture and to a sedentary, “civilized” lifestyle, which paid off in some ways over the long run, with scientific and technological achievements, for example. But that transition may prove even more costly in the end, as the Anthropocene threatens the world’s ecosystems with global warming, human overpopulation, and our genocide against wild species (as we make more room for farms). Regardless, the early large societies couldn’t have foreseen either the long-term advantages or the disadvantages of the monumental transition.
Yet the early period of farming was marked by plagues that were unknown to the nomadic bands. Why, then, did so many people double down on civilization instead of returning to the old, wilder ways? There are likely numerous reasons for the persistence of agriculture. Perhaps there was no choice, because farming was needed to support the increasing population. Maybe, as Diamond says, the farmers drove the hunter-gatherers to near extinction as big-city folks seized the best land for farms, in which case the large societies might eventually have forgotten how to survive as nomads in the wild. One motivation I’ve posited elsewhere is the rise of humanistic, progressive values. Certainly, in the long term, this ideology sustains confidence that the sacrifices in large societies are worth it because we’re learning how to dominate nature and to turn ourselves into gods, thus avenging our losses against nature’s inhuman creativity and absurd indifference. But this could have been only an implicit factor, at best, in the Neolithic and Upper Paleolithic periods, as is apparent from the shamanic attempts to predict and to magically control natural processes. The overriding mindset was animistic, which means that prehistoric people likely regarded nature as enchanted and as being full of sociable spirits, not as pointless and indifferent to our welfare. The humanistic values would have arisen explicitly with advances in philosophical reasoning, as we see especially in the Axial Age in the mid-first millennium BCE and in early modern Europe, in the Scientific Revolution and the Enlightenment. Image by Abdelmoughit Lahbabi, from Pexels The Theocratic Mythos Another possibility presents itself when we reflect further on social specialization, which is one of the presumed advantages of large, sedentary societies. Pyramidal social hierarchies developed to maintain order within the expanding, confined populations. There were exceptions, such as the earliest transitional proto-cities that retained the nomads’ egalitarian values, but eventually, kings centralized power to manage the social classes and to distribute resources. And because the noble class fared better in large societies than did the lower classes, the nobles had a selfish incentive to maintain this social arrangement. For reasons emphasized in Joseph Abraham’s Kings, Conquerors, and Psychopaths, the centralization of political power and authority would likely have corrupted virtually anyone presented with such easily abused privileges. Additionally, these hierarchies would have attracted the more aggressive, authoritarian members of the population as candidates for reaching the pinnacle in the first place. Thus, the contenders would have fought like mobsters for prestige and for the benefits of a dominant social position. Certainly, by the time of the first empires or grand civilizations, the social structures were theocratic, which means the power asymmetries were rationalized by the state religion. Just as the prehistoric nomads perceived nature’s divinity as the diffuse presence of animated spirits, reflecting their free-ranging lifestyle, civilizations concentrated divinity in a pantheon of gods, to suit their cloistered societal arrangements. The greatest gods communed with the upper classes, as in the Pharaoh's union with Osiris, while the less powerful gods cheered on the lower classes.
This was the beginning of the divine right of kings, of the mythos that justified the civilized ethos, the set of patriarchal values of drudgery, decadence, and rapacity. This underlying ethos still drives what we view as social progress. True believers think the gods bestowed on us this meta-cultural norm. The Sumerian King List says, “kingship descended from heaven,” and the myths have it that Prometheus taught us the arts and sciences, that Yahweh gave Moses the Ten Commandments, and that Heaven (Tian) supported Confucius’s social reforms. Likewise, the gods were supposed to have ordained the rule of the warlords (the victors) who waged the wars that formed the early empires in Mesopotamia, Egypt, and China, as the kings attempted to conquer territory to add slaves (farmers, servants, and soldiers) to sustain their burgeoning populations. The patron gods were the mascots that cheered on the home team, as well as the icons that established the symbolic power of the people’s religiopolitical brand. Lewis Mumford called this dynamic the societal “megamachine,” the automation of civilizational growth by the built-in ideological excuses for the plain injustice of carrying out the oppressive, expansionist ventures. Civilization operated like a machine with moving parts — with the castes or social classes — that had to interact efficiently, as dictated by the rules of civility. The social functions might as well have been computer programs. Image by Magda Ehlers, from Pexels The Civilizational Con But what if, contrary to the conservatives who still prefer a dubious literalistic reading of scriptures, these theocratic ideologies had no such divine origin? What if the mythical antediluvian kings listed in the Sumerian King List were concocted, for example, to justify the much-later invention of kingship in Mesopotamia, when the kings saw themselves as stewards for the gods that owned the land? What if the Jewish scriptures, too, were compiled late, in the Babylonian captivity, long after the foundational events in Jewish history were supposed to have taken place? And what if the monotheistic imperative was read back into that history, to justify a self-serving priestly mentality? In short, what if ancient theocracy, and thus civilization itself, functioned as a colossal, global fraud? What if the elites who benefited the most from agriculture, technology, and the mega-scale of city life exploited the illiteracy and gullibility of the masses, providing a religious drama to reassure them that civilization was for the best because it was mandated by the patron gods? What if the naïve ancients practically hallucinated this spiritual dimension of their societies, perceiving social functions as the powers of gods? To the extent that this rank con artistry at least played some role in sustaining cities, kingdoms, and empires — and possibly a predominant role in the theocratic norm before the rise of modern republics — we might wonder whether even our secular societies are similarly groundless. I’ve argued elsewhere that capitalism operates as a large-scale fraud. Religions run out our clocks by pinning our hopes for a reward for our drudgery and sacrifices on a dubious afterlife, while the main rewards for most people in capitalist societies are held off until retirement. The latter is a dwindling prospect for the majority, as these societies tend to become plutocratic and unjust without governmental corrections. And this is especially true for workaholic countries like the United States.
Still, modernity presented us with the chance to re-examine the merits of civilization. Obviously, after the churches lost their political power in Europe, we chose to continue with civilization rather than try our luck with a revival of nomadic shamanism, buoyed as we were by stupendous scientific and technological advances. But the new individualism and skepticism entailed a loss of religious faith, which meant we’d shaken off the old fraud and had to face with sobriety the consequences of civilizational growth. No longer deferring to priests and to theocratic myths, we couldn’t pretend that everything always works out for the best because the gods control the world just as kings controlled their kingdoms. We suffered eventually from late-modern ennui, which has led, for example, to an epidemic of opioid overdoses. As Friedrich Nietzsche worried, despite the rise of First World wealth and luxuries, we might lack the motivation to carry on after the “death” of God. What new, godless morals would inspire our civilizational adventure, especially after we discovered that historical progress might have been self-destructive all along? Hyper-rational communism and Romantic fascism failed in the twentieth century, and American-style capitalism seems as fraudulent as ancient theocracy, as I suggested. Indeed, it’s not obvious that there’s any viable, authentic (non-fraudulent) secular ethos. Even a scientific rationale for modern civilization, such as transhumanism, rests on faith that we’ll fulfill the utopian promise of technological progress, the one set out by optimistic science fiction authors. The problem is that any late-modern defense of civilization must reckon with the dire environmental cost of our progress, and must avoid the existential letdown of succumbing to archaic forms of religious faith, which have arguably been revealed as lingering theocratic cons. Image by Joy, from Flickr The Prospects of Civilization We have some reason to think that if civilization began largely as a fraud that excused the manipulations that enabled the wealthy rulers alone to live like gods, civilization is unlikely to end well. It’s not that the collapse of the ecosystems due to short-sighted human expansion would be a case of bad karma or poetic justice. Rather, civilization wouldn’t have been designed to be as stable or sustainable as the hunter-gatherer lifestyle. On the contrary, the roots of civilization would have been planned by psychopathic or narcissistic rulers who tended to be unconsciously self-destructive. Psychopaths are impulsive and reckless because they usually feel that all life is worthless, including theirs (despite the megaliths they may have erected in their name to protest too much). Narcissists think they can do no wrong, so they systematically underestimate their opposition; they live in a self-inflated bubble that the real world can burst at any time. This origin — the recklessness of civilization’s architects — doesn’t bode well for us; indeed, if civilization is based on an insidious fraud, its long-term survival would be a miracle. Ideally, then, with the benefit of all this hindsight, we should renew our trust in civilization for good reasons, or we ought to have the strength of character and the creative vision to develop something better.
https://historyofyesterday.com/have-we-been-conned-into-civilization-d50f2cbcaf02
['Benjamin Cain']
2021-07-02 02:02:27.266000+00:00
['Civilization', 'History', 'Fraud', 'Religion', 'Sociology']
Modularizing Common Infrastructure Patterns in Terraform
Modularizing Common Infrastructure Patterns in Terraform How to use Terraform modules to minimize code duplication in your IaC projects.
Background
When developing large, complex software systems, it is up to the developer to identify pieces of code that can be grouped into classes or modules so that they may be extended and re-used throughout the project. The same idea can be applied when developing infrastructure with Terraform. Terraform modules give developers a mechanism for grouping multiple resources together so that they may be used in many places in an IaC project as a single component, thereby reducing code duplication and maintenance.
Example
At Ancestry, genomic data is processed through various algorithms to provide customers with insights about their DNA. Many of these algorithms benefit from being run on multiple DNA samples at once. While developing the infrastructure for Ancestry’s genomic algorithms pipeline in AWS with Terraform, we identified the need for a re-usable component that would batch these DNA samples together as they entered the pipeline, to be sent on to another process once a threshold was reached. We would need to group the following components together: An input SQS queue for receiving individual samples (simply as DNA sample identifiers, or sample_ids). A lambda function implementing the queueing/dequeueing/batching logic (triggered by the input SQS queue). An IAM role for executing the lambda function. An output SQS queue for sending batched samples (as a single list of sample_ids per message). The final piece needed for this component would be a temporary data-store for queueing samples, checking how many samples are in the data-store, and dequeueing them for a batch once the count passes our configured threshold. At first glance, the original input SQS queue seems like it would suffice. However, SQS only returns an approximation of the number of messages in a queue at any given time, which is not a reliable source of information for this component. Given that we already use Redis as a multi-purpose data-store, we decided to leverage it for our component. The architecture for our module would look something like this:
Implementation
To build a Terraform module in your project, the first thing you need to do is create a sub-directory in your project’s root-level directory. Terraform scans all sub-directories in a project and identifies them as child modules. They will not be initialized until you use them in one of the .tf files in your root module. We will create a directory and call it sample-id-batcher.
Variables
Terraform modules provide a mechanism for configuring input parameters for your component, called variables. Think of them as initialization parameters for an instance of a class. Before developing our component, we need to decide which configuration pieces to make variable, keeping the component flexible while maintaining simplicity. For our example, we need two input parameters: the batch size and the URL of our Redis instance. We will also add a third parameter to help name our components in a meaningful way. In our module directory, let’s add the following variables.tf (sketched below, together with locals.tf):
Locals
Locals are used in Terraform to define constants for your module. We can use them to store the results of string interpolations for component names, timeouts, and other information. In our module’s directory, let’s add the following locals.tf file:
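A minimal sketch of what these two files might contain; the variable names (name_prefix, batch_size, redis_url), the local names, and the timeout value are illustrative assumptions rather than the module's exact contents:

```hcl
# variables.tf -- input parameters for the sample-id-batcher module.
variable "name_prefix" {
  description = "Name of the downstream process, used to label resources (e.g. \"ethnicity\")."
  type        = string
}

variable "batch_size" {
  description = "Number of sample_ids to accumulate before emitting a batch."
  type        = number
}

variable "redis_url" {
  description = "URL of the Redis instance used as the temporary sample_id store."
  type        = string
}
```

```hcl
# locals.tf -- derived constants built from the input variables.
locals {
  batcher_name      = "${var.name_prefix}-sample-id-batcher"
  input_queue_name  = "${var.name_prefix}-sample-id-batcher-in"
  output_queue_name = "${var.name_prefix}-sample-id-batcher-out"
  lambda_timeout    = 60 # seconds; assumed value
}
```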
Outputs
Outputs are used in Terraform to expose information about a module so that other components can integrate with it. In our case, we need the URL of the output SQS queue exposed so that we can use it to kick off another process via an SQS subscription. For flexibility, let’s also expose the ARN of the output SQS queue. We will add the following outputs.tf file to our module:
The SQS Queues
For our use-case, we will need two SQS queues: the input SQS queue for receiving samples and triggering our lambda batcher function, and the output SQS queue for placing batches of samples when the threshold has been met. We will add the following sqs.tf file to our module’s directory:
The Execution Role
All AWS lambda functions need to be configured with an execution role. The role should have access to all the resources used throughout the lambda’s execution. For our use-case, our execution role needs the standard permissions that all lambda execution roles need, as well as permissions that allow it to pull from the input SQS queue and post to the output SQS queue. Let’s add the following role.tf file to our module’s directory: As you can see, most of the permissions are standard, boilerplate policies that AWS requires you to add in order for your lambda functions to work. The policy called lambda_execution_role_sqs_policy is where we define ACLs allowing the lambda to pull from the input SQS queue and post to the output SQS queue.
The lambda function, trigger, and data archive
The final infrastructure piece we need to create is the AWS Lambda function. We will pass certain variables and constants along in the environment block of our lambda function resource to make them available to our lambda code. In addition, we will need to create an event source mapping, which will trigger the lambda any time a new message is posted to our input SQS queue. Finally, we will need to declare an archive file where the Python code and all of its dependencies will live. Let’s add the following lambda.tf file to our project: As you can see from the data block that defines the archive, we will need to place all of our Python resources in a folder called package in our batcher module’s source directory. This package will include the main Python module we write to drive our lambda function, as well as all of the lambda function’s dependencies. In this case, Redis is the only dependency we need to include. More information about how to build a Python deployment package for AWS lambda can be found here. If your lambda function does not require any external dependencies beyond what is available in the Python execution environment provided by AWS, you do not need to worry about building a deployment package with dependencies. This process will differ if your lambda uses any other execution environment, such as NodeJS, Java, etc. Please consult the AWS documentation for the language you are using. One other important detail to note is that our lambda is set with a concurrency of 1. This ensures that our lambda can only process one sample from our input queue at a time, guaranteeing that we do not run into any race conditions during the queueing/dequeueing process.
The Python code for the lambda
Now that we have all of our AWS infrastructure resources declared, we need to create the Python module that will drive the functionality of our Terraform module. When executed, our lambda function will do the following: 1. Parse individual sample_ids from the Records in the incoming SQS message. 2. Push the individual sample_ids from the messages into Redis. 3. Check if the number of sample_ids in Redis exceeds the configured batching threshold and, if so, remove all of the sample_ids from Redis and push them to the output queue as one batch. The following Python module should do just that:
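A minimal sketch of what that module might look like; the environment variable names, the Redis key, and the handler name are assumptions for illustration, and the post's actual code may differ:

```python
# package/main.py -- sketch of the batching lambda.
import json
import os

import boto3  # provided by the AWS lambda runtime
import redis  # bundled into the deployment package

BATCH_SIZE = int(os.environ["BATCH_SIZE"])        # assumed env var names,
REDIS_URL = os.environ["REDIS_URL"]               # set in lambda.tf's
OUTPUT_QUEUE_URL = os.environ["OUTPUT_QUEUE_URL"] # environment block
REDIS_KEY = "sample_ids"  # assumed name of the Redis list

redis_client = redis.Redis.from_url(REDIS_URL)
sqs = boto3.client("sqs")


def handler(event, context):
    # With a lambda concurrency of 1, only one invocation runs at a time,
    # so the check-then-drain logic below is free of race conditions.
    # 1. Parse individual sample_ids from the incoming SQS Records.
    for record in event["Records"]:
        sample_id = record["body"]
        # 2. Push each sample_id onto the Redis list.
        redis_client.rpush(REDIS_KEY, sample_id)

    # 3. Once the threshold is reached, drain Redis and emit one batch.
    if redis_client.llen(REDIS_KEY) >= BATCH_SIZE:
        batch = [
            redis_client.lpop(REDIS_KEY).decode("utf-8")
            for _ in range(BATCH_SIZE)
        ]
        sqs.send_message(
            QueueUrl=OUTPUT_QUEUE_URL,
            MessageBody=json.dumps({"sample_ids": batch}),
        )
```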
After creating all of the resources mentioned above, our project directory should look something like this: NOTE: main.tf is in our root module and will be where we use our new batcher child module.
Using our new module
Now that we have defined all the building blocks for our module, we can use it anywhere in the root level of our project by calling it with a module block and supplying our desired input variables. For example, let’s say we wanted to create a batcher to queue up samples until they reach a threshold of 100 and then send them to a process called “ethnicity”. As noted above, we will be using our new module in main.tf. We initialize our batcher as follows (a sketch appears after the conclusion below): NOTE: Storing connection strings in source control is inherently unsafe, but we are doing it this way for the purposes of this tutorial. Do NOT do this in practice. With our sample-id-batcher module created, we can then use the module’s output variables to integrate with other processes that will take the batch of sample_ids as input. For instance, we could create an SQS subscription to some hypothetical lambda function called calculate-ethnicity. When this Terraform gets applied, it will result in the creation of multiple components that work together as follows: An SQS queue named ethnicity-sample-id-batcher-in will receive messages containing single sample_ids. A lambda function named ethnicity-sample-id-batcher will be triggered via a subscription to the SQS queue mentioned above, executing all of the queueing/dequeueing logic we’ve coded in our Python module. An SQS queue named ethnicity-sample-id-batcher-out will receive messages containing 100 sample_ids when the lambda mentioned above posts them to it (after the threshold has been reached and the samples have been removed from Redis). Following that, our theoretical calculate-ethnicity lambda will be triggered with an SQS message containing the batch of 100 sample_ids.
Conclusion
Terraform modules provide a mechanism for grouping low-level AWS resources into high-level components that can be re-used throughout your project. You can make them as flexible or as rigid as you like, but the key is to design them to meet your needs while reducing code duplication and complexity.
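A sketch of the module block referenced above ("We initialize our batcher as follows"); the module path, argument names, and Redis URL are illustrative assumptions:

```hcl
# main.tf (root module) -- hypothetical usage of the sample-id-batcher module.
module "ethnicity_sample_id_batcher" {
  source = "./sample-id-batcher"

  name_prefix = "ethnicity"
  batch_size  = 100
  # Hard-coding a connection string here mirrors the tutorial only;
  # in practice, inject it from a secret store or a variable.
  redis_url = "redis://example-redis.internal:6379"
}

# Downstream integration can then read the module's outputs, e.g.
# module.ethnicity_sample_id_batcher.output_queue_arn, to wire an SQS
# subscription to the hypothetical calculate-ethnicity lambda.
```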
https://medium.com/ancestry-engineering/modularizing-common-infrastructure-patterns-in-terraform-6a0f794f1712
['Eric Renteria']
2020-12-03 16:46:49.400000+00:00
['Python', 'Terraform Modules', 'Terraform', 'Bioinformatics', 'Infrastructure As Code']
Finland’s New Free AI Courses
Finland’s New Free AI Courses How to get a certificate and take advantage of the Elements of AI courses. Photo by Arttu Päivinen on Unsplash Besides being the home of Santa Claus, Finland is known as a tech leader, even ahead of the US, according to the UNDP. Indeed, tech operations constitute “over 50% of all Finnish exports.” We even owe technologies like Linux and the first web browser to Finland. Today, Finland is keeping up its tech legacy with its free Elements of AI online courses. Overview Elements of AI is a set of two online courses made by Reaktor and the University of Helsinki that combine theory and practice, with the goal of teaching as many people as possible about AI. The two courses are titled Introduction to AI and Building AI. The course is well on its way to achieving its mission of making AI accessible: over 550,000 people have already signed up as of this writing. Introduction to AI The first course is split into six chapters: What is AI? AI problem solving Real-world AI Machine learning Neural networks Implications Screenshot of “Elements of AI” course progress section, captured by the author. The course is very well designed, with simple explanations, nice visualizations, and exercises at the bottom of most chapters to solidify your learning. Both courses feature a “Course Progress” ribbon to show you how you’re progressing through the course and to keep you motivated. Building AI The second course will take around 50 hours and is split into five chapters: Getting started with AI Dealing with uncertainty Machine learning Neural networks Conclusion This time, the exercises are more in-depth and practical, so they’ll be more challenging than before. Be sure to check out the community below if you get stuck. Community Elements of AI comes with an awesome, highly active community at Spectrum, where you can discuss and ask questions about each chapter. As of this writing, the community has almost 8,000 members to study with and ask questions of. I’ve found it an invaluable resource for making sure I truly understand the material. Best of all, it’s free! Certificate Upon completion, you can purchase a certificate for each course for just 50 euros. This shareable certificate would make a great addition to any CV or LinkedIn profile, although it’s totally optional, and the course itself is free. The Final Project For the final project, you’re expected to demonstrate your skills and creativity. While it’s not required, it’s a great opportunity to put your skills into practice and share with a community of thousands of other learners. Elements of AI gives a lot of inspiration and ideas for final projects, such as “Sources Checker” — a bot that checks the sources of news articles online. Other ideas include noise pollution forecasting, predicting stock criteria like growth and reliability, matching ideas and doers, automating applications to relevant jobs, making expert recommendations, assessing financial risk, recommending healthy meals, and many more. Perhaps my favorite idea is the “AI credit-risk management for social lending” project, which uses AI to predict credit risk. Models like these are already being used in the real world. For instance, the micro-loan company Creditt uses Obviously.AI’s API to score customer profiles and find out how much credit to extend to users.
https://medium.com/towards-artificial-intelligence/finlands-new-free-ai-courses-b75c1d53ac84
['Frederik Bussler']
2020-12-10 19:07:29.138000+00:00
['AI', 'Artificial Intelligence', 'Learning', 'Data Science', 'Education']
Google Portugal offers free training to 3000 unemployed
Google announced that it will offer training to 3000 Portuguese in the areas of IT Technical Support, Project Management, Data Analysis, and User Experience Design. Three courses are available in English and one in Portuguese (IT Technical Support), each of which has a 120-hour workload. The courses are taken on a self-paced basis, which means there is no fixed deadline for completion. The normal period should be around eight months, at an average rate of 8 to 10 hours per week. The courses will be made available free of charge, but users will be monitored and must not spend more than 15 days inactive on the Coursera platform, at the risk of losing the right to the certificate. The choice of candidates will be the responsibility of Google’s partners for this initiative: IEFP and APDC. Among the criteria to be taken into account will be a focus on the unemployed registered with the public service, plus some quotas: 50% for women and 50% for men; and 50% for the districts of Lisbon and Porto and the other 50% for the rest of the country. Google.org, the philanthropic arm of the technology giant, in partnership with the Youth Foundation, will make 235 additional certificates available to young people identified as being in a vulnerable situation. The courses will be made available by Google free of charge to both trainees and partner entities. Candidates are not required to have any previous training or experience. Follow us on social media: Instagram; Facebook; Twitter; Youtube.
https://medium.com/@ineews/google-portugal-offers-free-training-to-3000-unemployed-3dbcaf9fcef1
[]
2021-04-13 11:41:31.245000+00:00
['Iefp', 'Empresas', 'Apdc', 'Companies', 'Google']
Tracking Covid-19 in India
Humanity is facing an unprecedented challenge today in the form of the Covid-19 pandemic. The spread of this virus has crippled, even annihilated in numerous cases, economic and social orders. As countries reel in panic and isolation in the wake of the massive spread of this virus, there has been a massive outburst of digital content on social media and internet outlets. In India, in much of this content, the fine line between true statistics and fake news is often blurred. False reports create further adversity during this crisis, as the government and health agencies find it hard to gauge the extent and depth of the issue at hand for efficient administrative or health policy development. The misinformation leads to mass panic if the reported numbers are too high, or mass complacency if they understate the actual figures. Even if we get the numbers right, there is always the problem of comprehension and analysis. How do we make sense of the numbers? Which parameters are important? Knowing the trend is important, but how do we see a pattern in the data? Thus, as we sit in quarantine during a viral epidemic, we are also left to deal with information chaos. We at Attentive AI have recognized the severity of the issues that false data and misrepresented statistics might cause in how we handle this situation at both individual and social levels. This is why we have developed a real-time Covid-19 Tracking Dashboard for India that presents the latest, accurate information about the pandemic through meaningful and comprehensive visualizations. The Dashboard is powered by information from India’s Ministry of Health and Family Welfare website. Overall numbers of Covid-19 infections are displayed along with a graph that shows the infection trend over a period of time. State-wise infection cases are also available, along with a bubble map visualization. Important details and information about the pandemic are also available. With this dashboard, we hope to provide a single reference that is easy to understand and thus help create a better-informed consensus on the disease in India. Do visit our Dashboard by clicking here for real-time, accurate, and meaningful information around Covid-19.
https://medium.com/attentive-ai/tracking-covid-19-in-india-772937f97439
['Tech Attentive']
2020-06-17 11:58:26.790000+00:00
['Visualisation', 'GIS', 'Geospatial', 'Attentiveai', 'Covid 19']
What is a Challenger Brand and How To Become One?
Photo by Monica Silva on Unsplash In this article, we explore how Gymbox built a successful business by being a challenger brand, and how you can become one too. Gymbox is a very successful fitness brand with 300 employees and 11 gyms around London. 🏋️‍♀️ And there is something that massively contributed to their success: a unique, disruptive, and bold brand. 🔥 In other words, a challenger brand. Gymbox is different, it’s not for everybody, and they are not afraid to say it. Rory McEntee, Brand & Marketing Director at Gymbox, was my guest on The Implement Podcast 🎧. Rory is an expert in building challenger brands through his work at Gymbox, Paddy Power, Everyman Cinema, and others. So what is a challenger brand? It is defined, primarily, by a mindset — “it has business ambitions bigger than its conventional resources, and is prepared to do something bold, usually against the existing conventions or codes of the category, to break through”. (The Challenger Project) Defining a challenger brand is not an easy task. And if we could do it easily, we’d be rich, as Rory tells us. 😆 There are a lot of brands trying to be challenger brands at the moment. And it’s something that doesn’t come easy. Being a challenger brand is ingrained in the culture, in the business. Here is how Rory would define a challenger brand and how to become one: 1/ You’ve got to be consistently inconsistent 🥸 You have to do the unexpected. Do what normal brands wouldn’t do. If people go left, you go right. Quite a simple principle, right? Well, it’s a bit more complicated than that, but it’s a good start! Carry on 👇 2/ Don’t work with the same agencies as everyone else 🧐 Or actually, don’t work with agencies at all… Rory doesn’t work with agencies that have worked with other gyms before. Instead, he’s worked with comedy writers or individual freelancers. Agencies will tend to have an “industrial” or standard approach. So search for people who can bring a unique POV instead. 3/ Really understand your customer 👋 I know… You know that. But here is why it’s important to say it again in this context. A lot of people think challenger brands just come up with their crazy ideas. The truth is it takes a lot of time. And to succeed you need to be embedded in your tribe, your community. To appear different to them, first you need to see through their eyes. You need to see things as they do. It’s the only way you can find ideas that will entertain them and surprise them, but not offend them. 4/ Being disruptive VS being offensive ⚠️ Yeah, you need to take the risk of offending people if you want to be a challenger brand. And offending people is easier than ever, sadly... There is a real difficulty in balancing being disruptive and being offensive. But if you know your customer well, you’ll know how to find that balance. Your tribe will know you're being disruptive, while others will find you offensive. But at the end of the day, it doesn’t matter if you potentially offend other people. What counts is your tribe. Occasionally you do cross the line when you’re trying to be a little bit different, because you’re trying stuff that hasn’t been done. There’s no template for it, so you’ve got to be brave and you’ve got to be up for a challenge. 5/ Put a smile on your customer’s face 😊 Ultimately, for Rory, all you have to do is try to put a smile on people’s faces. An example of this is how Gymbox tackled reopening after the COVID lockdown. Everybody was doing the same safety videos on how they’re gonna welcome you back to the gym.
Gymbox’s was a little bit more humorous. They had dancers in hazmat suits spraying around the gym. If you do things a bit differently, your tribe will recognize it and love you for it.
https://medium.com/@badis-khalfallah/what-is-a-challenger-brand-and-how-to-become-one-7c4317dc54e8
['Badis Khalfallah']
2021-06-08 14:26:36.176000+00:00
['Disruption', 'Branding', 'Brands', 'Marketing', 'Brand Strategy']
2 Tips for Creating Shareable Content for Influencers
One of my ‘favourite’ SEO strategies has always been “link baiting.” Essentially, this means writing an article that is eye-catching and entertaining enough that people who read it on your site will want to share it and post it on their own websites or on forums. This strategy was around before social media marketing existed in a big way, and it allowed a piece of writing to go viral before the days of Facebook or Twitter. Fast forward a few years, though, and I’d argue that many social network influencers could stand to learn a thing from the technique. The point is that too many people think that having a big network of contacts on a social media site is enough to ensure that their content will spread. The strategy for many of them, it seems, is simply to develop a large number of “followers” or “friends” and then to share everything they create and hope for the best. The problem is that not everything is well suited to that kind of viral sharing, and like the link-bait articles that SEO gurus use, a better strategy is to devise articles from the outset that will be more likely to get shared. Here we will look at how to do that… Write Articles With an Emotional Hook The first thing you need to do, if you hope for your article links to spread quickly, is to give them some kind of emotional resonance. If someone reads your article and comes away thinking ‘wow’ or even feeling angry, they will be more likely to share, comment, or interact with that content in some way. Write Articles With a Catchy Title For someone to share your article, they first need to read it, and the only weapon you have at your disposal as far as that’s concerned is the strength of your title. While you could write an entire article on that alone, the tenets you should follow are to make your title descriptive of your article so that people know what it’s going to be about, to make it engaging in some way (again, by making it emotional, or by asking a question or making a statement), and to use hyperbole (the ‘ultimate’ list, not ‘a very good’ list).
https://medium.com/@meysamm/2-tips-for-creating-shareable-content-for-influencers-35332fc394a6
['Meysam Moradpour']
2020-12-20 03:17:15.383000+00:00
['Social Media Marketing', 'Influencer Marketing Tips', 'Influencers', 'Influencer Marketing']
What would it take to provide the world with free food?
Big Ideas | Free food | The Impact Billionaires TLDR; Closed-loop, artificially intelligent farms that run on solar power with advanced robotics could provide climate-adaptive, high-yield, sustainable growth practices with a short supply chain, resulting in free food. Food “Waste” Worldwide, humans waste one of every three food calories produced. These wasted calories are enough to feed three billion people — 10 times the population of the United States, more than twice that of China, and more than three times the total number of malnourished people globally, according to the Food & Agriculture Organization of the United Nations. Food is lost or wasted for a variety of reasons: bad weather, processing problems, overproduction, and unstable markets cause food loss long before it arrives in a grocery store, while overbuying, poor planning, and confusion over labels and safety contribute to food waste at stores and in homes. A paradigm shift for free food However, this is not going to be an article berating the fact that we, humans, are a virus for this planet, especially the entitled West, which operates in a system that is wasteful by design. No. I would like to present a paradigm shift. We only speak of waste because there is a finite resource being depleted and mismanaged. When’s the last time we started a war over breathable air? What if food could be truly abundantly available, just like air? Let’s break down what that would mean: “Abundantly available” means that we could get more food than we need, delivered at the exact time that we need it — for free. So in my view, that breaks the problem down into two challenges: sustainable, healthy, very-high-yield growth practices, and a short food supply chain — preferably with no human intervention, in order to keep the system limited to a one-time investment. Seems do-able enough, no? No? Yes — enter exponential technologies. Exponential Technologies A high-level sketch for sustainable, healthy, very-high-yield growth practices Let’s look at three technologies in particular: Artificial Intelligence, Solar & Advanced Robotics — plus drones for the delivery. Traditionally, farms have needed many workers — mostly seasonal — to produce and harvest crops. However, fewer people are entering the farming profession due to the physical labor and high turnover rate of the job. Furthermore, most agricultural efforts use highly mobile migrant labor, which presents challenges for a stable and predictable workforce. AI in combination with advanced robotics solves critical farm labor challenges by augmenting or removing work and reducing the need for large numbers of workers. Agricultural AI bots are harvesting crops at a higher volume and faster pace than human laborers, more accurately identifying and eliminating weeds, and reducing cost and risk, for example by predicting the impact climate events will have and determining effective damage-mitigation strategies. On top of that, AI farmers present a permanent solution to the unpredictable and fluctuating agricultural workforce. The current crux of smart farming is using a blended workforce of digital help alongside traditional farmers and tools. Land O’Lakes deployed a line of smart tractors that use data insights to remotely plant seeds in the most optimized way. Predictive analytics data is being remotely applied to inform not only the farm, but the machinery.
After the seeds are planted, IoT devices continue to monitor growth, weeds, soil and water retention, and other factors, which in turn inform next year’s crop. Instead of relying on human measurements and labor, automated feeding and irrigation systems ensure the crops have the proper nutrients. If we extrapolate those current capabilities and make it so that eventually we would not need any human intervention anymore — accelerating the self-learning AI capabilities and making sure that the farms can be completely self-sufficient on solar power — we get a big step closer to free food for everyone. For free? Here’s why these technologies are called exponential technologies: all four of them are seeing an exponential price-performance increase (or price drop, depending on how you look at it) over time. What does an exponential price-performance increase mean? If you look at the table above, you can see that for a performance X, the price drops every Y amount of time. Or the performance X doubles for the same price every Y amount of time. For us, this means that at some point in time, there is going to be a combination of our three technologies that is performant enough, at a price that will make the initial investment for an AI farm negligible.
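As a toy illustration of that claim (my own sketch, with made-up numbers rather than figures from the article): if the cost of a fixed level of performance halves every few years, a simple compounding loop shows how quickly even a large up-front farm investment shrinks toward negligible.

```python
# Toy model of an exponential price-performance curve: the cost of a
# fixed level of performance halves once per halving period.
def years_until_negligible(initial_cost, halving_years, threshold):
    years, cost = 0, initial_cost
    while cost > threshold:
        cost /= 2            # one halving of price for the same performance
        years += halving_years
    return years

# Hypothetical numbers: a $10M closed-loop AI farm build-out, with costs
# halving every 3 years, counts as "negligible" below $100K.
print(years_until_negligible(10_000_000, 3, 100_000))  # -> 21 (years)
```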
What about delivery? Drones are on exponential price-performance curves too! Can you imagine a continuous stream of autonomous drones busy delivering food all over the community without one human lifting a single finger… The Vision As soon as the price/performance mix is at a point where that initial investment is negligible and we succeed in designing a closed-loop system, meaning there is no human intervention needed for any step from seeding to harvest to transport, the gates are open for practically free, abundant, and sustainable food production. Of course, there are many ifs, ands & buts. What about the fact that current regulations would block this? What about the fact that it would derail an entire industry? Where would you implement this first? Those ifs, ands, and buts, I invite you to flesh out together with me along the way. But let me end this short little blog post with why I think this is worth exploring, and how we could re-frame a lot of those issues by asking a different kind of question: What kind of society do we want? Do we want a people-serve-society kind of society, or do we want a society-serves-people kind of society? The former is what we currently have — driven by our mainly single-value capitalistic system. Depending on who you ask, it works. The latter would mean we start looking at society more as an operating system that serves each and every one of us with a very basic set of “included” features. Here are the included features that I would propose: let’s create a societal baseline by automating, with exponential technologies, all of our biological & physiological needs, starting with food. Then we can work our way up Maslow’s pyramid in the feature set that our societal OS provides. Many posts will follow on this topic, I am sure, but I wanted to give you the bigger idea here already. What could be the impact of this? Let’s say the experimentation I’m doing with a FarmBot in my own back garden, together with what Alphabet, YesHealth Group, 80 Acres Farms, and many others are doing, results in a model for such a farm. A great spot to get started implementing those free farms would be sub-Saharan Africa, as this is currently the region where world poverty is most concentrated (read a previous article I wrote about this). If you’re already yelling — “But Tom! Agriculture is still the backbone of most of the economies there! You can’t just put all these farmers out of business.” — then I would say you’re right, but this is an observation within the people-serve-society paradigm. What if food is an included operating-system feature? How many would still need to go to work “just to put food on the table”? Nobody would need to, as food would be provided freely. This would profoundly transform the current concept of our economy. As we make these basic necessities included features in the operating system of our society, everybody would have the time & energy to pursue whatever they desire. Want to focus on being the best parent you can be for the next 5 years? Got the drive to get educated in whichever field you want? Well, now you’ve got the time, as you don’t have to go to work to “put food on the table” anymore. Want to start thinking about solving other problems, like what is going to give people purpose if all of a sudden they get all this room to self-actualise? Well, you would be able to do that as well. Final words If this little blog post raised more questions than it answered for you, good! It does so for me too. I am on a journey, however, of answering as many as I can — if this is a journey you’d like to join, you are most welcome to do so by subscribing to the podcast, joining our Mighty Network, or signing up for our newsletter, where you’ll get little thought-provoking articles like this & a bi-weekly podcast episode shot straight to your e-mail — or follow us on Instagram, Facebook or LinkedIn! What does it take to impact a billion+ people's lives?
https://medium.com/@humain.ai/what-would-it-take-to-provide-the-world-with-free-food-f7604f032812
['Humain.Ai', 'Exponential Insights']
2020-12-23 05:53:08.063000+00:00
['Robotics', 'Solar', 'Free Food', 'Drones', 'AI']
Your Omnipotent Presence
The first time we met, it was a blaze of glory. I remember shed tears falling on my knees and my heart fluttering, filled with your omnipotent presence. Sometimes I forget, but you are unfailingly there.
https://medium.com/spiritual-secrets/your-omnipotent-presence-d660f7050508
['Ivette Cruz']
2020-12-21 18:32:27.033000+00:00
['Spirituality', 'Life Lessons', 'Motivation', 'Poetry', 'Spiritual Secrets']
Difference of Programming Languages: PHP Vs Ruby on Rails
In this article, we attempted to include an objective analysis of Ruby on Rails and PHP and the advantages and risks of both. We'd like to point out that Ruby on Rails is the most common platform for the Ruby programming language, which is often used for web development projects, and PHP is a popular web development programming language.

Project Development Speed

Projects developed in Ruby on Rails are built even faster than in PHP, as both we and our colleagues have discovered. This is due to the technological features of the RoR system architecture (for example, thoughtful conventions simplify configuration), as well as a comprehensive collection of ready-to-use native Ruby on Rails software, a large community of ready-made solutions (so-called "gems"), and the ease of programming in it. As a result, thousands of ready-made applications for different program module implementations are currently available for free in the public domain. Authentication, authorization, posting systems, payment systems, mailing lists, amongst other features (many of which would otherwise be built from the ground up), have all been implemented by other teams, reviewed, and recommended by a large community. The client's money is spent on development time; the longer development lasts, the more costly it is.

Development cost

The number of Ruby on Rails developers is much smaller than the number of PHP developers. This is due to the technology's different entry point, which implies a more consistent level of developer skill. Among all developers, only a limited number are truly strong, and developers with a high level of expertise are very costly. Furthermore, they cost the same regardless of programming language or platform. Since there are considerably fewer poor developers in the Rails culture, there are fewer cheap developers. Due to the availability of ready-made implementations and a more accessible language framework, the cost of web development in Ruby on Rails can be substantially lower. If you just need to build a little blog, PHP will suffice, but Ruby on Rails is the way to go for more serious ventures.

Popularity

The number of pages published in a specific programming language is one of the most common criteria for success. There are more than 40M websites created with PHP, whereas the number of websites built with Ruby on Rails is around 600K. The key explanation for PHP's success is that a large number of small websites are built using the popular CMS WordPress, which is written in PHP. At the same time, PHP's popularity is falling significantly compared to Ruby on Rails.
https://medium.com/@techneossolution/difference-of-programming-languages-php-vs-ruby-on-rails-c2ae03cf56e1
['Techneos Solution']
2021-05-08 09:46:05.915000+00:00
['Ruby On Rails Development', 'PHP', 'Framework', 'Programming']
Build Trust in Your Relationships
I used to broker freight for a mobile tool show. We had shows that moved daily, twice a week, or weekly, depending on the market they were in. They would set up the show, conduct the sale, then pack up and move to the next town. My job was to arrange the transportation to move the show to the next location. I built a relationship with two brokers over time. When we agreed on terms, they would start the process with their driver. Many times we had a verbal agreement, and we trusted each other. I would follow up with a signed order later. We built this over time, following up our promises with actions. You have to work hard to earn someone's trust, and even harder to keep it.
https://medium.com/@patricialrosatr/build-trust-in-your-relationships-4c53992c82b5
['Patricia Rosa']
2020-12-11 00:34:35.674000+00:00
['Ethics', 'Productivity', 'Relationships', 'Work', 'Trust']
AS WE BEGIN TO BUILD TOGETHER FOR SUSTAINABILITY...
AS WE BEGIN TO BUILD TOGETHER FOR SUSTAINABILITY...

Dearest IFUDSITES,

My every breath sings gratitude to God and to you all. We are stronger together. Let me congratulate everyone — the contestants, who all have shown great attitude throughout the period of campaigning; the Electoral Committee, for organizing a free and fair election; and again, you, our electorate, the most valuable players. You have shown with your vote the direction you want our dear association to move in the next few months. Tonight, IFUDSA won. And that's the most important thing. We have held an election that we are proud of. We have shown by the power of our votes that we and only we are the shapers of our destiny.

Dearest IFUDSITES, tonight, I want to say to you, we have more work to do. More work to do to enhance our professional training via ceaseless human development and organisational capacity building; more work to do to reignite our connections with our alumni; and we need to improve our research skills while still undergraduates. These areas will enhance our dental training and our suitability for work in this age and century. Having said that, dear gentle reader, your welfare, our welfare, comes before anything. It is the number one thing. Our academic welfare, our mental health, our overall health, our overall welfare is the most important thing as a people. We will empathetically work together and be strong together. Our unity is bigger than numbers. And when we believe that, the possibilities are endless. Thank you for choosing me as your President!

Make no mistake: too many dreams have been deferred for too long. We now need to believe in the possibility of doing things differently and build a sustainable, modern, 21st-century organization. We have the opportunity to defeat despair and to build an association of prosperity, purpose and vision. I believe, and I want you to believe with me. I want to work with you for sustainability. And while sustainability isn't built in a day, starting small and having a big VISION is part of what we need right now in 2021, in the 21st century.

IFUDSITES, again, thank you for choosing me as your President. In the coming weeks, calls will be made into our functional departments and committees. Please, join at least one. IFUDSA needs your talents and abilities. IFUDSA needs what you can do. Let's do this thing together. Again, I extend hearty congratulations to my fellow Executives-elect. I cannot wait to start working with you.

Sincerely,
Oluwatosin Joe Ajibade
President-elect, IFUDSA
https://medium.com/@joeajibade/as-we-begin-to-build-together-for-sustainability-caa1a82b2a0c
['Joe Ajibade']
2021-11-15 19:23:16.194000+00:00
['Oau', 'Dentistry', 'Ifudsa', 'President', 'Elections']
The best user experience isn’t on the web
When you think user experience or UX, you think about the web. Whether it's mobile, desktop, or tablet, the web is considered the home of all great UX. But we're wrong. And in my opinion, the best user experience is, wait for it… …in the car. What? The car? I know what you're thinking, "My car's dashboard is terrible, the interface experience is outdated!" And you're right. But I'm getting more specific. Enter Apple CarPlay. (Sorry, Android users)

The Problem

Before we get into CarPlay, let's talk about the problem it solves. Car dashboards & interfaces are beyond outdated. They are brutal. Managing the radio, personal media, & navigation is confusing. You don't know whether to use the buttons on the screen, or the physical buttons right below it. There are so many things wrong with the standard car interface. And to make it worse, it is different in every car. Even in cars within the same brand. This is a standard car interface. Nothing stands out, everything is cluttered. You don't know whether to use the touch screen or the physical buttons. Some buttons lack signifiers, and some signifiers are too vague anyway. There is no continuity, no similarity. The interface challenges your presumptions and makes you think to make decisions that should be simple. This created a massive opportunity for someone to change something. And to no one's surprise, Apple jumped in.

Why Apple?

Apple didn't create CarPlay to change cars. They created CarPlay to give their users a better experience wherever they are and however they can. They realized that people using an iPhone to play music and podcasts in the car didn't have the best experience. So instead of ignoring that interaction, they made it as simple as using the iPhone. Apple doesn't want its users to have a negative experience using their product, regardless of the use-case. Apple also realized that people using iPhones while driving is a massive safety hazard. Not to mention, it has drawn tons of controversy. By making a simpler & safer interface built for the car, Apple is hoping to reduce distractions. This is why they felt it was necessary to build CarPlay. And as with anything Apple, odds are they do it right — here's how they did it.

The Beauty of CarPlay

Here is the standard interface you're greeted with on CarPlay. Instead of the ugly & unusable standard car interface, this looks great. The colors don't contrast too much, and because you're familiar with iOS, you know how things work. This is one of the most important CarPlay features, and definitely one of the main reasons why people love using it. You know how things work. You're coming from using the iPhone, & now you have an even simpler version on your car's dashboard. It's seamless. You can respond to messages with your voice with two touches. Siri reads your messages aloud to avoid distractions; it is built for safety. Apple designed this to be as simple and easy as possible. You can get almost anywhere, or do anything, within two touches. Apple realized that when you are in the car you should be focusing on driving, not changing the music or responding to messages. The first image of CarPlay houses the main dashboard. It has messages, phone, Spotify, Waze, or anything else that has a use case in the car. Instead of using your phone to change between media or entertainment, you can now do this with two touches & a swipe, barely taking your eyes off of the road.
https://medium.com/swlh/the-best-user-experience-isnt-on-the-web-dccfd4d78550
['Josh Nelson']
2020-02-05 21:51:46.375000+00:00
['UX', 'UX Design', 'UI', 'User Experience', 'Design']
The Neuroscientist, A Field Guide
Sub-species

The Electrophysiologist
The recorder of single neurons, lowerer of electrodes. Notable tribes include the patch-clampers, whose cruel initiation rites require attaching a micrometer-scale piece of glass to the body of a neuron ten times smaller than the thickness of a human hair. Some among this tribe, possibly insane, attempt the same in awake, moving animals. They are easily distinguished from other Neuroscientus by the rhythmic beating of their heads against the lab bench when losing yet another neuron after just three trials of behaviour are complete.

The Systems Neuroscientist
Distinguishable from the electrophysiologist by trying to record from many neurons at the same time, and relate them to something in the outside world. Comes in many tribes, from the sensory obsessives, the memorists and decisionists, to the motor wagglers, through others that identify themselves by blobs of the brain — the corticians, cerebellites, the basal gang, the hippocampites, the amygdalarians — to yet others that identify with the type of brain — the worm-wranglers, the fly-fanciers, the sea-sluggers. To the naive observer these tribes would all seem to be working towards the same goal, but vicious internecine wars frequently erupt within and between tribes. Insufferably smug since members of their subspecies were awarded a Nobel Prize in 2014 ("we're a real science now"), they are nonetheless handicapped by the crudeness of their tools, which has led to the Humphries Uncertainty Principle: they can either record precisely when neurons send spikes but not know exactly where they are, or know precisely where they are but not know exactly when they send spikes (they call these "multi-electrode recording" and "calcium imaging", respectively).

The Cognitive Neuroscientist
This subspecies is more intensely interested in us Sapiens than any other. We find it useful to distinguish a number of breeds, albeit with some overlap:

Neuroimager — often referred to by other subspecies as "a psychologist with a magnet", they adore rainbow colours. A deeply superstitious people, yet highly intelligent. In their writings frequent references are made to "Bonferroni", possibly a deity, and they will cross themselves and mutter curses to the heavens upon hearing the phrase "dead salmon". But the complex incantations necessary to turn the magnetic alignment of oxygen-depleted haemoglobin into a measurable signal of brain activity are testament to their ingenuity.

Scalp tickler — a "psychologist with a hairnet", readily identified by their distinctive calls. Anthropologists have transcribed some of these calls (lit. "P100", "N2"), but have yet to discern their meaning. Some authorities subdivide a further breed, the Magnetiser, a psychologist with a quantum thingy.

The Paddlers — believe that waving a magic wand near the scalp of a Sapiens will variously cause involuntary movement, enhance their mathematical ability, or let them experience the presence of god(s).

The Neuroanatomist
Was thought to be going extinct, until the Connectomics Explosion approximately 15YA rapidly diversified the gene pool, with many new, previously unrecorded phenotypes appearing.

Neural Engineer
Considers the brain to be a feedback loop. Fiddles with amplifier settings until the squealing stops.

Behavioural Neuroscientist
Attempts to infer the workings of the brain from watching a rodent press a lever. Talks almost entirely in capital letters (example transcript: "the US will become the CS, but leave the UR intact").
Neuroethologist
Attempts to infer the workings of the brain from watching an animal go about its daily routine. Barely on speaking terms with Behavioural Neuroscientists.

Neuroendocrinologist
Slightly hormonal. Obsessed with how everything sloshing around the brain that isn't a neurotransmitter affects brain cells. Believes a "raster plot" is a graph drawn by a Bob Marley fan.

Molecular neuroscientist
Applies the tools of close cousins, the molecular biologists, to the brain. Loves the stuff floating inside cells, especially proteins and complex chains of chemicals signalling to each other. Unable to distinguish the brain from the liver without assistance.

Neurogeneticist
Obsessed with the expression of bits of DNA and RNA in brain cells. Unable to distinguish the brain from yeast without assistance.

Clinical Neuroscientist
Haughty and proud, these interact the most with Homo sapiens, bringing their skills to treat the damaged and sick among that closely-related species. Males of this sub-species are thought to be born wearing a suit and tie.

Computational Neuroscientist
Often gaunt, frequently shunning daylight, these shy creatures also come in a number of distinctive breeds. Recent gene sequencing work on this sub-species has revealed evidence of lateral gene transfer from known species of Physicists, Mathematicians, and Computer Scientists. The consequent melange of languages spoken among members of this sub-species often collapses interactions into mutual bafflement.

Circuit modeller — Literal-minded to a fault, these build exact scale models of bits of brain. Often try to show them to their fellow Systems Neuroscientists working on the same bit of brain, but suffer frequent barbs and social rejection ("Suited for a more specialist journal").

The Algorithmics — Seek the Holy Grail of the step-by-step instructions by which the brain works. Once discovered that deep-lying neurons, which may or may not contain dopamine, send a signal that looks like the difference: [actual value of what happened] − [predicted value of what happened]. Have been banging on about it for the past 25 years.

Compartmentalist — Uses a thousand equations to describe a single neuron, none of whose parameters are experimentally determined. Often found weeping silently into a drink at conference bars. Worship at the Church of Rall.

The Bayesians — Have discovered a hammer, and are determined to use it on absolutely bloody anything that's not already nailed down.

Karl Friston
Unclassifiable.
https://medium.com/the-spike/the-neuroscientist-a-field-guide-ac15bb47372f
['Mark Humphries']
2020-05-04 12:37:55.071000+00:00
['Humor', 'Neuroscience', 'Artificial Intelligence', 'Psychology', 'Science']
Embracing Remote Team Building Through Art
I lead Design at Apartment List, which includes user research, product design, and marketing design. Every quarter I prioritize a team offsite, but given Covid and our transition to remote work, I wanted to re-imagine things. Before jumping into the details of an offsite and a productive agenda, I sat down with my team and we took a step back. Given our focus on the humans that use our product, we started with a human-centric question: who are the people on our team, and what are the people problems we are trying to solve? (And yes, this mirrors our product development process ;-))

People problem 👩‍💻

Unsurprisingly, we started with some user stories. As a member of the design team:

1. …There isn't time to pull away from my day-to-day work …and expand my mind and creative thinking …and bond with my teammates, especially while fully remote during Covid …and do work towards a meaningful cause

2. …I feel strongly about contributing to diversity, equity and inclusion

Apartment List Design Team

The Solution — Art Day! 🎨

Two themes stuck out to me: first, we wanted to create something, and feel untethered from our daily constraints. Second, we wanted to contribute to something meaningful, which may not be related to our immediate scope of work. As a result, we decided to create some art, and later hold a virtual auction, with proceeds going to a good cause. We wanted to be inclusive for art day, so we accepted art in any medium (paint, mixed media, photography, video, writing, etc.). We shared a common theme for inspiration: each of our identities — whether our culture, background, or personal attributes, we wanted to celebrate our similarities and differences. To remind ourselves of this focus, we decided to donate all proceeds from the auction to the Black Artists + Designers Guild. Making things even better, Apartment List offered to match our contributions, amplifying the potential impact of our art!

The Art Day agenda ✍️

As great as this sounded, we wanted to make it even better with some bonding activities. So we baked in some inclusive activities, like morning cooking and afternoon cocktails (or mocktails), where we could socialize and share our work. Here is what the agenda looked like:

8:30a — 9:15a — Breakfast with Daniel 🍳 — Daniel kicked off the day teaching us to cook, and allowing us to break bread and socialize! We made an Israeli dish called Shakshuka, with a cross between the Bon Appétit and NYT recipes. We simplified it to save time!

Shakshuka!

9:15a — 9:45a — Virtual art day kick-off! ✨ We spent the next 30 minutes sharing our plans with each other, discussing details, getting feedback, and generally just bouncing ideas off each other.

9:45a — 12:30p — Individual creative time 🎨 — heads-down time to create. People could remain dialed into our Zoom room for company, or disconnect and focus.

12:30p — 1:30p — Team Lunch 🍱 — We all rejoined the Zoom meeting link, and Apartment List paid for takeout for everyone (thanks A-List!).

1:30p — 4:45p — Individual Creative Time 🎨 — more time for everyone to go heads-down and create. And again, people could join the Zoom room, or not!

4:45p — 5:30p — Cocktails/Mocktails with Jordan + show and tell 🍹 — Jordan helped us end the day with a lesson on making a specific cocktail/mocktail. Jordan called his recipe "Is it Friday yet?" and produced a delicious "quarintini," which resembled a classic margarita. While we enjoyed our beverages, we all shared our progress, and in some cases the final result.

"Is it Friday yet? Quarintini"

🎨 Virtual Art Show

A few weeks later, we held a virtual art exhibit during lunch time. The entire company was invited, and we held a silent auction for people to bid on our art. Since there was so much enthusiasm, we even included art from A-List employees outside of the design team! Here are some of our final pieces, and a video from the art show:

Impact through art 🎨

I consider this event a major success (or a win-win-win-win). Our team got to bond and create art, Apartment List got to celebrate our creativity, and we donated to a cause we care about. And with Apartment List's matches, we raised $6,442 for the Black Artists + Designers Guild. Want to join in on the next one? We're hiring!
https://medium.com/apartment-list/how-apartment-list-embraces-remote-team-building-through-art-6bb5f11c86a6
['Debbie Sorkin']
2020-11-20 21:27:32.798000+00:00
['Team Building', 'Design', 'Design Teams', 'Remote Working', 'Company']
Mobility budget: what do you need to know?
Click on the link to see the full article: Mobility budget: what do you need to know?

That's it! The Mobility Budget finally came into effect on March 1, 2019! You've probably heard about it a lot, but you do not really know what this budget entails or how it can be used? We have decided to compile for you the most important information about this Mobility Budget!

The mobility budget in a few words
Eligibility criteria
How does it work?

The mobility budget in a few words

First things to remember about the Mobility Budget: it is voluntary! This means that there is no obligation to use this budget, neither for the employer nor for the worker. Moreover, this budget only concerns Belgium (for the moment!). Secondly, the Mobility Budget is calculated according to the principle of "total cost of ownership". This cost represents the annual gross cost to the employer of the company car to which the worker is entitled, including tax and parafiscal charges, and "ancillary" expenses (financing costs, annual depreciation, fuel, insurance, contributions, etc.). But who can claim this mobility budget? As you will have gathered, this budget is only for people owning a company car or entitled to have one, but let's look at the eligibility criteria in detail!

Eligibility criteria

These eligibility criteria are as follows:

> One condition for the employer: he must already have made one or more company cars available to workers for an uninterrupted period of 36 months (exception for employers active for less than 36 months, who must make one or several company cars available at the time the budget is introduced).

> Two conditions for the worker:

A. He must have had a company car (in the last 36 months), or have been eligible for a company car (for at least 12 months)

B. He must have a company car, or be eligible for a company car (for at least 3 months without interruption) at the time of application. (Exceptions: hiring of a worker, promotion or change of function before the budget comes into effect)

How does it work?

Many people know that a Mobility Budget has been created and made available to businesses, but no one knows exactly how it will be applied and distributed. We have created a small infographic combining the 3 pillars of this Mobility Budget. Go further with our guide!
https://medium.com/@bepark-en/mobility-budget-what-do-you-need-to-know-9885dda9b8d7
[]
2020-08-11 13:16:05.954000+00:00
['Mobility', 'Budget']
Serverless Monitoring Is No Longer “Finding a Needle in a Haystack”
At Human AI Labs (hu.man.ai), formerly Luther.ai, we use the AWS serverless stack and Kubernetes for all the core real-time pipelines, with data-driven execution across all the AWS services — ECS, Lambda, SQS, Fargate, etc. With hundreds of services and thousands of invocations, every day brings complexity in configuration, execution monitoring, log review, latency measurement, etc. For configuration and CI/CD, we use the Serverless Framework for packaging and deploying Lambda functions. We tried multiple monitoring solutions, including just leveraging AWS native options; however, the scale brings various issues like:

- Multiple programming languages are used for AWS Lambda development.
- Containers / tasks used in ECS with EC2 and ECS with Fargate.
- External service access (outbound API calls) review, including unauthorized calls.
- Persistent storage access review and latency measurements.
- Unified access to execution logs, searchable by full text and time.
- Contextualization of the service-based view.
- Proactive notification to the places where we work — Slack, PagerDuty, etc.
- History of the events, searchable, etc.

Along with the specifics above, the development team wants to focus on the serverless functions rather than increase their monitoring footprint, which causes lots of worry for on-call DevOps engineers. After lots of review of various services to solve the issues listed above, Epsagon is the solution we implemented. Here is the journey of how it saved hours and hours for our serverless implementation. Let us break the journey into installation, monitoring, latency measurements, and notifications.

Installation / Onboarding: If you, as a reader, do AWS serverless development and are not using Lambda layers, you are missing a core feature that will help a lot. Auto-tracing for your AWS region's Lambda functions is enabled with a simple workflow leveraging Lambda layers, thus solving the development dependencies of multiple programming languages to enable monitoring, including custom logic per serverless function.

Monitoring: Once you have auto-tracing enabled, proactive monitoring will help you with alerts and notifications. We use native Slack and email integration to receive notifications. Each notification has a contextual link to the alert, where the AWS CloudWatch log and the start time of the Lambda execution, along with the service map of all the services used (external API calls, AWS services inbound and outbound), are available as a quick action — a handy link to AWS. If, like me, you are interested in patterns, you can use the historical view (available for the last 7 days) of all the issues to understand scenarios like whether the last deployment, a specific user action, or scalability caused them — I have an interesting issue which we uncovered using historical patterns, but that's for a future blog, not now.

Epsagon — Tabular view of all the lambda functions

Latency measurement: With hundreds of services and thousands of executions every day, even a couple of milliseconds of execution time added to one service in the real-time pipeline can result in a bad user experience. With Epsagon, it is efficient to isolate latency issues in multiple facets. As in the picture above, a unified view across all the functions is available with the average execution duration — which is a great start. For each Lambda execution, a contextual service map is available with the time duration of the service calls.
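All of this assumes each function is instrumented; for functions outside the layer-based auto-tracing, a minimal sketch of the manual route with Epsagon's Node.js agent (assuming its documented init/lambdaWrapper API; the token and app name below are placeholders) looks roughly like this:

```typescript
// handler.ts — manual instrumentation with the Epsagon Node.js agent.
// If the package ships without TypeScript types, a small `declare module 'epsagon'`
// shim may be needed.
import * as epsagon from 'epsagon';

epsagon.init({
  token: 'EPSAGON-ACCOUNT-TOKEN', // placeholder: your account token
  appName: 'realtime-pipeline',   // hypothetical service name
  metadataOnly: false,            // send payloads too, not just metadata
});

// Wrap the existing handler so a trace is collected around each invocation.
export const handler = epsagon.lambdaWrapper(async (event: unknown) => {
  // ...existing business logic...
  return { statusCode: 200, body: 'ok' };
});
```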
With the Service Map view, we can review all the Lambda-dependent services as a single landscape, with the capability to go back in time and understand the subtleties.

Epsagon — Service Map View example

Notifications: With all the integrated features, we configured extensible alerts, of which we use the PagerDuty integration for core functions, both for high latency and for errors. Also, native integration with Jira helps to document a bug from the tool itself for each of the issues. The contextualization of the information captured in the bug is a key feature — no more worrying about whether you took the log capture, what time the issue occurred, etc. Did I say that we self-configured from start to finish in a weekend? Yes, we did, for both our dev and prod environments. Here are the resources that have proven handy:

AWS Workshop — https://epsagon.awsworkshop.io/
Epsagon Docs — https://docs.epsagon.com/
AWS Activate program — https://aws.amazon.com/activate/

To conclude, with serverless deployments, monitoring of workloads with hundreds of services and millions of invocations is no longer a "needle in a haystack."
https://medium.com/humanailabs/serverless-monitoring-is-no-longer-finding-a-needle-in-a-haystack-402a1b51a78b
[]
2021-02-24 03:58:08.212000+00:00
['AWS', 'Monitoring', 'Technology', 'Serverless', 'AI']
Being Self-Made: Unpacking the 21st Century's Biggest Falsehood
The U-Turn

In today's society, there are few things that we hold in reverence more than a 'self-made' individual. Self-made entrepreneurs personify this so-called 'American Dream' and other dreams alike, as they fill up the social media feeds of the masses. For years this concept of a 'self-made individual' has fascinated me. Had you given my younger self the task of discerning the factors behind success, 'hard work' and 'the grind' would have been the first words that sprang to mind, and whilst I attributed a small part of success to luck, there was no doubt that my teenage self fell prey to this pernicious notion that hard work makes up the bulk of success in this world. So why the sudden U-turn? Why is it that I believe the role of luck in the success of those at the top of society is so trivialised? Let's take a look…

"If you wish to converse with me, first define your terms" — Voltaire

As always, before we can unpack the question at hand, we need to define our terms. The phrase 'self-made' refers to an individual who has become successful or rich by one's own efforts, actions, and decisions. By extension, the phrase is intertwined with a belief that there is little to no luck involved here. Even in those individuals who acknowledge that luck plays some form of role in success, this role is often underplayed as a feature not significant enough to capture the bandwidth of our attention.

A Renaissance LeBron James…

The year is 1484 and we find ourselves in the city-state of Florence, the epicentre of the cultural revitalisation that spread across Europe in a period that we call the Renaissance. Word is getting about outside a local cloth factory that the native painter Sandro Botticelli is embarking on a new piece in which he will paint the mythical goddess Venus being clothed in the flowers of the natural world. He will call this painting 'The Birth of Venus'. What inspired Botticelli to paint such a thing? How long will it take to finish? Where can we see it? These were all questions that reverberated throughout Florence at the time. If LeBron James had been born in such a time and place, ask yourself: would you have ever heard of the name LeBron James today? The fact that LeBron James lives in a society that values and loves basketball is hardly his doing. They didn't care much for basketball in the Renaissance (in fact, it hadn't even been invented). The way you made it to the top of society in the 15th century was through talent in the liberal arts. An individual rolling up cloth balls and shooting them into pottery vases from impressive distances may have caught the attention of the Florentine locals as they made their way to the local cathedral, but this individual's name and talent would have been lost in history, obscured from view by the Renaissance heavyweights like Botticelli and Michelangelo. The fact LeBron James was brought into being at a time in history where society loves basketball is not his doing — it is a matter of contingency and good luck. It is consequently a grave mistake to look past this huge slice of luck when judging the exercise of his talents.

The Concert of our Environment and Genetics

Looking at the big picture, so much more than we are often led to believe can, in the end, be ascribed to luck — from your genes and all they do to determine the individual you grow up to be, all the way to the totality of your environmental experiences and all they do, in concert with your genes, to determine who you are.
Ask yourself these very simple questions: did anyone author their genetic makeup? Are we in control of the society into which we are born? Are we in control of the parents we were born to? Going back to our previous example, it has to be an unassailable truth that LeBron James relied immensely on luck with regard to the physical attributes that allowed him to reach the pinnacle of success as a basketball player. Take the physical trait of height. Needless to say, as far as important attributes go in basketball, height is right up there. Molecular biologist Chao-Qiang Lai put forward the figure that up to 80 percent of the differences in height between individuals are determined by genetic factors, the other 20 percent being attributable to environmental factors — of which nutrition seems to be the most pertinent. Again ask yourself, how much does LeBron James have to thank good fortune for with respect to his genetics and nutritional status? I hope we can converge on the notion that he had absolutely no authoring role with respect to his genetics, and even if we examine the smaller role that environment plays in an individual's height — was LeBron James in control of the fact that he was born into a society, at a specific time in history, and to a family that had the means to acquire foods of high nutritional value? The answer has to be: not a chance. The advent of behavioural genetics over the last forty years has led to huge advancements in our understanding of the role our DNA plays in determining who we are, but it is a field that still flies under the radar for most people intellectually. Take the following quote from professor of behavioural genetics and psychology, Robert Plomin:

"Genetics, not lack of willpower, is the major reason why people differ in BMI. Success and failure, credit and blame, in overcoming problems should be calibrated relative to genetic strengths and weaknesses." — Professor Robert Plomin

The more we look through this lens of trying to detect evidence of good fortune all around us, the more we stumble across how rife luck truly is in determining the merits we ascribe to those individuals at the top of society. Now some resist this idea, seemingly at any intellectual cost. This single insight could be the antidote to the arrogance and contempt that is so often directed towards people in society who are less lucky than you are.

"No one made themselves — no one picked their parents, nobody picked their genetics, no one picked the environmental influences which sculpted their nervous system and has determined their every brain state up until this very moment, in concert with their genetics and so, if you are intelligent and are able to use your intelligence in a way that produces great wealth for yourself, well you got lucky. You won the intelligence lottery. If you have a great capacity for effort and the overcoming of frustration, well you won that lottery too" — Dr Sam Harris

Okay, so maybe luck plays a bigger role than I thought. Now what?

The reason why the dismissal and underplaying of luck is so dangerous is that those who land on top of this pile come to believe that their success is completely their own doing — a mere measure of their merit — and of course, by extension, that those individuals who fall short have no one to blame but themselves. Attitudes towards success and failure, towards winning and losing, become more hostile by the day, and as mentioned earlier, the antidote is an appreciation of the roles that accident and good fortune play.
We discussed humility in an earlier blog post concerning Socrates and his willingness to fill gaps in his knowledge, and it seems we require humility again to open us towards a greater sense of responsibility for those struggling beneath us in society. To the basketball and 'self-made individual' lovers I say this… At the core of this blog post was one singular aim — to ensure that we no longer play down the role luck plays in society. Let me be as nuanced in my case as possible; my message was not an attempt to say the masses should no longer revere LeBron James — the fact that he has a genetic soup that made him more predisposed to become a brilliant basketball player and was brought about in a time where basketball is so loved is futile without the commitment to act upon that predisposition. The wider effects of this notion will undoubtedly be discussed in years to come with regard to how we choose to shape our society, but for now, I ask you to look past what the Forbes magazines tell you and search for examples of chance events in your own life and the lives of those around you.
https://medium.com/@aryaanthonykamyab/being-self-made-unpacking-the-21st-centuries-biggest-falsehood-3ea2070d17d6
['Arya Anthony Kamyab']
2020-12-20 21:55:20.259000+00:00
['Self Improvement', 'Compassion', 'Advice', 'Life Lessons', 'Philosophy']
The Creativity Leap: Unleash Curiosity, Improvisation, and Intuition at Work
The Creativity Leap: Unleash Curiosity, Improvisation, and Intuition at Work

December book review

The Creativity Leap: Unleash Curiosity, Improvisation, and Intuition at Work, Natalie Nixon, 2020: Berrett-Koehler Publishers Inc

Consider this scenario. A kindergarten class is asked who would like to be an artist and most of the children put their hands up. A high school class is asked the same question and maybe ten per cent of the class put their hands up. Consider another scenario. You are discussing creativity with a group of people. The consensus is that creativity is a good thing. Do you say, self-deprecatingly, something like "I wish I was creative. I don't have a creative bone in my body"? Natalie Nixon examines the underlying assumptions and beliefs behind these scenarios. In her book, she posits that creativity is not something that some lucky people are born with but something that all humans have. Creativity is abundant in the early years (think about those kindergarteners who love to paint, tell stories, move their bodies) but is gradually killed off by the structures of our schools and workplaces. What if we were taught how to use and develop our creativity instead of relegating it to the realm of the arts? How much more fulfilling would our personal and professional lives be if we could unleash the creativity that is inside all of us? Nixon defines creativity as "the ability to toggle between wonder and rigor in order to solve problems and deliver novel value". She identifies three practices that anyone can implement to increase their creative competency: inquiry, improvisation, and intuition. Each of these practices is discussed in detail, with interviews with a wide variety of people: actors, perfumers, restaurateurs, farmers, and more. Now that we are living in the fourth industrial revolution (cloud technology, artificial intelligence, augmented reality, increased automation of jobs), creativity is more important than ever. Technology is intersecting with our lives in ways that it never has before. Nixon states that a 'love/hate' relationship with technology is not viable. Instead, we need creative thinking to ensure that technology is made human-centric and we have the capacity to solve new problems. The Creativity Leap is easy to read. The ideas are clearly laid out and explained with useful analogies and examples. You could read this in a day if you wanted to, but there is a lot to digest, so I suggest taking your time with the exercises. Each chapter ends with suggested creativity leap exercises for you and for your organisation. I liked the personal suggested exercise from Chapter 1, Create like your life depends on it: "Become a clumsy student at something". This was something I advocated for when teaching students with learning disabilities. I found that many of my colleagues, well-meaning or otherwise, had either forgotten what it was like to be a student in the classroom or did not have the imagination to put themselves in the shoes of a student with a learning disability. The best teachers are creative and encourage creativity. As we try to imagine a post-pandemic world, we need creative thinkers and doers who can envision how to improve society for all people. But that is not a job for a select, gifted few. It is something that we should all contribute to. So consider this: take a creativity leap and work on your creative competency, for yourself and your communities.
https://medium.com/the-innovation/the-creativity-leap-unleash-curiosity-improvisation-and-intuition-at-work-f445425e6373
['M Ainsley Blackman']
2020-12-03 01:01:03.474000+00:00
['Book Review', 'Work', 'Creativity', 'Books']
NestJS Access & Refresh Token JWT Auth
Getting Started

Let's start by installing Nest's first-party authentication packages, which will provide 90% of the logic for our implementation, as well as a few external modules for additional support.

$ npm install --save @nestjs/passport @nestjs/jwt passport passport-local passport-jwt bcrypt class-validator
$ npm install --save-dev @types/passport-local @types/passport-jwt @types/bcrypt @types/jsonwebtoken

Cool! @nestjs/passport and @nestjs/jwt are Nest's first-party packages, offering the basic functionality, while passport, passport-local and passport-jwt are the "standard" packages for authentication in Node. We'll also include bcrypt for hashing support and class-validator for request validation. We'll also of course include the types for these packages (assuming you're using TypeScript, which you should be!), as well as the types for jsonwebtoken, which is already included as a dependency of @nestjs/jwt.

With the dependencies installed, let's discuss the basic structure of our app. While implementing the authentication, we'll tie together two modules, which we'll refer to as the UsersModule and the AuthenticationModule, using Nest's module system. While we won't focus specifically on the database portion of this implementation, we will discuss a general structure that works well and stores the most essential data. The user data will be handled by our UsersService and UsersRepository, leaving the refresh tokens to be handled by the RefreshTokensRepository, all tied together by the TokensService logic. While the database implementation is less important and will work with whatever provider you choose — Sequelize, TypeORM, MongoDB, etc. — we'll still need to create two models/entities to support our authentication and user modules. Once again, for the sake of simplicity, we'll use Nest's first-party Sequelize package, but converting it to the package of your choosing should be fairly straightforward. Let's put it all together in tree form for an idea of what you'll end up with.

app/
|-- application.module.ts
|-- requests.ts
|-- modules/
    |-- users/
        |-- users.module.ts
        `-- users.service.ts
    |-- authentication/
        |-- authentication.module.ts
        |-- authentication.controller.ts
        |-- refresh-tokens.repository.ts
        |-- tokens.service.ts
        |-- jwt.guard.ts
        `-- jwt.strategy.ts
|-- models/
    |-- user.model.ts
    `-- refresh-token.model.ts

There are a few files that we haven't discussed, such as requests.ts and jwt.guard.ts, but their usage will be more apparent later on. Of course everything can be tailored to fit your app's structure, and you may even choose to create a separate module for tokens or create an additional folder or folders to further separate your repositories and services.

Users and Tokens

Now that we've laid out the structure of our app, we can get down to the important part — the logic. First, we should probably talk users, since they're at the core of authentication. We'll use the simplest form of a User model, one which contains two properties — a username and password — but can easily be modified to add any additional information that you may desire. The model definition is fairly straightforward, but you can read more about Sequelize's TypeScript implementation here. While we're here, we'll also create a RefreshToken model, which will allow us to store reusable refresh tokens for each user, while also supporting set expiration dates and revocability.
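To make this concrete, here is a minimal sketch of the two models using sequelize-typescript; the columns follow the description below, while the table names and decorator options are my own assumptions:

```typescript
import { Column, DataType, ForeignKey, Model, Table } from 'sequelize-typescript';

// models/user.model.ts — the simplest possible user: a username and a password
@Table({ tableName: 'users' })
export class User extends Model {
  @Column(DataType.STRING)
  username: string;

  @Column(DataType.STRING)
  password: string; // always the bcrypt hash, never the plaintext
}

// models/refresh-token.model.ts — one row per issued refresh token
@Table({ tableName: 'refresh_tokens' })
export class RefreshToken extends Model {
  @ForeignKey(() => User)
  @Column(DataType.INTEGER)
  user_id: number; // the owning user

  @Column(DataType.BOOLEAN)
  is_revoked: boolean; // lets us revoke a token immediately

  @Column(DataType.DATE)
  expires: Date; // mirrors the expiry embedded in the JWT itself
}
```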
The user_id column of course refers to the owning user, is_revoked offers the ability to immediately revoke a token, and expires provides a timestamp for automatic revocation. Technically, we don't necessarily need to include an expires field, because we'll embed the expiration date in the refresh token, but storing it in the database allows us to optionally purge expired tokens in the future. Models are done, but we'll need a way to store and retrieve their data. Let's create a couple of repository classes that provide reusable functionality for this. If we think ahead, we'll need a few methods — one to find a user by their ID for generating refresh tokens, one to find them by their username at login, and a final method to create a new user for registration. As mentioned earlier, we'll refer to this class as the UsersRepository, with an implementation similar to the following. If this is your first look at a class making use of Sequelize models in NestJS, make sure to check out their reference guide here, especially in reference to the constructor injections. Both findForId(id) and create(username, password) are self-explanatory, while findByUsername(username) might be slightly more complex because of the interpolated Sequelize functions that add support for case-insensitive username logins. Thankfully, each of these built-in functions is described in the Sequelize documentation for quick reference. Before we tie everything together in the services and controllers, let's add one last repository for our refresh tokens. We'll need the ability to create a token, as well as find one by its ID. Again, fairly straightforward. We'll create a new RefreshToken by associating it with a User and setting the expiration date based on the ttl parameter, which specifies the number of seconds until the token expires. As a side note — in the first repository, we used Sequelize's repository pattern, but used the static access methods for the refresh tokens. The latter allows for directly accessing methods like find and findOne from the model class, while the repository pattern allows for a better separation of concerns — but either method is equally correct.

Service Layer

The database logic is done. We've created methods for creating and retrieving new users and their refresh tokens, but we'll need one last layer between our repositories and API controllers — our services, which will provide the business logic connecting the two. First up, we'll want to create reusable logic for generating access and refresh tokens, which is where our TokensService will come into play. Let's take a more in-depth look at what we're implementing. First of all, we'll set up our class by injecting the RefreshTokensRepository class that we created earlier. We'll also inject Nest's built-in JwtService class, which provides wrapping functions around jsonwebtoken for signing and decoding JWTs. We'll also set up a constant variable that declares the claims that will be shared across all tokens we generate — refresh and access alike. Although they're not strictly necessary, they can provide additional validation for your token. All potential claims are documented in the JWT RFC. Onto the methods. Our first method, generateAccessToken, is the simpler of the two. Given a User, we can ask our injected JwtService to sign an access token with our BASE_OPTIONS claims, as well as an additional subject claim (sub for short), which will identify the user for which the token was generated.
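As a sketch, the signing half of the service could look roughly like this; the issuer/audience values are placeholders, and createRefreshToken is an assumed name for the repository method described above:

```typescript
import { Injectable } from '@nestjs/common';
import { JwtService, JwtSignOptions } from '@nestjs/jwt';
import { User } from '../../models/user.model';
import { RefreshTokensRepository } from './refresh-tokens.repository';

// Claims shared by every token we issue; the values here are placeholders.
const BASE_OPTIONS: JwtSignOptions = {
  issuer: 'https://my-app.example',
  audience: 'https://my-app.example',
};

@Injectable()
export class TokensService {
  constructor(
    private readonly tokens: RefreshTokensRepository,
    private readonly jwt: JwtService,
  ) {}

  // Short-lived access token; `subject` becomes the `sub` claim.
  async generateAccessToken(user: User): Promise<string> {
    const opts: JwtSignOptions = {
      ...BASE_OPTIONS,
      subject: String(user.id),
    };
    return this.jwt.signAsync({}, opts);
  }

  // Long-lived refresh token; `jwtid` becomes the `jti` claim, pointing
  // at the database row so the token can later be looked up and revoked.
  async generateRefreshToken(user: User, expiresIn: number): Promise<string> {
    const token = await this.tokens.createRefreshToken(user, expiresIn); // assumed repository method
    const opts: JwtSignOptions = {
      ...BASE_OPTIONS,
      expiresIn,
      subject: String(user.id),
      jwtid: String(token.id),
    };
    return this.jwt.signAsync({}, opts);
  }
}
```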
That subject claim will be embedded in the JWT, allowing us to later decode it and quickly retrieve the user from the database based on their ID. You might wonder if this is secure. Thankfully, the JWT payload is by design immutable. Since the payload is verified by a hash of its contents, a payload modified by a potential hacker or wrongdoer would be considered invalid by the server, which we'll see in a moment. We'll return the signed JWT as a string, which we later return to the client in the API controller. As for the remaining method — generateRefreshToken — we'll use very similar logic for signing our refresh tokens. Given that the method accepts an expiresIn parameter, we will once again pass this onto the signing options, allowing us to specify the expiration date of the refresh token. In addition, we need to include the jwtid claim (jti for short), which functions as its name suggests, embedding the ID of the token. In the exact same way that we'll be able to look up the User by the subject claim, we can just as easily pull the RefreshToken by the ID decoded from the claim later on. Assuming that those methods are in place, we can extend the class a little further. With the logic for initially generating access and refresh tokens ready, we're just missing a couple of methods for generating access tokens from the refresh tokens. Let's add some additional methods to the TokensService class. Five new methods, but they each implement small pieces of the larger puzzle, making them easy to decipher. Let's take a look at decodeRefreshToken first, which is passed a string, a.k.a. the encoded JWT. Once again, we'll use the built-in JwtService, but this time using verifyAsync, which will decode the JWT token and return its payload in object form. Speaking of which — we should define what we expect that payload to look like. We'll use a TypeScript interface, naming it RefreshTokenPayload, with two properties: jti and sub, which are the short forms of the full claims that we passed into signAsync earlier, jwtid and subject respectively. These two properties will return the exact same values that were embedded when the token was signed. You might also note that we intentionally capture the built-in TokenExpiredError from jsonwebtoken when decoding the token. As we mentioned earlier, we don't necessarily need to store the expiration date in the database because this logic will ensure the expiration date embedded in the JWT is valid. We'll also catch any additional errors and return them in a format that the controllers will understand. Two other methods, getUserFromRefreshTokenPayload and getStoredTokenFromRefreshTokenPayload, are fairly self-explanatory, and do almost exactly what you might assume from their names. Given a decoded RefreshTokenPayload that we defined earlier, the methods will simply extract the necessary field from the payload and find and return either the User or RefreshToken respectively. As with before, we'll throw exceptions here in case the fields are not present in the decoded payload. With the ability to decode refresh tokens and retrieve their associated token and user records from the database, we can bring this functionality together to create our resolveRefreshToken method, which will decode and return both the RefreshToken and User models from the database, assuming that the token is valid and passes all additional checks that we've put in place. Given the refresh token as a string, we will decode the payload and fetch the RefreshToken from the database.
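Sketched out under the same assumptions (the exception types and the lookup method names on the repository and users service are illustrative), those five methods might look like this:

```typescript
import { Injectable, UnprocessableEntityException } from '@nestjs/common';
import { JwtService } from '@nestjs/jwt';
import { TokenExpiredError } from 'jsonwebtoken';
import { User } from '../../models/user.model';
import { RefreshToken } from '../../models/refresh-token.model';
import { UsersService } from '../users/users.service';
import { RefreshTokensRepository } from './refresh-tokens.repository';

export interface RefreshTokenPayload {
  jti: number; // id of the stored RefreshToken row
  sub: number; // id of the owning User
}

@Injectable()
export class TokensService {
  constructor(
    private readonly tokens: RefreshTokensRepository,
    private readonly users: UsersService,
    private readonly jwt: JwtService,
  ) {}

  // generateAccessToken is shown in minimal form here (shared BASE_OPTIONS
  // claims omitted) so this class stands alone; see the signing sketch above.
  async generateAccessToken(user: User): Promise<string> {
    return this.jwt.signAsync({}, { subject: String(user.id) });
  }

  async createAccessTokenFromRefreshToken(refresh: string): Promise<{ token: string; user: User }> {
    const { user } = await this.resolveRefreshToken(refresh);
    const token = await this.generateAccessToken(user);
    return { user, token };
  }

  async resolveRefreshToken(encoded: string): Promise<{ user: User; token: RefreshToken }> {
    const payload = await this.decodeRefreshToken(encoded);

    const token = await this.getStoredTokenFromRefreshTokenPayload(payload);
    if (!token) {
      throw new UnprocessableEntityException('Refresh token not found');
    }
    if (token.is_revoked) {
      throw new UnprocessableEntityException('Refresh token revoked');
    }

    const user = await this.getUserFromRefreshTokenPayload(payload);
    if (!user) {
      throw new UnprocessableEntityException('Refresh token malformed');
    }

    return { user, token };
  }

  private async decodeRefreshToken(token: string): Promise<RefreshTokenPayload> {
    try {
      return await this.jwt.verifyAsync<RefreshTokenPayload>(token);
    } catch (e) {
      // Expired tokens get a friendlier error than generically malformed ones.
      if (e instanceof TokenExpiredError) {
        throw new UnprocessableEntityException('Refresh token expired');
      }
      throw new UnprocessableEntityException('Refresh token malformed');
    }
  }

  private async getUserFromRefreshTokenPayload(payload: RefreshTokenPayload): Promise<User> {
    if (!payload.sub) {
      throw new UnprocessableEntityException('Refresh token malformed');
    }
    return this.users.findForId(payload.sub); // assumed lookup method
  }

  private async getStoredTokenFromRefreshTokenPayload(payload: RefreshTokenPayload): Promise<RefreshToken | null> {
    if (!payload.jti) {
      throw new UnprocessableEntityException('Refresh token malformed');
    }
    return this.tokens.findTokenById(payload.jti); // assumed lookup method
  }
}
```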
Assuming that the token exists, and that its is_revoked field hasn't been toggled, we'll be able to retrieve the user from the database before returning both the user and token. In our final method, createAccessTokenFromRefreshToken, we can simply use the resolveRefreshToken method that we just implemented to retrieve the User from a refresh token, based on the sub claim, and then generate a new access token for that user. Tokens? Done. Users? Almost. While we're here, let's create a super simple UsersService to provide additional functionality to the UsersRepository that we created earlier. Personally, I like to use the service classes to handle the business logic, and repositories to interact with the database, but it's totally dependent on your own preferences. In this case, I'll use the service layer to handle the interaction between the request itself and the database. Before we can create this service, and more importantly before we can create our controllers, we'll need to define our HTTP request bodies, for which we'll make use of class-validator — which, as with many other features, Nest provides first-class support for. We'll set up our requests in our requests.ts file, but you can choose to split these into several files in any structure that you'd like. All three request classes are fairly self-explanatory: one provides login functionality, another registration, and the final one requires a refresh_token, which will allow us to generate a new access token. The requests are defined, so let's create our UsersService class. Each method is essentially a wrapper around existing logic, but makes implementing it less verbose and easily reusable. The validateCredentials method uses bcrypt's compare function to compare the user's hashed password stored in the database to the password that they attempted to use to log in. The last new method, createUserFromRequest, takes in a RegisterRequest object from the requests defined earlier, validates the uniqueness of the username, and then passes it and the password onto the UsersRepository to create a new user. The final two methods simply proxy the call into our UsersRepository, which prevents us from having to inject both the repository and service later on.

API Controller Routes

The business logic is done. That was exhausting. It's time to bring every component together to create the most essential piece — the routing. In this instance, we'll only use one controller — our AuthenticationController — providing methods to register, login and refresh our users and their tokens. We'll probably want to start with our register endpoint, since there's no point in trying to log in or refresh users that don't even exist! Let's set up our AuthenticationController by injecting our UsersService and TokensService from earlier, as well as implementing our register method. We'll also want to add Nest's @Controller() decorator, setting up the controller's base path at /api/auth, making our registration endpoint available at /api/auth/register as a POST request. As with the rest of the logic, we're keeping things simple. We're creating a user by passing the request body as a RegisterRequest containing our username and password attributes into our UsersService, which will handle creating our new user. We'll then take the newly created user and generate an access token, as well as a refresh token expiring in 30 days. We've also added a private helper method to the controller called buildResponsePayload, accepting the user, access token, and optionally a refresh token as parameters.
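Here is a minimal sketch of how the register route and the response helper might fit together; the validation rules on the DTO and the exact response shape are assumptions:

```typescript
import { Body, Controller, Post } from '@nestjs/common';
import { IsNotEmpty, MinLength } from 'class-validator';
import { User } from '../../models/user.model';
import { UsersService } from '../users/users.service';
import { TokensService } from './tokens.service';

// requests.ts — shape of the incoming body (rules here are illustrative)
export class RegisterRequest {
  @IsNotEmpty() username: string;
  @MinLength(8) password: string;
}

interface AuthenticationPayload {
  user: User;
  payload: {
    type: string;
    token: string;
    refresh_token?: string;
  };
}

@Controller('/api/auth')
export class AuthenticationController {
  constructor(
    private readonly users: UsersService,
    private readonly tokens: TokensService,
  ) {}

  @Post('/register')
  async register(@Body() body: RegisterRequest) {
    const user = await this.users.createUserFromRequest(body);

    const token = await this.tokens.generateAccessToken(user);
    const refresh = await this.tokens.generateRefreshToken(user, 60 * 60 * 24 * 30); // 30 days, in seconds

    return { status: 'success', data: this.buildResponsePayload(user, token, refresh) };
  }

  private buildResponsePayload(user: User, accessToken: string, refreshToken?: string): AuthenticationPayload {
    return {
      user,
      payload: {
        type: 'bearer',
        token: accessToken,
        ...(refreshToken ? { refresh_token: refreshToken } : {}),
      },
    };
  }
}
```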
We'll re-use this function multiple times to build a formatted payload to return as a response from each of our authentication endpoints. In doing so, we also defined our AuthenticationPayload interface, representing the exact structure that we expect to receive each time this method is called. Now that we can register users, let's add a method to log them in, available at /api/auth/login, also via POST request. The logic is nearly identical to the registration endpoint, but instead of creating the user, it will be resolved from the username and password. We'll only need to review the first few lines of the login method. Since we'll need to verify the password after retrieving the user, we'll first attempt to find the User record by their username, using the UsersService method created earlier. Assuming the user exists, we'll attempt to match the inputted password against the hashed password in the database, once again using the validateCredentials method from our UsersService class. If the user doesn't exist or the password doesn't match, we'll throw an UnauthorizedException to inform the user of an invalid username or password. Otherwise, we'll continue on to the exact same logic that we used to generate access and refresh tokens for a newly registered user. Almost done! Let's create our last endpoint to refresh users — /api/auth/refresh, also available as a POST request. This endpoint will accept a refresh_token in the body, which we will use to identify the user and generate the new access token.
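To round the controller off, here is a sketch of the login and refresh handlers under the same assumptions, with the DTOs and response helper repeated so the snippet stands alone:

```typescript
import { Body, Controller, Post, UnauthorizedException } from '@nestjs/common';
import { IsNotEmpty } from 'class-validator';
import { User } from '../../models/user.model';
import { UsersService } from '../users/users.service';
import { TokensService } from './tokens.service';

export class LoginRequest {
  @IsNotEmpty() username: string;
  @IsNotEmpty() password: string;
}

export class RefreshRequest {
  @IsNotEmpty() refresh_token: string;
}

@Controller('/api/auth')
export class AuthenticationController {
  constructor(
    private readonly users: UsersService,
    private readonly tokens: TokensService,
  ) {}

  @Post('/login')
  async login(@Body() body: LoginRequest) {
    const user = await this.users.findForUsername(body.username); // assumed lookup method
    const valid = user ? await this.users.validateCredentials(user, body.password) : false;

    if (!valid) {
      // Deliberately vague: don't reveal whether the username or the password failed.
      throw new UnauthorizedException('The login is invalid');
    }

    const token = await this.tokens.generateAccessToken(user);
    const refresh = await this.tokens.generateRefreshToken(user, 60 * 60 * 24 * 30);

    return { status: 'success', data: this.buildResponsePayload(user, token, refresh) };
  }

  @Post('/refresh')
  async refresh(@Body() body: RefreshRequest) {
    const { user, token } = await this.tokens.createAccessTokenFromRefreshToken(body.refresh_token);

    // No new refresh token here: the client keeps reusing its stored one until it expires.
    return { status: 'success', data: this.buildResponsePayload(user, token) };
  }

  private buildResponsePayload(user: User, accessToken: string, refreshToken?: string) {
    return {
      user,
      payload: {
        type: 'bearer',
        token: accessToken,
        ...(refreshToken ? { refresh_token: refreshToken } : {}),
      },
    };
  }
}
```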
https://javascript.plainenglish.io/nestjs-implementing-access-refresh-token-jwt-authentication-97a39e448007
['Jake Engel']
2020-11-16 16:35:46.087000+00:00
['Typescript', 'JavaScript', 'Authentication', 'Nestjs', 'Jwt']
Project Euler — Problem 46 Solution
Problem

It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square.

9 = 7 + 2×1²
15 = 7 + 2×2²
21 = 3 + 2×3²
25 = 7 + 2×3²
27 = 19 + 2×2²
33 = 31 + 2×1²

It turns out that the conjecture was false. What is the smallest odd composite that cannot be written as the sum of a prime and twice a square?

Solution

[code lang="fsharp"]
let hasDivisor(n) =
    let upperBound = int64(sqrt(double(n)))
    [2L..upperBound] |> Seq.exists (fun x -> n % x = 0L)

// need to consider negative values
let isPrime(n) = if n <= 1L then false else not(hasDivisor(n))

// generate the sequence of odd composite numbers
let oddCompositeNumbers =
    Seq.unfold (fun state -> Some(state, state+2L)) 9L
    |> Seq.filter (fun n -> not(isPrime n))

// generate the sequence of prime numbers
let primeNumbers =
    Seq.unfold (fun state -> Some(state, state+2L)) 1L
    |> Seq.filter isPrime

// function to check if a number can be written as the sum of a prime and twice a square
let isSum(number) =
    primeNumbers
    |> Seq.takeWhile (fun n -> n < number)
    |> Seq.exists (fun n -> sqrt(double((number-n)/2L)) % 1.0 = 0.0)

let answer =
    oddCompositeNumbers
    |> Seq.filter (fun n -> not(isSum(n)))
    |> Seq.head
[/code]

All pretty straightforward here; the only slightly confusing part of this solution is how to determine whether a number can be written as the sum of a prime and twice a square:

Odd Composite Number = Prime + 2 × n² => n = sqrt((Odd Composite Number - Prime) / 2)

As you know, Math.Sqrt works with a double and returns a double, hence to find out whether n above is a whole number I had to check whether it divides by 1 evenly.
https://medium.com/theburningmonk-com/project-euler-problem-46-solution-a27fa17948b0
['Yan Cui']
2017-07-03 20:56:46.055000+00:00
['Project Euler Solutions', 'Programming', 'Functional Programming']
The Knight in Shining Armor?
In American life, there are many instances where people defer responsibility for some task or action to some unspecified other person. There was an old, idiomatic expression that I used to hear as a small child: "Let George do it." I wondered who this "George" was. Finally, I asked my Mom, and she laughingly explained that "George" was simply a convenient label for someone who had to do something, usually a task requiring considerable diligent work. Most people, she said, would try to avoid doing that work if at all possible. "Let George do it" was their phrase.

As an adolescent, I became part of the anti-war, feminist, and environmental movements with which my parents were involved. I noticed that many members of these groups came up with excuses for why they "couldn't" do such-and-such work, and that work eventually was transferred to my Mom, who was an exceptionally organized, high-speed typist, and who could communicate with the best. Much of the groups' work involved contacting political, religious, and scientific leaders; organizing conferences or seminars; preparing campaign materials; and otherwise moving the groups' programs forward. Mom would express her frustration with the hypocrisy of the members who claimed to be in favor of women's rights, or against the war in Vietnam, or in favor of the environment, but who deferred responsibility for taking action to advance the groups' agendas.

One day, when she was particularly exhausted from an all-night session of typing out campaign materials, mimeographing them, and mailing them out, she said, "You know, 'George' can't do any more work. 'George's' back is broken, because everybody has been piling more and more work on it. They claim they're involved in the movement, but the only people doing any 'moving' around here are us." Part of that was fatigue, but it was, on a much deeper level, profoundly true.

On Saturdays and Sundays, I would be part of a petition-carrying group of youngsters getting signatures on anti-war petitions, or handing out notices of protest rallies or teach-ins or sit-ins. After the initial burst of enthusiasm, I noticed that my fellow advocates fell off the wagon pretty quickly. I also saw, especially in politics, that we were looking for a "knight in shining armor" to come save us: racism, an unjust war, environmental destruction, treating women as second-class citizens or worse; these were the awful "dragons" from which we had to be rescued. President John F. Kennedy; Dr. Martin Luther King, Jr.; Robert Kennedy; Sen. Eugene McCarthy; Rev. Jesse Jackson; Sen. George McGovern; Sen. John Edwards were just some of the "knights" we hoped would lead us to salvation. Today, we see Alexandria Ocasio-Cortez, Kamala Harris, Cory Booker, Elizabeth Warren, Bernie Sanders, and others "donning their armor" with the goal of saving us from the Mad Emperor, Donald Trump.

Unfortunately, the quixotic assumption that a knight in shining armor will liberate us from the blandishments of this would-be dictator to his base and, by extension, to the rest of us, is simply unfounded. We are responsible for (1) critical thinking, (2) determining the threats we face from Trump, and (3) organizing or participating in a group to fight those threats. In essence, each of us is the "knight in shining armor" who must save us from Trumpocracy and its related political illness, GOP-itis.
Without our commitment to doing the things that need to be done (contacting your Senators and Congresspeople, advocating for or against positions on social media, talking to family and friends about what's in need of change and urging them to support that momentum for change), all our complaining does is make us sound like old grouches. "Let George do it" is not the mantra for today. We all have to polish off our old armor, put it on, get on our horses, and gallop towards the battle. It is our determined effort to work together as a group that will prove to be our salvation. 2020 and the elections are around the corner and will be here before we know it. It's time to sharpen our swords and take action.
https://swatkinslaw.medium.com/the-knight-in-shining-armor-2f59a03e90f5
['Stephen P. Watkins']
2019-02-16 17:25:37.466000+00:00
['Politics', '2020 Presidential Race', 'Personal Responsibility']
The power of PostgreSQL with Leaflet and Nodejs/Express (Part 1)
Photo by Gwen Weustink on Unsplash

This article will show the guidelines that make it possible to create a web app with a PostgreSQL database and its spatial extension PostGIS, all combined with Node.js/Express for the back-end and Leaflet js on the front-end. I know it's a lot of technologies altogether, but I will try to make it simpler for you. This is part one, which will focus on the front-end.

Leaflet

Leaflet js is a mapping library that is used in JavaScript to manipulate maps on the web. The reason for the popularity of this library lies in the simplicity and clarity of its docs and tutorials. Moreover, what makes Leaflet so great is that it's completely open-source, and the plugins for this library just keep coming!!! Before moving on to the server-side, it's better to have something on the front-end to visualise and understand the application that we create, right? If you don't have any experience with web development, it's ok, just try to understand the idea of the app.

Building the front-end

After following the tutorials on Leaflet you will be able to make the front-end very quickly. Ok, so what we need is a map, a tool for drawing (points, lines or polygons) and three buttons. The first button is for adding the geometry drawn. When this button is clicked, a form appears so the user can add information and submit it to the server. Don't worry, we will talk about this in the third part of this series. The second one is for deleting the geometry drawn. Finally, the third button is for refreshing the map after the user has submitted a point.

The map and the buttons can be incorporated into the web page very quickly, but what about the drawing tools, do we have to code them ourselves? Of course not, we're talking about Leaflet here, come on!! In the initialisation of the map, we just have to add the map option called 'drawControl' and set it to true. Don't forget to add the CSS and the JS links of the drawing toolbar. Then, we want to initiate a variable to store the drawn items. Now, if we want to capture the data that is drawn, there is an event on Leaflet called 'draw:created' that can help us; we can add the captured geometry to the variable named 'drawnItems'. If you then draw something on the map and console.log the variable named 'drawnItems' on a click event, you will notice that you get an object. To be able to use it in the GeoJSON format, use the .toGeoJSON() method of Leaflet.

Here is the final result for the front-end. I added a form, so the user could submit the geometry with related information. As you notice, when adding a geometry feature and clicking on the 'Adding Geometry' button, this form appears and lets the user add related information. Also, I created three functions to deal with the geometry coordinates input and write them automatically, whether it's a point, line or polygon. I did so to be able to add the data into PostgreSQL. We will see this in detail in the second part. Finally, you could also add the 'Delete Features' button to delete the geometry drawn on the map before it's added to the database (hint: check the '.clearLayers()' method on Leaflet).

A little bonus before you go 😉 This is how you add a scale bar in metres on Leaflet: it's included at the end of the consolidated sketch below. That's it for the first part and I will see you in the second one:
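Since the embedded snippets from the original post don't survive here, below is a minimal consolidated sketch of the pieces described above. It assumes a <div id="map"> on the page, that the Leaflet.draw plugin's CSS and JS links are included, and that the tile URL and starting coordinates are just placeholder choices:

// Initialise the map with the built-in drawing toolbar enabled.
var map = L.map('map', { drawControl: true }).setView([36.75, 3.06], 13);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

// Layer group that stores whatever the user draws.
var drawnItems = new L.FeatureGroup();
map.addLayer(drawnItems);

// Capture each drawn geometry and keep it on the map.
map.on('draw:created', function (e) {
    drawnItems.addLayer(e.layer);
    console.log(e.layer.toGeoJSON()); // GeoJSON, ready to send to the server
});

// Bonus: a metric-only scale bar.
L.control.scale({ imperial: false }).addTo(map);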
https://medium.com/@tarekbagaa/the-power-of-postgresql-with-leaflet-and-nodejs-express-e5a2a1f94611
['Tarek Bagaa']
2020-11-04 19:13:47.632000+00:00
['Nodejs', 'Postgres', 'Web Development', 'Expressjs', 'Postgis']
We’re Dreamers of the Dream
Time slips away, sliding into yesterday, moments becoming memories, memories becoming dreams. We’re dreamers, of the dream of life, experiencers of the endlessly passing moment, and knowers of that which sometimes emerges inside us to be known. Like an inner flower, revealing itself, transiently in the dawn of some distant unknown place, we can briefly know, and become, the one conscious space where all knowing breathes. When we find the inexpressible silence inside all of us. Though we’re thrown into the experience of being human, and seemingly adrift, on our tiny and beautiful planet. We can find the inner space where all life breathes with one breath, in the ocean of our being. We can reconnect with the inexpressible inside us, and rediscover we vibrate, resonate and share all with the conscious Universe. And we can find the silence where all that is known and experienced becomes memories the inexpressible has of itself.
https://medium.com/loose-words/were-dreamers-of-the-dream-4875c57dddc3
['Paul Mulliner']
2020-07-31 04:28:12.518000+00:00
['Spirituality', 'Meditation', 'Mindfulness', 'Poetry', 'Life']
New Advisor Spotlight: Boris Reznikov
Boris Reznikov joins the Chynge team as our blockchain subject matter expert advisor. Boris is the Director of Partnerships at Stellar / Lightyear.io, where he runs the Asset Issuance business line. In this capacity Boris seeks to foster a network that is home to a diverse array of high quality assets by engaging with asset issuers and ecosystem players around the globe. To this role he brings a wealth of industry knowledge in the area of blockchain and distributed ledger technology and how it applies to global payment systems. Previously, he was Director of Corporate Governance Advisory at D.F. King & Co, and Senior Strategy Consultant with Deloitte Consulting, where he developed growth strategies for clients in a variety of industries. Boris received his B.A in International Economics from the University of California, San Diego, and earned his Master of Business Administration at Duke University. With his experience at Stellar and in the areas of blockchain, distributed systems, corporate governance, business development and strategy, we look forward to Boris’ guidance on our technology implementation on Stellar with Lightning Network for building out our global footprint. “I’m fortunate to have the opportunity to meet a wide variety of partners in the Stellar ecosystem. Chynge stands out with their outstanding management team and board of advisors who have cracked the code to process payments with no FX risk to customers. Their work with the UN Ambassador to Brunei around digitization of identity for refugees, distribution of humanitarian aid, and financial inclusion (especially for the ASEAN diaspora), in addition to seamless cross-border payments on Stellar, is why I’m excited to help the Chynge team continue moving forward.” We are excited to have Boris as an advisor. Chynge is now joined at the hip with Stellar and we will be creating novel financial solutions that will move financial inclusion up the ladder of economies. Boris, with the rest of the advisors, will generate the brainpower needed for Chynge to level up. Welcome to the Chynge team, Boris!
https://medium.com/chynge/new-advisor-spotlight-boris-reznikov-fec3ef6371d9
[]
2018-09-17 00:41:38.424000+00:00
['Fintech', 'Stellar', 'Blockchain', 'Advisor', 'Payments']
Riddikulus!
What is the point of private currency? Let's take a step back and consider a few things in general. Is Grin cool? Undoubtedly. It is one of the coolest, most technically interesting projects in the blockchain space, and it is sticking to its cypherpunk philosophy. Will it succeed? It entirely depends on what the definition of success is. If success means that Grin can be used as a basis for testing possible changes to the Bitcoin network, then it can certainly succeed. If it wants to become the coin of commerce, it will fail, just like Beam.

Both Grin and Beam agree on one thing: true privacy and scalability are incredibly important. But neither Grin nor Beam will ever be adopted on a meaningful scale, because no one will ever use a non-stable currency for commercial purposes. We already have Bitcoin if you want to invest in a crypto asset, or transfer large chunks of money around the world. But no one is using Bitcoin to buy a coffee, because that would be silly (and don't get me started on the Lightning Network, which is essentially a cool solution to a problem that no one was having).

What we need is a stablecoin: completely anonymous, private, stable, and globally decentralized. Over my years working in the crypto space, I have become convinced that this is the only thing that matters in cryptocurrency; the holy grail. A truly stateless, global currency.

Aside from the major technical problems of this, like every attempted type-3 (non-asset-backed) algorithmic stablecoin having failed so far, there are also fundamental structural difficulties, which are exemplified by both Grin and Beam. You can have cypherpunk, open-source, part-time development, but it will be incredibly slow and won't produce anything user-friendly anytime soon. Or you can have venture-capital-backed, efficient, organized development like Beam, and be completely shut down by regulators and anti-money laundering organizations who will see a decentralized private stablecoin as the ultimate money laundering tool. The only reason Beam hasn't gotten in trouble yet is because nobody is using it. Facebook Libra is already being shut down by major countries because of "KYC concerns", and Libra isn't even attempting to be private, so if you think Beam stands a chance, don't kid yourself.

And this is the paradox of the holy grail: you can't really do it subversively, because you lack the resources, and you can't do it corporately, because you'll go to jail. Don't be fooled by the current compatibilist talk that is floating about the blockchain industry. Cryptocurrency can be used only in two ways: to completely control and monitor everyone, or to completely privatize and hide everyone. There is no middle ground, as any such attempt will be swallowed by the authorities, or will decay into total control or censorship.

I'm not talking about other uses of blockchain here. Sure, you can have decentralized assets, real estate, supply chain management, document verification, smart contract gambling, an internet of value, etc. But that isn't about currency. Cryptocurrency can only go one of two ways: a private, decentralized stablecoin, or a currency that is either directly state-controlled (state-issued crypto), or is easily monitored and regulated (as Bitcoin is beginning to look). While we can try to make Bitcoin more resilient to censorship and analysis, it remains more of a commodity than a currency, and fluctuates too wildly to ever be the basis of the world economy.
Everything is heading towards less privacy, more control, and more surveillance, and we need to cherish and build the only technology that has even the slightest chance to go the other way, and protect the freedoms we are so willingly giving away ‘for free’.
https://medium.com/the-capital/riddikulus-465ee9d49a3d
['Victor Hogrefe']
2019-12-19 00:38:51.122000+00:00
['Mimblewimble', 'Cryptocurrency', 'Crypto', 'Bitcoin', 'Blockchain']
Dear Beautiful Sky
Dear beautiful sky, Fading out, In its twilight; Hanging on, To the last rays, Of sunshine. Dear beautiful sky; Enshrouding the sun, As it recedes, Into the clouds. Going dark, As the light, Drains out of your eyes. Dear beautiful sky; You are a sight. To admire, Day and night. A colourful display, Of stars and fire. Dear beautiful sky, I look up to you; Tonight. As elegant; As the first day, Of my life. For even if, The light goes out, On the horizon; I shall still see, The radiance, In your eyes.
https://medium.com/literary-impulse/dear-beautiful-sky-3d169a33c00f
['Fọlábòmí Àmọ Ó']
2020-06-26 08:49:06.196000+00:00
['Literary Impulse', 'Nature', 'Weather', 'Sky', 'Poetry']
Trade wars and rare earths
IMAGE: Peggy Greb, US Department of Agriculture (Public Domain)

The worst aspect of the US-China trade war, apart from the fact that there are never any winners in trade wars, is that nothing about it makes any sense. The conflict has been triggered by one of the most irresponsible politicians in history, with mounting instability: today I block you, tomorrow I postpone the measures for three months, the next day I say that Huawei is a threat to national security, and two days later I suggest it could be included in some kind of trade agreement. In all seriousness: if a company is a threat to national security, it cannot possibly be included in any trade agreement, and conversely, if a company can be included in a trade agreement, it's not a threat to national security. But as said, nothing about this makes sense.

Block Huawei? The Chinese giant has imported enough components from the United States to continue manufacturing at its normal pace for the rest of this year, and has more than enough time to develop the vast majority of these components in China if necessary. If it were necessary, which I doubt, that would be bad for US industry, because China would have been obliged to develop alternative components that would be its worst nightmare in the international markets. If the restrictions are maintained over time, the biggest problem for Trump would not come from China or Huawei, which is under no pressure from investors as it's an unlisted company, but instead from US industry. Apple's potential losses are enough to strike fear into investors, but many more companies face serious problems if the trade war heats up. Look no further than Google: forced by the pathetic Donald Trump and his clumsy and ill-calculated efforts to restrict its dealings with Huawei, the company has now been forced to show that its Android operating system is anything but open, that it rules it with an iron hand, and that, as well as prompting many misgivings among the public, it has potentially been exposed to further regulation.

Might China retaliate by restricting exports of the rare earth elements used in the manufacture of electronic components? Just as US threats are largely empty, so are China's. Rare earth elements, in fact, are not so rare, nor is China blessed with a particular abundance of them. The only reason China is the main supplier of rare earths for industry is its lax environmental laws and comparatively cheap labor, but rare earths can be found in many places, including California, and once they are extracted, usually alongside other elements, the rest of the process is reasonably straightforward. Again: faced with a hypothetical restriction on exports of rare earths from China, all that would happen is that other countries such as Australia, Brazil, Canada, India and the United States would take up the slack, with China the biggest loser.

Artificial constraints are always bad for everyone, and trade wars are, to a large extent, just that: clumsy attempts to generate artificial constraints. Donald Trump believes that geopolitics can be managed by bullying, making this trade war a grotesque, absurd and pointless episode, which of course is of no concern to smartphone owners (much less grounds to appeal to the authorities to intervene). These are meaningless, short-term actions, not lasting restrictions that will force changes in an industry, and nobody cares about them. None of this makes sense.
In practice, the best thing that can be done about the erratic decisions and tantrums of Donald Trump is to ignore them, do nothing and wait for them to pass.
https://medium.com/enrique-dans/trade-wars-and-rare-earths-3b86c5a68aaf
['Enrique Dans']
2019-05-24 16:30:36.906000+00:00
['USA', 'China', 'Politics', 'Trade War', 'Trump']
Softmax Classifier for dummies (like me)
Let's say we are classifying on the basis of face features, and let the x axis denote that. Now assume we have a linear classifier of the form y = f(x). We set a threshold of 10 such that if y > 10 we say that the input features x belong to a dog; otherwise it is a cat. I will not go into the details of how f() is implemented. For the time being, just assume it to be a black box. Although our linear classifier is working perfectly, we will encounter a problem pretty soon. What if I ask how confident you are about the prediction? The short answer is that it is not straightforward to answer that question, since we are not dealing with probabilities. We are dealing with raw continuous values.

Enter Probability

Now since it's clear that we have to introduce probability, we need to figure out a way to convert our raw values to probabilities. Given a distribution, what's the best way to convert it into a probability distribution? For dummies like me, the best way is to sum all the entries and divide each individual entry by the sum:

p_i = y_i / Σ_j y_j

If you are able to understand this, then you have understood the crux of the material. The probability of a class given by a softmax classifier is as follows:

P(y_i) = e^(y_i) / Σ_j e^(y_j)

Observe the similarity between the softmax formula and the simple probability formula above. Both are very similar and just differ in the exponential term. y_i represents the score of a class, like dog or cat.

Let's say we have a 64x64 image I which, when passed through our function f(I), gives two class outputs of [5, 19]. We know that since y > 10 the image must be of a dog. Now let's find out how confident our classifier is: our classifier is almost certain that this is a photo of a dog. This is how a softmax classifier works.
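To make the arithmetic concrete, here is a tiny sketch that runs the softmax over the raw scores [5, 19] from the example above; the class order [cat, dog] is an assumption for illustration:

// Softmax over the example's raw scores.
const scores: number[] = [5, 19]; // [cat, dog]
const exps = scores.map(Math.exp);
const sum = exps.reduce((a, b) => a + b, 0);
const probs = exps.map((e) => e / sum);

console.log(probs);
// ~ [0.00000083, 0.99999917]: the classifier is almost certain this is a dog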
https://medium.com/@gautam-s/softmax-classifier-for-dummies-like-me-d6a301dc3cd4
['Gautam Sharma']
2020-12-25 10:04:18.585000+00:00
['Machine Learning', 'Cpp', 'Beginners Guide', 'Softmax', 'Deep Learning']
We Start Things in Upstate NY and it’s Amazing
At Upstate Interactive, our mission is to position ourselves as a strategic partner alongside Upstate organizations to help them prosper and grow through digital means. In turn, we hope that our local community benefits as these organizations grow and need to hire on more staff. Insert technical flow diagram here It’s the circle of life! In today’s day and age, organizations of all types and sizes are beginning to realize that they will not survive without having a digital presence. As this notion sinks in, there will be a need for technical talent to fill the roles of strategic partners and service providers. We wish we could do it all, but we are only four people (so far). One of the ways in which we feed into the tech ecosystem is by developing and nurturing local tech talent. We want Syracuse, Albany, Rochester, Buffalo, and other Upstate NY based firms to look within the region for their technical needs rather than outsourcing to other states or countries. We must provide access to ample opportunities for our community to learn how to code and we need to inspire and engage the developers we currently have in the area. That way, they won’t cross over to the dark side and leave us for San Francisco or New York City. We, at Upstate Interactive, have been committed to this mission, even before we joined forces. We have started things in Upstate NY, and as the title suggests, it has been amazing. Let me break it down for you: Doug Crescenzi founded Hack Upstate, a weekend long hackathon that happens twice a year. This provides the opportunity for developers from all around Upstate NY to come together and work on different projects. We love to see the ideas that have come out of these events, and the connections that are made. Hack Upstate has been around for 4 years, and it keeps growing every year. It has become a staple in our tech community and we are always looking forward to it. Peter Smith convenes a weekly coding club where developers get together to work on coding challenges. From beginners to advanced developers, the group learns how to work on a team to solve a difficult challenge. Last year, Zoe Koulouris and I cofounded Women in Coding as a way to provide more opportunities for women and others in our community to learn how to code. This came out of a lack of resources in the area at the time we were looking. Driving out to Rochester one snowy December evening for a Girl Develop It coding class, we decided to take matters into our own hands and start offering classes in Syracuse. We held a few meetings to confirm there was interest, and we’ve been hosting classes and social events since March 2016. We have a core group of community members who attend our events and classes and we continue to work hard to grow this group. We start things in Upstate NY and it’s amazing. As I always hear, there is a need for developers in the area. Wouldn’t it be great if we could hire within our city rather than outsourcing these jobs to remote workers from around the country? We are committed to bridging this gap. As we ourselves grow as a company, we hope to employ developers from within Syracuse and the rest of Upstate NY. As someone who jumped careers, I can say that programming is an exciting and rewarding field. 
If you are interested in learning how to code or want to get involved in any of our ventures, please reach out to us: www.upstateinteractive.io www.womenincoding.com www.hackupstate.com Facebook Twitter If we all work together, we can hopefully make Upstate NY the booming tech region that San Francisco is (with affordable rent prices). Thanks for reading!
https://medium.com/upstate-interactive/we-start-things-in-upstate-ny-and-its-amazing-8e9c6914719
['Kseniya Lifanova']
2019-03-06 15:23:15.135000+00:00
['Tech', 'Syracuse', 'Women In Tech', 'Upstate New York', 'Software Development']
I will miss Donald Trump!
A FUN READ

Photo by Library of Congress on Unsplash

Disclaimer: This is just a late-night thought in my head and it is not intended to be taken seriously by the readers. This is more of an informal dialogue.

So, I am not an admirer of clowns in politics, but it's just that if they are outvoted, I sure as hell am gonna miss the free entertainment. Like, they have been able to dumb down some of the most powerful offices, and no matter how good the next ones will be, I'll miss the cute orange joker with small fingers who's the broker of some of the best spoils. Whatever it may be, let's not deny he's good at something. You are persona non grata, and I am almost embarrassed to admit this, but I had started liking the idea of a child sulking on Twitter late at night, trying to declare nuclear war against countries since he/she was denied his/her chocolate; like, who would have thought!? It still amazes me and I am happy the world has witnessed this; it needed to. I don't know if we will ever get a President like him again. I'm speaking, but does it matter? [Got that!]
https://medium.com/illumination/i-will-miss-donald-trump-b4afc4a05aed
['Vedanth Maheshwari']
2020-11-26 21:49:06.391000+00:00
['Joe Biden', 'President', 'Elections', 'United States', 'Donald Trump']
Bigger Better Butt Program Review — Make Your Butt Rounder And Hips Wider
Bigger Better Butt Program Review — Make Your Butt Rounder And Hips Wider When I first encountered Bigger Better Butt Program, I asked myself if it really could deliver results. Having small and flat buttocks has been an embarrassing problem. I always felt insecure and couldn’t buy tight fitting jeans and body-hugging dresses that I always loved. Every time my husband and I went out for dinner, I always felt jealous when he threw admiring glances at butt-gifted women. And on our intimate times together, I felt like he was missing something when his hands explore my flat buttocks. How can I get a bigger butt because this is one thing that really drives me crazy! Well, butt implants were always available but hey, I could not afford to spend thousands of dollars on that. Is the Bigger Better Butt PDF Download something to consider? When you are close to making a decision on trying out how to get bigger hips, I suggest you go through the well-detailed research carried out by our review team and from facts gathered from various physical looks communities online through feedback from users who have already tried the Bigger Better Butt Program. In case you want to have bigger and rounder buttocks and are almost ready to give this program a shot, the information contained in the review will help you decide if and when this program will work for you. Visit Official Website What is Bigger Better Butt Program? Bigger Better Butt Program is a 4-part butt building system that claims to lets you get a bigger butt naturally amazingly in as early as 45 days. It explains easy to follow ways on how to move and shift your body fats to go to your buttocks. Yes, you can The first chapter explains the secret that is needed to get a bigger butt. This secret is what other workouts are missing. Using this one simple secret helps you achieve maximum results. The second chapter teaches you the four exercises that target the areas needed to get a firm and nice butt. The workouts are simple — no unnecessary dancing and jumping around; there are just 4 simple movements. These simple exercises can be done from the comfort of your own home or even at the gym. There are descriptions, pictures, and detailed videos to help you grasp the movements. The third chapter goes into the details about the sixty-day program that will lead you to a Bigger Better Butt. They will tell you exactly what to do and when to do it with an easy-to-follow workout calendar. The last and final chapter helps you stick with the routine. This chapter contains loads of motivational tips to help you remain committed to the program. Pros Get Bigger Better Butt lets you say goodbye to your unending search on how to get bigger buttocks quickly. It shows you the most common mistakes you’re making to get your butt rounder. Get that round and firm butt without the hollow spots and flat dents. Finally, your man will stop checking out the butts of other ladies when you’re out on a date because he’ll be glued to yours. With Bigger Better Butt guide, you will learn why prolonged exercises or jogging sessions cause you to lose substantial muscle in your butt, thus, causing it to look flatter. Furthermore, how would you like to feel like a queen when your husband or partner gets crazy as he holds your butt? You can say goodbye to devastating insecurities you felt when you used to want to drown and become invisible when, on your most intimate times, he grabbed your butt and all that he got was flat and sagging muscles. 
With the bigger hips you deserve, you will drive your man wild! Also, you will discover how to work fat and muscle to unleash the hidden potentials in your butt region using the natural strategy which is not known to many facing such challenging situation and has a 60-day money back guarantee. Bigger Better Butt Program is a well easy-to-follow program that has both videos and PDF format with customer service and follows through with every question that needs to be answered. Bigger Better Butt PDF guide lets you enjoy as well the attention and impressed glances men will throw in your direction when they see you walk and sway your big round butts. Cons The step by step practices and principles inside the how to get bigger butt system are simple and so easy to follow as proven by the thousands of users who had tried the program; you will only get the desired result if you follow instruction to the letter. Also, the program comes with PDF reports which you will require to read with your computer system. However, you can print it out to read at your convenience. My Opinion Painful surgeries, bad smelling creams, pointless supplements, and crazy dance workouts are not going to lead you to get the butt that you desire. The Bigger Better Butt program uses efficient and effective ways for you to achieve your ideal butt. After following the program, you won’t have to worry about sagging bathing suit bottoms or strange fitting jeans ever again. You will have a butt that is going to look great in a cute little pair of booty shorts. The results are going to be noticeable, so be prepared to be feeling great about yourself as well as people complimenting your great new butt. What are you waiting for? You can get the perfect round, toned, and sexy butt that you have always dreamed of. All that you need to do is read through the Bigger Better Butt program, practice the simple movements, and then begin the challenge. Set a goal for yourself and monitor your progress; you will be seeing results during the sixty-day challenge. You can even repeat the challenge to increase your results. You have nothing to lose, only a great butt to gain. Grab Your Copy Here
https://medium.com/@wazimrev/bigger-better-butt-program-review-make-your-butt-rounder-and-hips-wider-c091348a3d3c
['Wazim Rev']
2020-12-17 00:24:48.506000+00:00
['Girls', 'Womens Health', 'Review', 'Sexy', 'Bigger Butt']
Attack Full Movie: Release Date, Cast, Trailer, plot | Attack Full Movie Download Free
Here in this post, we discuss the Attack full movie: the movie's new release date, the trailer, the main and recurring cast, and a quick review.

Basic Information On the Attack Movie:

Attack Hindi Movie Release Date: 28 January 2022
Attack Movie Director Name: Lakshya Raj Anand
Attack Movie Genre: Drama
Attack Full Movie IMDb Rating: 6+/10

So that's all about the basic information on the Attack full movie. Below we provide the Attack Hindi movie trailer as well as the Rotten Tomatoes Attack movie review.

Download Fantastic Beasts The Secrets Of Dumbledore: Release Date, Cast, Plot, Trailer | Fantastic Beasts 3 Secrets Of Dumbledore Download Free

Quick Information Related To the Attack Movie:

Attack is an upcoming 2022 Indian Hindi-language action thriller film directed and written by Lakshya Raj Anand, and produced by John Abraham, Jayantilal Gada and Ajay Kapoor. Based on a hostage crisis, the storyline is inspired by true events. Attack is scheduled for theatrical release on 28 January 2022.

Attack Movie Trailer:

After seeing the Attack Hindi movie trailer, we are all very excited for the full movie, but we all have to wait until the release date. After that, we will provide the link where you can watch the Attack movie online and download the full movie free.

Attack Hindi Movie Plot:

A murder with no evidence. Based on the present-day Kashmiri Pandit issue: the Hindus of the Kashmir Valley, a majority of whom were Kashmiri Pandits, were forced to flee the valley because of terrorism.

Download Don't Breathe 2: Release Date, Cast, Plot, Trailer | Don't Breathe 2 Download Free

Attack Hindi Movie Cast:

Here we provide the main and recurring Attack Hindi movie cast. John Abraham, Jacqueline Fernandez, Rakul Preet Singh, Ratna Pathak Shah and Prakash Raj are the main characters in the Attack Hindi movie cast.

So that's all about the Attack full movie. We all have to wait until the Attack movie's new release date; after the movie's release, we will provide the link where you can watch the Attack movie online as well as download the full movie free.

Download Ayushman Khurana's Chandigarh Kare Aashiqui: Release Date, Cast, Trailer, Plot | Chandigarh Kare Aashiqui Full Movie Download
https://medium.com/@tazzakhabar001/attack-full-movie-release-date-cast-trailer-plot-attack-full-movie-download-free-b46538b8b632
[]
2021-12-16 04:44:06.402000+00:00
['Movie Review', 'Bollywood', 'Movies', 'Attack']
How to pass data to another view when using performSegue in Swift 5
In this example, I will send dictionary data (String & Int) like this…

let sender: [String: Any?] = ["name": "My name", "id": 10]

How to do it?

[Step 1]: Connect your first view and the second view.

[Step 2]: Add your identifier. In this example my identifier is ShowSecondView.

[Step 3]: Create the Show button on the first view with this code:

@IBAction func showButton(_ sender: UIButton) {
    // Dictionary data that I want to send to the second view.
    let sender: [String: Any?] = ["name": "My name", "id": 10]
    // Go to the second view.
    self.performSegue(withIdentifier: "ShowSecondView", sender: sender)
}

[Step 4]: Prepare for the segue before passing data to the other view:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if (segue.identifier == "ShowSecondView") {
        let secondView = segue.destination as! SecondViewController
        let object = sender as! [String: Any?]
        secondView.name = object["name"] as? String
        secondView.id = object["id"] as? Int
    }
}

You can also pass the data as a whole dictionary, like…

secondView.myDictionary = sender as! [String: Any?]

But in this example, I separate the values and pass them individually.
https://medium.com/@odenza/how-to-pass-parameters-to-another-view-when-using-performsegue-in-swift-5-dd96832b412d
[]
2020-12-18 01:59:59.482000+00:00
['Swiftui', 'Swift Programming', 'iOS App Development', 'Performsegue', 'Pass Data']
Telling Ansible To-Do things According To Operating System
So hey guys, hope you all are doing well. Today's article is gonna be super useful and exciting. As we all know, in real time we have to use different operating systems due to various requirements, but as we have seen in the previous article, to download some package we have to give the package name to Ansible. Now think: when you have two OSes, one RedHat and one Ubuntu, the webserver package is named httpd on RedHat, but the same thing on Ubuntu is named apache2. We want Ansible to do things automatically and take that decision itself; that is what automation is, right? This can be achieved easily and we have many methods to do it; today we will discuss the manual method.

If you have not checked my previous article on Ansible, I recommend you to read that first. In it we talked about how to configure yum, which is necessary for any practical, and we also covered how to automate Docker using Ansible.

Once yum is set up we are ready, but if you don't want to configure yum you can use cloud services like AWS, GCP, Azure etc. Today I will be using an EC2 instance, which is a service offered by AWS. If you want to know how to use an AWS instance from Ansible, let me know; I can cover a whole article on it. If you don't use cloud services, you can install everything on your own system and set up the inventory in the same way as we discussed in the previous article.

Let's start with the practice. Below you can see the playbook; let's understand what it means line by line:

- hosts: diff
  vars_files:
    - "{{ansible_facts['distribution']}}-{{ansible_facts['distribution_major_version']}}.yml"
  tasks:
    - debug:
        msg: "{{a}}"
    - package:
        name: "{{package}}"
        state: present
    - template:
        dest: "{{dest}}"
        src: a.html
    - service:
        name: "{{service}}"
        state: started

First, we define the host group, and then we have a variable file from which the playbook takes its variables. You can see we have used {{ansible_facts['distribution']}}-{{ansible_facts['distribution_major_version']}}.yml. This means that whenever the playbook runs, it goes to the target node and gathers facts like the OS name, its version and other information, and then uses those facts to pick the vars file. For example, when I run my playbook against a RedHat host, the expression evaluates to RedHat-8.yml; the playbook finds a file with this name and uses the variables specified in that file.

Files in Folder

As you can see in this image, we have 3 files: one playbook with the name i.yml, while the others are OS-specific files that are used by the playbook after gathering facts (a sketch of these vars files follows below). All the code and files are in the GitHub link provided at the end of the article.

As soon as you run the playbook, it automatically decides which package name to use according to the operating system; that's what we need. 😎

Server running on Ubuntu OS

Server running on Amazon OS

Guys, here we come to the end of this blog. I hope you all like it and found it informative. If you have any query, feel free to reach me :)

Github link: https://github.com/guptaadi123/Ansible.git
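For reference, here is a sketch of what the two OS-specific vars files could contain. The variable names (a, package, dest, service) come from the playbook above, the filenames must match what the vars_files expression evaluates to on each target, and the exact values and messages are illustrative assumptions; an Amazon Linux target would need its own file as well (e.g. Amazon-2.yml):

# RedHat-8.yml (for a RedHat 8 target)
a: "RedHat host detected, installing httpd"
package: "httpd"
service: "httpd"
dest: "/var/www/html/a.html"

# Ubuntu-20.yml (for an Ubuntu 20 target; the version number is illustrative)
a: "Ubuntu host detected, installing apache2"
package: "apache2"
service: "apache2"
dest: "/var/www/html/a.html"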
https://medium.com/@gupta-aditya333/telling-ansible-to-do-things-according-to-operating-system-13971b87da2f
['Gupta Aditya']
2020-12-22 08:05:09.551000+00:00
['DevOps', 'Information Technology', 'Ansible', 'Web Server', 'Automation']
The best way to Integrate the Node JS Application with MongoDB
Introduction

In this article you are going to learn how to integrate a MongoDB database with an existing Node application. Node.js has the ability to work with both MySQL and MongoDB as databases. Basic knowledge of the Node JS platform is required before reading this article.

Node JS: MongoDB Setup

Install the Node JS MongoDB module by using NPM (Node Package Manager):

$ npm install -g mongodb

Options:
"-g": Install this package globally.
"mongodb": The Node JS MongoDB module.

Verify that MongoDB is successfully installed

Enter the following command at your command prompt:

$ mongo

This will drop you into an administrative shell.

Adding Mongoose and Database Information to the Project

Add the npm package mongoose to the project with the npm install command:

$ npm install mongoose

Mongoose gives you built-in methods, which you will use to create the connection to your database. Before creating your mongoose schemas and models you need to make a connection with the database, so we will add our database connection information to our application.

Create one file database_info.js inside the project and add the following constants to it. Requiring 'mongoose' gives you access to Mongoose's built-in methods.

const mongoose = require('mongoose');

const MONGO_DB_USERNAME = 'sandip';
const MONGO_DB_PASSWORD = '1234';
const MONGO_DB_HOSTNAME = '127.0.0.1';
const MONGO_DB_PORT = '27017';
const MONGO_DB = 'studentInfo';

Add the DB connection information to the App.js file so that the application can use it:

const express = require('express');
const app = express();
const router = express.Router();
const db = require('./database_info');
const path = __dirname + '/views/';

The next step, actually opening the connection, is sketched just below.

Read more: https://tudip.com/blog-post/the-best-way-to-integrate-the-node-js-application-with-mongodb/
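The excerpt above stops just before the connection itself is opened, so here is a minimal sketch of that last step. It assumes database_info.js exports the constants defined earlier (the module.exports statement is not shown in the excerpt) and uses Mongoose's standard promise-based connect call:

// App.js (continued): a sketch of actually opening the connection.
const mongoose = require('mongoose');
const info = require('./database_info'); // assumes the constants are exported there

const url = 'mongodb://' + info.MONGO_DB_USERNAME + ':' + info.MONGO_DB_PASSWORD +
    '@' + info.MONGO_DB_HOSTNAME + ':' + info.MONGO_DB_PORT + '/' + info.MONGO_DB +
    '?authSource=admin';

mongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true })
    .then(() => console.log('MongoDB is connected'))
    .catch((err) => console.log(err));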
https://medium.com/@tudiptechnologies/the-best-way-to-integrate-the-node-js-application-with-mongodb-10ca4b178aaf
['Tudip Technologies']
2021-07-02 05:23:32.500000+00:00
['Testing', 'Mobile Application', 'It Services', 'Software Development', 'Social']
ARCUS Common Cache Module Use with Basic Pattern Caching in Java Environment
If you are a developer trying to apply a cache to an application for the first time, you might not get it right at once. When it comes to applying a cache to an application, there are various caching patterns. Let's talk about the most commonly used one, the Demand-fill caching pattern, and the problems you might encounter while applying it to your application. To solve these problems we'll take a glimpse into Spring AOP, and lastly, I will introduce the features available in the ARCUS common module, a Java library built on Spring AOP.

Demand-Fill Cache

The Demand-fill cache pattern is a method in which, when an application requests a data inquiry, the data is looked up in a cache store first instead of the main database. If the data exists in the cache store, it's retrieved from the cache; otherwise, the data is retrieved from the main database, stored into the cache, and then returned to the client.

DB + ARCUS Cache Cluster

This method is mainly used for query-type requests, and it's performed when a request is received by the server.

function fetch(key, ttl) {
    data = cache.read(key);
    if (!data) {
        data = database.read(key);
        cache.write(key, data, ttl);
    }
    return data;
}

To explain the Demand-fill pattern, let's take a look at the following example of a Spring Framework application in Java that queries product information.

@Service
class ProductService {
    private ProductDatabase database;
    ...
    // Product inquiry.
    public Product get(long id) {
        /* Retrieves the product data from the database with an ID. */
        return database.get(id);
    }
}

There is a get API in the ProductService class for a product inquiry; if you pass the ID of the product that you want to look up to the get API of the ProductService class, you will retrieve the product information data through the database. In the following sample, referring to the basic usage of the API provided by the ARCUS client, we cache the product information with the Demand-fill pattern.

@Service
class ProductService {
    private ProductDatabase database;
    private ArcusClient arcusClient;
    ...
    // Product inquiry.
    public Product get(long id) {
        // Generate a cache key for a specific (ID) product.
        String cacheKey = "product:" + id;
        /* Asynchronous request for the Get operation that queries items in the ARCUS cache. */
        Future<Object> getFuture = arcusClient.asyncGet(cacheKey);
        try {
            // Wait 700ms for the request result from the ARCUS cache.
            Product product = (Product) getFuture.get(700, TimeUnit.MILLISECONDS);
            if (product != null) {
                // In case of a cache hit, return the cached data.
                return product;
            }
        } catch (Exception e) {
            /* Cancel the Get operation in case of request timeout, cache server down, or error occurrence. */
            getFuture.cancel(true);
        }
        /* In case of a cache miss, query the data from the database. */
        Product product = database.get(id);
        /* Store the product retrieved from the database in the ARCUS cache with an automatic expiration time of 60s. */
        arcusClient.set(cacheKey, 60, product);
        return product;
    }
}

After applying the Demand-fill pattern caching to the one-line product inquiry code, a large amount of code has been added. Of course, the real data inquiry code of an application is much more complex than this sample code. Hence, with the addition of cache logic, the complexity of the code increases, alongside the cost of testing the changed code, and the amount of duplicate code grows as the number of cache targets increases.
Eventually, while trying to apply a cache for query performance, the application will turn into very complex and difficult code, leading to an increased cost of development that makes it hard to maintain and to focus on the core business logic. Therefore, we need to think about how we can apply the Demand-fill pattern caching logic to the target APIs without changing the existing code.

Spring AOP-based Caching

Spring AOP (Aspect Oriented Programming) separates common logic out of the application into different code, allowing the application to focus on its core business logic. The @Transactional Annotation is a typical AOP concept used in many Spring applications for transactional processing. Oftentimes, if you have logic that needs to perform many changes to the database atomically, you put the @Transactional Annotation on top of the corresponding API. For example, let's consider the code of a service that transfers money from one account to another.

@Transactional
public void transferMoney(long from, long to, long amount) {
    accountDatabase.decreaseAmount(from, amount);
    accountDatabase.increaseAmount(to, amount);
}

If an error occurs when invoking the transferMoney API after the amount of the from account has been deducted (decreaseAmount) from the database but the amount of the to account hasn't been increased (increaseAmount), that amount will be rolled back to the from account, to its previous state. However, there is no visible code that rolls the money back to the account when the database fails. The fact is, some parts of the transferMoney code are hidden by @Transactional. It's not exactly the same as the actual code, but the hidden code would look something like this:

public void transferMoney(long from, long to, long amount) {
    try {
        // Beginning of the transaction.
        transactionManager.begin();
        accountDatabase.decreaseAmount(from, amount);
        accountDatabase.increaseAmount(to, amount);
        /* In case of successful remittance, commit the changes to the database. */
        transactionManager.commit();
    } catch (Exception e) {
        /* In case of failure, roll back to the previous state. */
        transactionManager.rollback();
    }
}

The part of the code hidden for the transaction is isolated from other modules, and the isolated code is inserted into the API (method) where @Transactional is attached. In Spring AOP, there are two typical ways to insert the code into the target API. One way is to insert the code into the target class's byte code at compile time, and the other way is to create a proxy of the target class at runtime. For more details on this, please refer to Spring AOP's official documentation.

The cache logic of the Demand-fill pattern can also be isolated into another module, just like @Transactional. The question is, where do you define the code that needs to be inserted into the cache target? Spring AOP provides the @Aspect Annotation to modularize the common logic of services in class form. In a class annotated with @Aspect, you can write the code that needs to be executed before and after the target API runs, and when an exception occurs.

@Component
@Aspect
class ArcusCacheAspect {
    /*
      @Pointcut: Target setting to apply common logic (Annotation, Package, Class, Method, Parameter Name(s)).
      Here the common logic applies to Service class methods that are already annotated with @ArcusCache and that have a return type.
    */
    @Pointcut("@annotation(ArcusCache) && execution(public !void *Service(..))")
    public void pointcut() {}

    /*
      @Around: Annotation to perform code before and after the call of the target API.
      In the code below, joinPoint carries the signature (Class, Method, Parameters, Annotations) of the target API.
    */
    @Around("pointcut()")
    public Object around(final ProceedingJoinPoint joinPoint) throws Throwable {
        // Before the target API call.
        System.out.println("before");
        // Calling the target API.
        Object object = joinPoint.proceed();
        // After the target API call.
        System.out.println("after");
        // Returning the target API's data.
        return object;
    }
}

Let's try to separate the cache logic of the product inquiry code explained earlier with the Demand-fill caching method, using @Aspect.

@Component
@Aspect
class ArcusCacheAspect {
    /*
      @Pointcut: Target setting to apply common logic.
      Here the common logic applies to methods annotated with @ArcusCache.
    */
    @Pointcut("@annotation(ArcusCache)")
    public void pointcut() {}

    /*
      @Around: performs code before and after the call of the target API.
    */
    @Around("pointcut()")
    public Object around(final ProceedingJoinPoint joinPoint) throws Throwable {
        // Generate the cache key from the target API's parameters.
        String cacheKey = createArcusKeyFromJoinPoint(joinPoint);
        /* Obtain the expire time from the target API's @ArcusCache Annotation. */
        int expireTime = getExpireTimeFromJoinPoint(joinPoint);
        /* Asynchronous request for the Get operation that queries the ARCUS cache. */
        Future<Object> getFuture = client.asyncGet(cacheKey);
        try {
            // Wait 700ms for the request result from the ARCUS cache.
            Object object = getFuture.get(700, TimeUnit.MILLISECONDS);
            if (object != null) {
                /* In case of a cache hit, return the cached data. The actual code of the target API (joinPoint.proceed()) is not performed. */
                return object;
            }
        } catch (Exception e) {
            /* Cancel the Get operation in case of request timeout, cache server down, or error occurrence. */
            getFuture.cancel(true);
        }
        // In case of a cache miss, perform the target API code.
        Object object = joinPoint.proceed();
        /* Store the object retrieved from the database into the ARCUS cache. */
        client.set(cacheKey, expireTime, object);
        // Return the data.
        return object;
    }
}

Now, if you add the @ArcusCache Annotation to a target API that you want to cache, the around method of the ArcusCacheAspect class will be invoked when the target API is called, and caching will be done in the Demand-fill pattern.

ARCUS Common Cache Module with Basic Pattern to Help Apply Caching in Java Environment

The ARCUS common cache module provides a modularized Aspect class that performs the cache logic of the Demand-fill pattern. Therefore, without any code modification, you can simply apply the cache to the target APIs. The ARCUS common module has the following advantages over directly using the API provided by the existing ARCUS Java client:

Allows a developer to focus on the core business logic by separating cache logic into a different module.

Reduces development cost without any changes to existing code.

No need to be familiar with the usage of the cache client API.

There are two different Demand-fill caching methods provided by the ARCUS common cache module.
One method is to assign an Annotation to the cache target API, and the other is to specify the cache target APIs with their caching attributes in a property file.

Annotation-based Caching

In the get API of the previously explained product inquiry, it's possible to apply the cache simply by adding @ArcusCache. During the product query, if it's a cache hit, the data retrieved from the cache store is returned without performing the internal code of the get API. If the cache item cannot be retrieved due to a cache server failure, there is no impact on service behavior, because the implementation then simply performs the internal code of the get API.

@Service
class ProductService {
    private ProductDatabase database;
    ...
    // Product inquiry through the id.
    @ArcusCache(prefix = "PRODUCT", expireTime = 60, operationTimeout = 700)
    public Product get(@ArcusCacheKey long id) {
        /* In the event of a cache hit, the code below will not be performed. */
        return database.get(id);
    }

    // Product inquiry through the Object.
    @ArcusCache // use all fields of product as cache key parameters.
    public Product get(@ArcusCacheKeyParameter("*") Product product) {
        /* In the event of a cache hit, the code below will not be performed. */
        return database.get(product.getId());
    }
}

You may have noticed a similarity with the @Cacheable Annotation of Spring Cache, if you have used it. ARCUS Spring also provides a cache implementation in order to support Spring Cache. Unlike @ArcusCache, to set the cache properties (prefix, expire time, operation timeout) you must create a Spring Cache instance that holds each cache property and specify it in the Spring Cache Annotation's cacheNames property.

@Service
class ProductService {
    private ProductDatabase database;
    ...
    // Product inquiry
    @Cacheable(cacheNames = "product_60_ttl_cache", key = "#id") // prefix=PRODUCT, expire time=60
    public Product get(long id) {
        return database.get(id);
    }
}

@Configuration
class CacheConfiguration extends CachingConfigurerSupport {
    ...
    @Bean
    public Map<String, ArcusCacheConfiguration> initialCacheConfig() {
        Map<String, ArcusCacheConfiguration> initialCacheConfig = new HashMap<>();
        initialCacheConfig.put("product_60_ttl_cache", product60TTLCache());
        return initialCacheConfig;
    }

    @Bean
    public ArcusCacheConfiguration product60TTLCache() {
        ArcusCacheConfiguration cacheConfig = new ArcusCacheConfiguration();
        cacheConfig.setPrefix("PRODUCT");
        cacheConfig.setExpireSeconds(60);
        cacheConfig.setTimeoutMilliSeconds(700);
        return cacheConfig;
    }
}

Because Spring Cache is designed with a focus on cache abstraction, there's the inconvenience of setting up vendor-specific attributes (e.g. prefix in ARCUS). In addition, a developer must set the key parameter of the cache, which affects the outcome. The @ArcusCache Annotation provided by the ARCUS common cache module has the flexibility to specify the cache properties for each target API, and it can create the cache key by automatically setting the cache's key parameter, so you don't have to worry about it. Nevertheless, if you want a cache implementation without extra code changes, using ARCUS Spring can be a good option for you.

Property File-based Caching

It's difficult to check the cached API items at a glance if Annotation-based caching is applied. Also, because adding the Annotation means modifying code, the project needs to be rebuilt and redeployed. To that end, the ARCUS common cache module provides a way to apply the cache by specifying the cache target APIs in a separate file.
All you have to do is create the arcusCacheItems.json file in your project and write out the cache target APIs (package + class + method name) and their cache properties in JSON format.

/* arcusCacheItems.json */
[
  {
    "target": "com.service.ProductService.get",
    "prefix": "PRODUCT",
    "keyParams": ["id"],
    "expireTime": 60
  },
  {
    "target": "com.service.UserService.get",
    "prefix": "USER",
    "keyParams": ["user.id"],
    "expireTime": 120
  }
]

The ARCUS common cache module provides an API for managing cache target information such as arcusCacheItems.json. Going one step further, instead of using the arcusCacheItems.json file, you can import a list of cache targets from external storage and then add or remove targets with the cache target management API provided by the ARCUS common cache module. In that case, you gain the ability to change the cache targets at runtime, so you don't have to redeploy the application.

class CacheItemManager {
  private ArcusCacheItemManager arcusCacheItemManager;
  private Database database;
  ...
  public void updateArcusCacheItems() {
    arcusCacheItemManager.update(getArcusCacheItems());
  }

  public List<ArcusCacheItem> getArcusCacheItems() {
    return database.getArcusCacheItems();
  }
}

Conclusion

So far, we've looked into how to apply a commonly used cache pattern to an application and the features available in the ARCUS common cache module. If you already have experience with cache applications this might sound cliché, but in reality, when we were building ARCUS into clients' systems, there were still many people who didn't know much about how to use the ARCUS Client. For those people, we created the ARCUS common cache module to make it easier to use ARCUS in an application. In the future, we are planning to add and optimize more features so that an ARCUS cache can be applied even more easily and quickly.
https://medium.com/jam2in/arcus-common-cache-module-with-basic-pattern-caching-in-java-environment-db88c4bf7585
['Nushaba Gadimli']
2021-03-30 02:39:17.203000+00:00
['Spring', 'Jam2in', 'Annotations', 'Arcus Cache Cluster', 'Java']
Go West, where the skies are blue?
Trying to look for a suitable HDB resale flat in Merlion City can be quite the nightmare. Although we had established parameters to narrow our search, we still ended up exploring all over the place — Ang Mo Kio, Bishan, Toa Payoh, to name a few. That was when I decided to set an additional filter instead. Rather than trying to find an elusive place where commuting to work is convenient for both of us, we should instead find a place that is close to one of our workplaces. Why torment both of us when only one person needs to go through the commuting pain? And that meant going West. I generally like the Western Territories – because that was where I grew up and spent a big part of my life. So we zoomed in on the Golden Western Belt – starting from Tiong Bahru, followed by Redhill, Queenstown, Commonwealth, Holland, Ghim Moh, Dover, and ending at Clementi. Tiong Bahru: We could never quite understand why the HDB flats in Tiong Bahru are so expensive except that part of it is a really nice gentrified neighbourhood which offers many F&B options and attracts hipster-wannabes. It is a place we don’t mind visiting from time to time when we crave the tau huay and chwee kueh at Tiong Bahru Food Centre but we were not sure we will pay a premium to live near there. Redhill: To us, there was no particular draw except that it is fairly near to the city centre. Nah. Queenstown: Merlion City’s first HDB satellite town is going through a rejuvenation and there will be more amenities when the new flats are completed as part of the Selective En Bloc Redevelopment Scheme (SERS) for the old flats in Tanglin Halt. Two developments of interest to us were SkyVille @ Dawson and SkyTerrace @ Dawson. Commonwealth: We spied seven candidate skyscraper blocks of flats in Tanglin Halt that are super near to Commonwealth MRT station but with the upcoming SERS affecting essentially the rest of Tanglin Halt, the area will see significant dust and noise pollution from demolition and construction activities. Holland: There are four relatively new blocks of flats in Holland Drive but only two of them have 5–room flats. The total supply is 195 flats but after considering stack and level, and units already sold, good units for sale are fairly rare and the asking prices are typically above S$900K. Ghim Moh: Good 5-room flats in Ghim Moh Link are hard to come by and they command prices above S$900K as well. Dover: Nope. Clementi: Only one HDB development in the whole of Clementi caught our eye, and that’s Casa Clementi. Excluding those estates we eliminated from our list, buying a good 5-room HDB resale unit in Queenstown, Holland, Ghim Moh or Clementi would easily cost us a princely sum. We (read: The Wife) decided to focus our (read: my) analytical firepower on Casa Clementi and see if we could find a unit that suited us.
https://medium.com/@themoneypit/go-west-where-the-skies-are-blue-b5fa558514d0
['Money Pit Digger']
2020-12-16 15:10:01.447000+00:00
['Property Search', 'Singapore', 'Hdb Resale Flat']
How to Feed Your Mind in Winter? Keep It Simple
How to Feed Your Mind in Winter? Keep It Simple Three simple ideas if you're feeling a little flat in lockdown Photo by Annie Spratt on Unsplash If you are feeling low in winter, you are not alone. The days are grey and short. The struggle to get motivated is real, with a probable third wave of the virus on the horizon. Even in the best of relationships, lockdown is not easy. We are an easy-going couple who are used to working together. Even so, it’s a small flat. We keep bumping into each other during the day. The joke is wearing thin after nine months. He works in our bedroom. I work in the eating area. The place seems even smaller. The cafes closed. Nobody leaps to live in London for a quiet life. Yet life has narrowed down to stay at home, watch the news, a trip to the supermarket, and a walk.
https://medium.com/2-minute-madness/how-to-feed-your-mind-in-winter-keep-it-simple-5f4aa57145c6
['J.R. Flaherty']
2020-12-10 18:07:38.477000+00:00
['Self', 'Home', 'Happiness', 'Motivation', 'Minimalism']
Meet the New GetBlock Dashboard: Updated Statistics, Easy-to-Use API Keys
At GetBlock, we have been busy at work building out a blockchain developer platform that can be used for immediate API connection of your app to more than 40 nodes of high-ranked cryptocurrencies. Our team is focused on delivering additional services to customers that bring flexibility to their working process and help their business projects move forward. Today, we’re ready to introduce you to the newly released GetBlock Dashboard. It represents an updated version of the personal area that gives developers wider opportunities with enhanced tools and analytics services. Account and its features GetBlock Dashboard is the user’s personal page that allows them to work with API keys, track request volume, monitor statistics, and use the stored documentation and feature information. NOTE: Any user who wants to start working with our nodes has to register an account and sign in to GetBlock Dashboard. In the new version, our developers have added new interface elements, tabs and counters. Let’s start with the Balance, or the so-called requests counter, where users can see the current number of available requests, including the free ones. Every registered GetBlock user gets a daily bonus: 40,000 free requests (but unused requests can’t be transferred to the next day). In the new interface, we also added a timer that shows you when the next free requests will be credited. Developers also added tips to the interface that will help new users navigate quickly through the account. A simple way to work with API keys Each account owner can create an unlimited number of API keys. Users who have several projects can get an individual key for each one. Another useful feature for developers was added: Requests Limit. Now you can set a threshold (for example, 200 thousand per hour) so as not to spend more requests than planned. There is no need to manually credit each API key with requests anymore. The new GetBlock Dashboard also has a feature for deleting API keys. To sum up, users can now create as many keys as they need, manage them by changing settings, limits and names (rename), and also delete unused ones. Improved statistics Every user of GetBlock Dashboard can see statistics that are displayed as graphs. In the new version of the account, statistics are more detailed and updated more often. You can see the following data: On the whole, this section gives each user valuable insights by showing where requests go and what they are spent on. This option helps to optimize request usage and plan for demand even if the number of nodes changes. More information for developers The Documentation tab, which can be found in the side menu of the account, was updated as well. It contains useful information for developers and other users: GetBlock Docs overview Authentication with API Key Explanation of API methods Our developers also added a Postman Collection. It contains all the available node endpoints provided by GetBlock, and every user can get quick access to the required one. In the Docs tab, you can also find node endpoints for all blockchains provided by GetBlock. GetBlock also presents new pricing packages for both current and future clients. The team has changed the range of prices to meet the needs and requirements of our customers and open new opportunities for the development of the project. We added Unlimited rates for clients with ambitious plans: for $500 per month, all nodes are available to you with a limit of 10 requests per second (for each node). The other pricing packages have been changed as well.
Full details can be found in our Pricing. GetBlock also started to provide dedicated nodes. Shared nodes can be used by a group of customers, while a dedicated node belongs only to one user. Dedicated nodes do not have limits — neither for the number of requests nor for RPS. The customer can choose the required API method and request custom features. Contact us to learn more about conditions and receive a personalized payment plan. To sum up The new account interface is user-friendly, intuitive and feature-rich. Our community is growing and becoming more diversified, so the team changes the service following the needs and wishes of customers. More cool tools and features will be delivered soon, stay tuned! We hope you enjoy the new GetBlock Dashboard. Feel free to leave us some feedback, suggestions, feature requests, complaints, etc.
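As an illustration, a request to a node endpoint generally looks like the sketch below. This is a hypothetical example: the endpoint URL, header name and RPC method are placeholders rather than confirmed values, so check the Docs tab of the Dashboard for the exact format.

import requests

url = "https://btc.getblock.io/mainnet/"   # placeholder node endpoint
headers = {
    "x-api-key": "YOUR-API-KEY",           # assumed auth header; see the Docs tab
    "Content-Type": "application/json",
}
payload = {"jsonrpc": "2.0", "method": "getblockcount", "params": [], "id": 1}

response = requests.post(url, json=payload, headers=headers)
print(response.json())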
https://medium.com/@getblock/meet-the-new-getblock-dashboard-updated-statistics-easy-to-use-api-keys-ac2bb7f34a9a
[]
2021-04-09 07:43:29.503000+00:00
['API', 'Cryptocurrency', 'Blockchain', 'Blockchain Development', 'Node']
Perks of Being a Vampire
Photo by Clément Falize on Unsplash Our world isn’t all Buffies and Van Helsings, you know… that brand of warrior is much rarer than the movies would have you believe. At least, among humans. It helps, to be a vampire. Leave aside the flashy combat footage so familiar from your Blade and Underworld films. I’m aghast at the lack of creativity in such cinematography. The incredibly strong vampire kills many men with his incredible strength. The outlandishly fast vampire steals many secrets from human scientists with his outlandish speed. The satanically seductive vampire seduces many women with his secrets of seduction. I yawn at such scenes. Is that the limit of human imagination? My hands are as solid as steel gauntlets and move faster than a hummingbird’s breath. Of course, I could tear out your entrails with my bare fingers or punch through a solid oaken door. But for what? I’m not saying I’m not evil. Far from it. By most human definitions I’m wickeder than a demon. I once bathed in the blood of a bride and groom while their wedding party waited in the next room. The newspapers came up with a dandy tale to explain that one. The only thing is, I’m much less interested in humans than you all seem to anticipate. I mean, consider my perspective… I’ve had a fixed diet with a single staple for the past two thousand some odd years. If you had to eat nothing but raw shrimp for two millennia, wouldn’t it start to sicken you? Even if the shrimp learned to tap dance and sing and construct cybernetic systems… your kind has become a tad bland for my palate. But I can see I’m boring you. My excuses. Bon appétit to me, and a good death to you. Try to taste better than your spouse.
https://medium.com/collective-unconscious/perks-of-being-a-vampire-85287e0e5b34
['Alex Tucker']
2019-12-02 15:15:31.214000+00:00
['Horror', 'Short Story', 'Fiction', 'Short Fiction', 'Vampires']
New Year’s Resolutions?
New Year’s Resolutions? Just forget it and do whatever you like Does it matter if we have any new year’s resolutions? These are mostly unreal and fake. We do our duties and enjoy life so neatly throughout the year. In fact, every day is new. Every week is new. The years are new only in the numbers. Play with the numbers, and enjoy your life. Sing, dance, work, cook, eat, write, etc., every day If you really want to do something worth it, love what you do. Don’t do only what you love. We need to do odd jobs as well — what we don’t like to do. Photo by freestocks.org from Pexels Join me on Twitter and LinkedIn. Cheers, Debesh Choudhury.
https://medium.com/illumination/new-year-resolution-92632b1a3966
['Debesh Choudhury']
2020-12-21 23:35:15.189000+00:00
['Life', 'Numberonetip', 'Work', 'New Year Resolution', 'Entertainment']
Why Jill Biden’s Doctoral Degree Angers People
Essayist Joseph Epstein recently wrote a letter-style op-ed for the Wall Street Journal in which he suggests Dr. Jill Biden should drop her title as she prepares to become First Lady of the United States. Epstein spends the letter giddily elaborating on the reasons he feels Biden’s title is an unnecessary indulgence. Despite the fact that anyone who holds such an advanced degree earned their title through years of labor, Epstein — who holds only an honorary doctorate and admits he barely scraped through his bachelor’s program — suggests Biden’s degree is insufficient. Fraudulent. Comedic, even. Because a woman who holds an advanced degree is, in his eyes, a joke. Only a man who delivers a baby can hold the title of doctor — by implication, not a woman who has merely birthed one. Misogynistic overtones and condescending language aside, Epstein spends considerable space in the letter discussing his own credentials as a professor at Northwestern University. Except, he was actually just a lecturer and hasn’t worked there since 2003, according to Northwestern’s public statement. The fact he cites the prestigious four-year university so often seems purposefully designed to contrast with Biden’s experience working for two-year schools. Not only does he frown upon her dissertation on community college students’ needs, but his constant reminders of his association with Northwestern bring Biden’s own popular role as a community college professor to mind, perhaps in an effort to subvert her. Certainly in his tedious paragraphs despairing of how easy it is to get an advanced degree these days (despite the fact he doesn’t really have one), he aims to show she’s somehow inferior to him. It’s easier for him to accept the title of First Lady, a reminder that she’s still second to her husband the President, than address a woman with a degree in education as doctor. And that’s the thing. The fact Biden studied education — which has so long been women’s role in society, a field beneath men — but attained a degree that comes with a title of respect is threatening. It’s a threat because it makes women in educational fields legitimate instead of the way too many people perceive them: glorified childcare. It reminds me of the time I was seventeen and my dentist, an old man with outdated values, asked me what I planned to do after college. I told him I wanted to be a professor, maybe even president of a university. In response he asked, in the sour tone men use when a woman is too ambitious for their taste, aren’t you more suited for working with children? From Epstein’s perspective, he likely thinks little of Biden’s work uplifting the status and resources of community colleges in the United States. You know, those supposedly terrible two-year schools that men like him would never stoop to associate with. Those junior colleges that let anyone get a degree. Anyone. Do you see what I’m getting at? There’s more than misogyny going on here. Biden, by asking people to respect the title she earned, is subverting an entire academic system that places accessible, inclusive education at the bottom. Men like Epstein want to keep enjoying the exclusivity of their ivory towers, their boys’ clubs, as long as possible. The moment that women and other marginalized groups break through, attain advanced degrees, and arrive to smash the ivory tower, these kinds of men move the bar. They insult the system. It’s too easy now, Epstein argues: “The Ph.D.
may once have held prestige, but that has been diminished by the erosion of seriousness and the relaxation of standards in university education generally, at any rate outside the sciences. Getting a doctorate was then an arduous proceeding,” he writes. “Dr. Jill, I note you acquired your Ed.D. as recently as 15 years ago at age 55, or long after the terror had departed.” Because that’s the thing. As soon as women and people of color break through the barriers people like Epstein hope will keep them out, such men will say the system is broken. It’s too easy now, they argue. The terror has departed, Epstein says, but what he means is you didn’t earn the right. Of course, the truly ironic thing — what made me laugh out loud at the pitiful drudgery produced by yet another insecure, small-minded man — is that he’s the one with the honorary degree he so despises. In his own words, “Such degrees were once given exclusively to scholars, statesmen, artists and scientists,” but now, men like Epstein can get them. He doesn’t include himself in the list of people with honorary degrees he so disdains, but readers can still see a comparison. Biden, however, holds a degree she earned. She studied, took the classes, wrote and published the dissertation. She did the work — she didn’t just write snarky essays about other people she resents. And that’s all Epstein is: an ornery old essayist with nothing better to do than disparage the accomplishments of others. Addressing someone by their appropriate title shouldn’t even require you to respect them. It’s just being polite. But Biden rises far above the belittling remarks. She’s too busy using her degree to fight for community college students — students like me. Community colleges serve underrepresented communities at an unprecedented rate compared to bigger, more prestigious four-year universities. They make higher education accessible to low-income students, who demonstrably continue to perform well after transferring to university. I, for one, am proud that my country will soon have a first lady who is willing to fight to make education more equitable. I am sure that’s a scary prospect for someone like Epstein who is so eager to protect the ivory tower of academia which welcomes mediocre men like him. Of course, he must not have realized that whether or not Biden continues to use her title throughout the rest of her successful career, as I’m sure she will, the dominoes have already toppled. The ivory tower is coming down next.
https://readmorescience.medium.com/why-jill-bidens-doctoral-degree-angers-people-93fe90e1041c
['Sarah Olson Michel']
2020-12-17 14:47:27.371000+00:00
['Opinion', 'Politics', 'Education', 'Misogyny', 'Feminism']
Using Ansible Play Book Docker Configuration →>>
What is Ansible? Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. What is Docker? Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. What is an Ansible playbook? An Ansible playbook is an organized unit of scripts that defines work for a server configuration managed by the automation tool Ansible. Ansible automates the configuration of multiple servers through the use of such playbooks. So here we go. First we have to create a yum repo on the target node, because the docker-ce software is not available in the Red Hat DVD repository. So we are going to create the yum repo, and after that we are going to install the Docker software. With that, we have successfully created the Docker repo and installed the Docker software that was not present on the target node. But now we have to start the Docker service; after running the play, we can see the service started. Next, we don't have any images, so we have to pull a Docker image from Docker Hub; after the play runs, we can see the Docker image has arrived. Using this Ansible code we can download the Docker software, start the Docker service, and also pull images from Docker Hub. Finally, since we don't have any running container, we can run a Docker container using the Ansible playbook as well; a sketch of the full playbook follows below.
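The playbook itself did not survive in this text version, so the following is a minimal sketch of what such a playbook could look like. The repo URL, host group and image name are illustrative, and the docker_image and docker_container tasks assume the community.docker collection plus the Docker SDK for Python on the managed node.

- hosts: docker_hosts
  become: yes
  tasks:
    - name: Create the Docker CE yum repository on the target node
      yum_repository:
        name: docker-ce
        description: Docker CE repo
        baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
        gpgcheck: no

    - name: Install the Docker software
      package:
        name: docker-ce
        state: present

    - name: Start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes

    - name: Pull an image from Docker Hub
      docker_image:
        name: httpd
        source: pull

    - name: Run a container from the pulled image
      docker_container:
        name: webserver
        image: httpd
        state: started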
https://medium.com/@kuldeepkumawat195/using-ansible-play-book-docker-configuration-a87c0e216a82
[]
2020-12-11 10:43:58.479000+00:00
['Ansible', 'Ansible Playbook', 'Docker']
Spares and Accessories — D & H (Partnership) Ltd
D & H (Partnership) Ltd can provide spares and accessories — including medical lights for all applications, from large surgical lights for general surgery in an operating theatre to smaller examination lights for emergency rooms and clinical environments. We are a specialist company that primarily supplies, installs and maintains surgical lighting, including theatre operating luminaires and examination luminaires. Our services also cover associated equipment including emergency standby battery units, theatre control panels, replacement bulbs, as well as medical and surgical suction equipment. We also offer a complete operating light refurbishment service, which can be particularly helpful in the current financial climate where both the NHS and the private healthcare sector have restrained budgets. Please browse our product pages to find the surgical light for your particular needs. Each page has a full product brochure available to download. Should you require any further information please do not hesitate to contact a member of our team. Please note that we currently only supply to the UK.
https://medium.com/@peterbastian0208/spares-and-accessories-d-h-partnership-ltd-bd443bc9796a
['D H Partnership']
2020-06-15 12:55:29.010000+00:00
['Health', 'Healthcare', 'Spare', 'Medical', 'Accessories']
TheQuartering Is a Lying MF
Gamergate may look a lot different now than it did back in 2014, but its patrons still behave very much the same — the elaborate grift designed to manipulate gamers into thinking there’s an all-out cultural war against them has made the lives of game journalists and industry personnel a living hell. The group that has historically reaped the industry’s spoils — being of the cisgender heterosexual white male variety — has its dominance threatened, and in a bid to preserve it, it’s lashing back hard. Naturally, that gave rise to TheQuartering — a new figure to the Gamergate movement, but by no means revolutionary in terms of tactics. Since vlogging about Magic: The Gathering didn’t prove itself to be exactly that lucrative, Jeremy Hambly turned to parroting Gamergate talking points to his audience, lacing them with as sensationalist and provocative a presentation as possible. It’s not dissimilar from the litany of bad-faith game criticism that already exists on YouTube, but what sets Hambly apart from everyone else is that he has a frightening capacity to shape the discourse in parameters that make it almost impossible to defy his framing. The deck’s already stacked against a potential target of his — what’s left is for his fanbase to take the not-so-subtle hint and go wreak havoc of their own. There are obvious problems with TheQuartering that extend far beyond directing angry mobs of gamers at unsuspecting targets, but most bizarre of them all is the notion that he’s the true custodian of gamers’ demands for a better product, and a better industry — to put it lightly, Jeremy Hambly is a lying MF. So much of what Gamergate hedges its bets on when railing against games media is the appeal to authenticity — for the movement, journalists have become fodder for corporate greed and are nothing but peddlers for publishers’ own agendas, with nothing of critical importance to add to the conversation. The problem is that this presumes content creators along the lines of Gamergate are any more independent in expressing their own views without pressure, which, with the thump of an audience who made it their specialty to bend the most adamant to their will, isn’t exactly that sound a conclusion to make. If unlawful financial compensation by publishers is indeed the presumed carrot being dangled in front of journalists to chase after, Gamergate’s is just as contingent on satisfying a set of audience expectations that by definition require a measure of consistency in defying what is perceived as the ‘mainstream’. This inherently makes Gamergate’s opinions harder to trust because, unlike journalists, they don’t have a financial safety net to cushion them when circumstances are dire, whereas Gamergaters of TheQuartering’s ilk rely on manufacturing controversy to supplement any potential lack of engagement. The paradoxical nature of a free market argument for diversity. Another point where Gamergate ultimately fails to make the case for its existence is the undue amount of indignation it is quick to display at depictions of characters that are non-male, queer, and/or of color (Japan’s cultural produce notwithstanding). The thinking goes that representation for minorities is disproportionate to the population consuming them — which already presumes an Americentric narrative of game sales — but the data just doesn’t bear this out.
The female representation index for video games is at one of the lowest points it’s ever been — queer, brown and black characters not faring that much better — and if there’s often this erroneous notion of inevitabilism that casts a shadow on the entire discourse, inclusivity hasn’t exactly taken on a linear adoption course as one would predict. If the gaming industry were truly plotting the erosion of the white man from its fixtures, it probably could’ve done a much better job seeing it through. Then comes the issue of the “common man versus corporation” narrative, which Gamergate is quick to use whenever it feels its back against the wall. For TheQuartering — or so he purports — part of leading his crusade is to show that an independent creator can stand as a capable counterweight to corporate influence, which can only manifest as anti-consumerist behavior with the sole purpose of screwing over players — to put it charitably, this thesis does a subpar job of diagnosing the real issue with corporate influence in the gaming industry. The question very few are asking. To defy the tug of corporate greed in the industry means that utmost attention has to be paid to those standing to lose the most from it — those being game developers. Their role belittled by Gamergate, developers are often the ones being dealt the short end of the stick. Between issues of poor compensation, workplace misconduct and crunch, the gaming industry’s got a whole lot to atone for — any initiative that seeks to liberate gamers from the deadly grip of capitalism has to factor in labor reform as a necessary prerequisite. Gamergate had the opportunity to peddle that when the industry’s #MeToo moment came, but they quickly squandered it in favor of the customary misogynistic drivel. It’s not about catering to Gamergate’s identitarian discrepancies — it’s about making the gaming industry sustainable, so that it stands a chance of ever surviving its potential demise. Where this comes back around is the seminal thesis of Gamergate — that if enough noise is made, change will happen, and that change will inevitably be to the gaming medium’s benefit. The opposite is manifesting itself to be true, however — since gaming culture has become almost synonymous with toxicity, more are showing reluctance to join it. Be it journalists or developers, the cost of doing something that can be easily framed as an anti-gamer gesture is far too high, and seeing the reputation some have accrued for standing up for themselves and fighting, it’s not too far a stretch to suggest that the gaming industry has become one of the worst professional career paths in recent years. The trauma from being isolated by mobs of harassers is far too great even for the well-paid to stomach. Jeremy “TheQuartering” Hambly is only one such example of a grift explicitly set up with the purpose of sapping confidence from a field where the culprit for bad outcomes is more often than not fan outrage — but his analogues are many, and his supporters even more so. If unchecked, his rhetoric will continue to fuel the very worst in gamer toxicity — feeling utterly terrorized not to anger the wrong crowd is a state of affairs we can’t allow to persist. It’s a failure of our culture’s self-corrective faculties that such behavior remains even within earshot of acceptable, but alas, to a great many people, it is.
https://medium.com/swlh/thequartering-gamergate-harassment-youtube-9ccae32ff698
['A. Khaled']
2019-12-28 08:35:18.947000+00:00
['Culture', 'Gaming', 'Social Media', 'GamerGate']
Sometimes Things Need To Be Well Organized To Make Them Better
Organizing things helps you unleash the maximum potential of whatever you work with. So far so good! That is one of the most common phrases we hear, perhaps hundreds of times a day. Surprisingly, we pretend we are living in Paradise at the moment: whatever is happening is all good. But do you think it's all going well? That's how we tie up our days with as many chunks of lies as possible. You might be thinking, what's wrong with accepting whatever happens to us? Now here is the climax, and I won't disappoint you at this point: I also hear and practice the same phrase, "So far so good." But in reality, frankly speaking, we don't mean what we say. Indeed, our lives are so busy nowadays that our minds are in a constant process of computing and managing the next errand. It is hard to believe, but most of our productive energy goes in vain and we move around with a stumbling brain. We feel excited at the beginning of every new errand we step into. Gradually, with time, the level of excitement deflates rapidly to nothing. This happens largely because we don't organize our day-to-day things to bring them towards betterment. Consequently, our things start overflowing on us, like a glass overfilled when water is poured beyond its capacity to hold. As far as the glass of water is concerned, you might say, "Oh! It's due to a lack of attention." But what would you say, and whom would you blame, if you couldn't organize your stuff? The answer is simple: it is just you who is responsible for all this mess. Sorry if I'm being a little rude, but that's true for sure. I remember I used to repeat the phrase "there is no time!", or sometimes blame the shift timings of my job, when I couldn't manage the errands. And gradually we become habituated to saying this to ourselves and others. Does it make any difference, or do we reach anywhere? Not at all! We end up with resentment and embarrassment. Soon we fall into profound mental stress, which later transforms into anxiety and, finally, nothing else but frustration. Failing to organize things at the appropriate times and places slowly reduces our productivity to zero. Once productivity slows down, our minds stumble around errands with no interest at all. Finishing each errand or project becomes the last hope of our life, which in turn makes it even more frustrating and anxiety-inducing to make it happen. Perhaps I'm experienced in going through the same pattern of doing things in an unorganized way. But I've changed myself a bit. Now I love to keep check and balance and live my visionary life productively rather than as just a workaholic. Indeed, it's good to keep yourself tied to some errands, but to what extent you are productive in your life matters most. "Though, there is a price to pay for everything." Whether it's a relationship, health, profession, or your career, you ought to be smart enough to get the slice of pizza devoid of any biases and violence. Written by: Mubeen khan Creative & Academic Content Writer, Researcher, Digital Marketer. Footnote: Feel free to mention your opinions [email protected]
https://medium.com/@khanmubeen.uok.pk/sometimes-things-need-to-be-well-organized-to-make-them-better-e612017cc1c
['Khan Mubeen']
2020-12-26 16:08:36.617000+00:00
['Optimization', 'Management And Leadership', 'Work Life Balance', 'Stress Management', 'Organizing Tips']
Tongue Tied
Tongue Tied Photo by GREG KANTRA on Unsplash If only I could turn back the expired hands of time, and eat my tattered words, From lips not meant to hurt you, and destroy us, like a sonic boom, with thoughts blocked and souls scarred. Now I relent, just trying to survive, as I dream on, set the alarm, for the rest of eternity. Tongue tied, entangled in a snare, I feel like broken glass, soiled by pigeons crapping on the grass, messing with my perfect hair, reality taking a filthy bite of illusion, hungover and just trying to inhale the truth, exhale vodka and vermouth, swallowing smoke and delusion and bitter pills to mute the pain, as voices swell inside my brain, hemorrhaging words that bleed the canvas of the night and stain the stars, until the palette’s ink runs dry. If only I could tattoo my lips another life, and love like silk, to stop the rant of the expired hands of time. © Connie Song 2020. All Rights Reserved.
https://psiloveyou.xyz/tongue-tied-cabeed792849
['Connie Song']
2020-12-13 21:11:23.056000+00:00
['Time', 'Relationships', 'Tattoo', 'Poetry Sunday', 'Self']
Class Diagram
UML diagrams are divided into two broad categories: Structure Diagrams and Behavioral Diagrams. Structure Diagrams: Structure diagrams are used to represent the static structure of a system, such as the attributes, features, and how classes and objects are linked with each other. The examples of structure diagrams are listed below: · Class diagram · Component diagram · Deployment diagram · Object diagram · Package diagram · Profile diagram · Composite structure diagram Behavioral Diagrams: Behavioral diagrams are used to show the behavior of a system: how a user will interact with the system and what functionalities are performed by the system. Different types of behavioral diagrams are listed below: · Use case diagram · Activity diagram · State machine diagram · Communication diagram · Sequence diagram · Interaction diagram · Timing diagram The class diagram falls under the category of structure diagrams, which are used to represent a static view of a system. The class diagram is not only used for visualizing, describing, and documenting different aspects of a system but also for constructing executable code of the software application. The class diagram can be mapped directly onto object-oriented languages. A class diagram is an illustration of the relationships and source code dependencies among classes in UML. A class defines the methods and variables of an object, which is a specific entity in a program or the unit of code representing that entity. Purpose of class diagram: · Analysis and design of the static view of an application. · Show the collaboration among the elements of the static view. · Describe the functionalities performed by the system. · Use an object-oriented approach for the construction of software. · Forward and reverse engineering. Notations: Classes are portrayed as boxes rather than curly brackets. The box contains three compartments. The topmost part is where the class name is written. The middle part contains the attributes, and the lower part contains the methods and operations that a class has to perform. Lines that may have arrows on one or both ends connect the boxes and show the link or association between them. The name of a class should be meaningful and describe the relevant aspect of the system. Relationships should be identified clearly. Use notes when required to describe some aspect of the diagram so that it is understandable to the developer. Relationships in the class diagram: I. Association: This is a broad term that encompasses just about any logical connection or relationship between classes. Multiplicity is an important concept in association relations. · Multiplicity: It is an active logical association where the cardinality of a class in relation to another is depicted. For example, one fleet may include multiple airplanes, while one commercial airplane may contain zero to many passengers. The notation 0..* denotes "zero to many". II. Inheritance/Generalization: Refers to a type of relationship wherein one associated class is a child of another, assuming the same functionalities as the parent class. In other words, the child class is a specific type of the parent class. III. Realization: Denotes the implementation of the functionality defined in one class by another class. IV. Aggregation: Refers to the formation of a particular class as a result of one class being aggregated or built as a collection. For example, the class “library” is made up of one or more books, among other materials.
In aggregation, the contained classes are not strongly dependent on the life cycle of the container. In the same example, the books will remain books even when the library is dissolved. V. Composition: The composition relation is similar to aggregation, the only difference being its key purpose of emphasizing the dependence of the contained class on the life cycle of the container class. Sample Class diagram:
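The sample diagram itself is not reproduced in this text version. As a rough substitute, here is a small, illustrative Java sketch (all class names are invented for the example) showing how the relationships above typically map to code:

import java.util.List;

// II. Generalization/inheritance: Dog is a specific type of Animal.
class Animal { void eat() {} }
class Dog extends Animal { }

// III. Realization: Printer implements the functionality defined by an interface.
interface Printable { void print(); }
class Printer implements Printable { public void print() {} }

// I. Association with multiplicity: one Fleet includes many Airplanes (1 to 0..*).
class Airplane { }
class Fleet { List<Airplane> airplanes; }

// IV. Aggregation: the Library is built from Books, but the Books outlive it.
class Book { }
class Library {
  private List<Book> books;
  Library(List<Book> books) { this.books = books; } // books are supplied, not owned
}

// V. Composition: the Engine's life cycle is bound to the Car's.
class Engine { }
class Car {
  private final Engine engine = new Engine(); // created and destroyed with the Car
}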
https://medium.com/@alwazkazi3/class-diagram-415b7c40e12b
['Alwaz Qazi']
2020-12-03 17:32:43.047000+00:00
['Uml Diagrams', 'Structural', 'Object Oriented', 'Class Diagram', 'Uml']
The Exiled Prince — Chapter 12: Prince Dimitri of Valoria
“This is your floor.” Mikhail got out of the elevator and turned left. Dimitri followed right behind the hologram, silently looking at the minimalist design of the passageway. This underground corridor only holds ten doors versus the twenty-room recovery area where they came from thirty floors up. On one wall were eight large identical wooden doors with the elevator right smack dead center. At the opposite ends of the corridor, two steel doors served as emergency exits. Everything was positioned in perfect symmetry. The opposite wall served as an enormous virtual screen that imitated a natural landscape view. This feature made the supposedly dark subterranean floor more habitable. It created the illusion of being inside a two-storey house built on top of a ridge that commanded a majestic view of the vast expanse of forest below. Dimitri couldn’t help but compare the sharp contrast between the modern interior of the castle and its antiquated exterior. He began to discover the multiple layers in the castle’s design. Pretty much like an onion, he uncovered each new layer to be more remarkable than the previous one. “These are your living quarters.” Dimitri stopped walking and turned to Mikhail who had disrupted his contemplation. He saw the AI butler standing still beside the last wooden door nearest the exit. “All you need to do is stand right in front of the door and tap this button to open it.” Mikhail’s right hand pointed to the single button above the door handle. “How about giving it a try?” Dimitri simply nodded. He stood right in front of the door and saw his name carved elegantly on it. His right hand reached for the button but stopped midway. He just stood there, hesitant to proceed. “Our system will automatically scan your body.” Mikhail, standing beside Dimitri, sensed the young man’s apprehension. “It will grant you access and open the door for you.” “Is this how I’ll get in and out of every door in this place?” Dimitri glanced back at the AI butler, his expression filled with curiosity. “Yes and no!” Mikhail answered ambiguously. “You will be able to open any door that you have been granted access to.” “Of course, that did not stop you from breaking into restricted areas before.” Dimitri simply stood there, rendered speechless. He could not recall ever committing any of those acts Mikhail had accused him of. “Now, go on.” The AI butler pointed at the button, urging the young man to open the door. “Press that button. Time is of the essence!” “You only have exactly one hour to freshen up.” *** “As you can see, your living quarters is mainly designed for sleep and relaxation.” “Another gigantic virtual screen?” Dimitri stepped inside, silently observing his new room. He glanced around, not surprised at all to see Mikhail already hovering by the window. Right behind the AI butler, he noticed the same realistic view of the forest that he had seen earlier in the hallway. Although the room was small, he saw that it could still hold a king-sized bed. Two bedside tables flanked the bed. The sparsely furnished bedroom exuded a zen-like flair. “These doors lead to your walk-in closet and loo.” Without waiting for Dimitri’s response, Mikhail proceeded with the tour. The AI butler pointed at the two doors, one right next to each bedside table. “A third door, that you can not see from here, discreetly connects your walk-in closet and loo.” “The window serves as a virtual screen.” The AI butler showed Dimitri how to use the computer. 
“It can be activated by voice or through the virtual keyboard.” “You can fool around with the computer right now, if you like.” Mikhail pulled up the help page that contained the list of voice commands. “But remember, you need to be ready in no more than one hour.” *** “There aren’t any relevant matches for your search.” Dimitri heard an automated female voice. The young man sat at the edge of his king-sized bed, staring solemnly at the large virtual screen on the wall. With Mikhail gone, he felt free to gather more information about himself, ignoring the timer the AI butler had set on the screen. “New search: search Prince Dimitri of Valoria.” “Searching for Prince Dimitri of Valoria,” the automated female voice said. At the same time, the virtual screen on the wall displayed a progress bar filling up bit by bit. “There aren’t any relevant matches for your search.” “What’s going on?” The young man sighed, disappointed by the search results. Anticipating a more positive result, he had been searching for everything that he could remember from his memory flashbacks. However, no matter what keyword he uses, the search results still showed nothing. “Why can’t I find any relevant information?” Dimitri had already tried searching for Valoria, the kingdom he supposedly came from. He had scoured through the list of names in the database for all the members of his entire family tree. Or at least, according to all the information he could recall from his memory flashbacks that spanned over 13 years. Unfortunately, the search returned nothing. It appeared as though his entire family tree did not actually exist. “New search: search for The Council.” The young man ordered the virtual search engine, shifting his focus to the present. But before the female voice could respond, the virtual screen froze. After a few seconds, it reverted back to its default view. Exactly the same forest view he had seen on the hallway wall appeared on the virtual screen. “Shouldn’t you be getting ready?” Mikhail suddenly appeared. The dimly lit room instantly brightened. The AI butler waved his left hand, showing all the keywords the young man had been searching for on the screen. “You can not hide this from me, Dimitri. I know what you are up to.” Dimitri looked straight at the screen right behind Mikhail. “How did you know?” “You did not activate private mode, silly.” The AI butler sneered. The young man got up. His face turned serious. “How do I activate private mode?” “All you need to do is say ‘Activate Private Mode’ out loud.” Mikhail answered honestly. “But that will not work for you at this time.” “Why not?” Dimitri stared at the AI butler with a questioning look. Mikhail rubbed his chin thoughtfully. “The Council has been monitoring your every move since you came back from HQ eight months ago.” “HQ?” The young man asked curiously. “The Council’s Headquarters, silly!” The AI butler said. “But why would they do that?” Dimitri asked further. “Beats me.” Mikhail shrugged. “The Headmaster also prohibited you from searching for any of that information outside of this room. If there is anything you would like to know, you would need to ask him yourself.” “Now, enough with these questions!” The AI butler waved his left hand, clearing all the keywords displayed on the screen. “You need to hurry!” “Armand has been waiting for you at the training hall.” *** Author’s Notes: What do you think about Chapter 12: Prince Dimitri of Valoria? Do share your feedback with me in the comment section below. 
Also, kindly follow @aldenmyro on Facebook, Instagram and Twitter for updates!
https://medium.com/@aldenmyro/the-exiled-prince-chapter-12-prince-dimitri-of-valoria-97130706af71
['Alden Myro']
2021-01-18 03:09:18.299000+00:00
['Series', 'Fiction', 'Aldenmyro', 'Theexiledprince', 'Novel']
What is TDZ? From var to let and const (JavaScript ES6)
let and const were introduced in ES6, and along with them the temporal dead zone (TDZ). It isn’t the most recent addition to JavaScript, but I think that many developers still aren’t familiar with it. In this article I’ll explain what TDZ is and why it exists, I’ll discuss some of the differences between var and let/const, and I’ll touch on topics like hoisting along the way. This article is suitable for both beginners and experienced JavaScript developers wanting to brush up on the topic. Let me start off with the following definition of TDZ from MDN. I’m going to break it apart, so don’t worry if it’s not clear at first. Also note that it isn’t completely accurate, but we’ll get to that. In ECMAScript 2015, let bindings are not subject to Variable Hoisting, which means that let declarations do not move to the top of the current execution context. Referencing the variable in the block before the initialization results in a ReferenceError (contrary to a variable declared with var, which will just have the undefined value). The variable is in a “temporal dead zone” from the start of the block until the initialization is processed. One of the differences between let and var is that var is hoisted to the top of the execution context. So, for example, if we run the first function in the sketches below, the output is undefined. The variable x was hoisted to the top of the function, as if it was declared “var x;”. We log it to the console before giving it any value, so the output is undefined. Hoisted variables aren’t really physically moved to the top of the function and declared; it is meant in a more figurative way. The variable is put into memory and initialized with a value of undefined when the code is compiled. This takes place before the code is actually executed (the creation and execution phases). The result, however, is similar to moving variable declarations to the top of the scope without assigning them a value. If we try the same example using let (the second sketch below), the result will be different: we receive “ReferenceError: x is not defined”. As MDN says, let isn’t hoisted like var is. In other words, x isn’t initialized when we’re trying to log it to the console. Trying to access an uninitialized variable causes a ReferenceError. Although the explanation from MDN says that “let bindings aren’t subject to Variable Hoisting”, you could find contradictory information claiming that they are in fact hoisted. When the scope of a let variable is entered, storage space (a binding) is created for it. The engine is aware of its existence, so you could say that it is in that sense hoisted, although it is not initialized. Look at the third sketch below to see how let is hoisted (source): it shows a ReferenceError, but if we remove line 4 we’ll see ‘outer scope’. The existence of line 4 changed the output of line 3. The period between entering a scope and the execution reaching the actual declaration (line 4) of a let variable is precisely the temporal dead zone. This example demonstrates the TDZ and that a let variable is actually hoisted in a sense. If we give the explanation from MDN the benefit of the doubt, it is simply referring to the specific way var is hoisted, or in other words, to the life cycle of a var-declared variable. let isn’t hoisted in that sense, but it is in the sense that the engine is aware of its existence. As a side note, TDZ is temporal; after all, it is called the temporal dead zone. Meaning that it relates to time, as in the time of execution, and not the physical location in the code.
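The code samples embedded in the original post are not part of this text version, so the following are minimal sketches written to match the behavior described above (function and variable names are my own):

// First sketch: var is hoisted and initialized with undefined.
function withVar() {
  console.log(x); // undefined
  var x = 10;
}
withVar();

// Second sketch: let is hoisted but left uninitialized.
function withLet() {
  console.log(x); // ReferenceError
  let x = 10;
}
withLet();

// Third sketch: line 4 puts line 3 inside the TDZ of the inner x.
let x = 'outer scope';   // line 1
(function () {           // line 2
  console.log(x);        // line 3: ReferenceError
  let x = 'inner scope'; // line 4: remove this line and line 3 logs 'outer scope'
}());                    // line 5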
See the first example in the sketches at the end of this section: line 3 doesn’t throw an error because by the time it is executed our variable has already been initialized. All in all it looks like TDZ is good for us developers. Accessing a variable before it’s declared isn’t something you’ll usually be doing on purpose, so it’s a pretty useful addition that could potentially prevent some issues. However, you may be surprised to know that the main reason for implementing TDZ wasn’t helping you avoid errors. After all, trying to access variables before they are declared isn’t one of the most common errors around. The implementation of TDZ has to do with the addition of const. const is another keyword added in ES6. Basically it is used to declare constants. Let me quote MDN again here: Constants are block-scoped, much like variables defined using the let keyword. The value of a constant can't be changed through reassignment, and it can't be redeclared. According to Allen Wirfs-Brock, the project editor of ES6, the main motivation for TDZ was to make const work. It was then applied to let mostly for the sake of consistency. let could probably work fine without TDZ, but const needed a “rational semantic” to make it work (source). If const behaved like var then it may have been possible, for example, to assign a value to a hoisted const before reaching the actual declaration. It works with var, as the second sketch below shows. In other words, TDZ’s goal is to make const assignment possible only at declaration time. It should be defined once (declared and initialized with a value), and accessing it should return that value. Note however that although const can’t be given a new value through reassignment, and it cannot be redeclared, its value can be modified in some cases. For example, if myObj is a constant we can give it new properties and change their values as usual; we cannot, however, reassign myObj itself (third sketch below). A case can be made that if consistency was important then let should have behaved like var. It seems that being able to refactor between let and const without unexpected surprises was one of the reasons that TDZ was applied to both of them. On the other hand, some might say that there is no longer a use for var. If that is the case then why would we care about it? Wirfs-Brock noted that the goal of let is to provide a variable that is block-scoped, not to replace var per se. With that in mind, there may be cases where var could be more suitable than let and still be used. This is beyond the scope of this article however, so if you’re interested check out The case for var by Kyle Simpson. In conclusion, we talked about hoisting, how let/const are technically hoisted but in a different way than var, what TDZ is and why it was added, and we touched on other issues like whether there should even be a TDZ and whether var is still relevant. If you have any questions regarding this post or any suggestions, corrections or notes, feel free to comment, email or DM me. Thanks for reading! Sources and further reading: TDZ in MDN web docs Hoisting in MDN web docs Chapter on TDZ in “Exploring ES6” by Dr. Axel Rauschmayer. Performance concern with let/const for some arguments against TDZ Photos by Caspar Camille Rubin on Unsplash Thanks to Alon Valadji.
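As above, the embedded examples are reconstructed here as minimal sketches matching the described behavior:

// First sketch: TDZ is about execution time, not code position.
const func = () => console.log(myVar); // line 1: refers to myVar, not executed yet
let myVar = 3;                         // line 2: the TDZ of myVar ends here
func();                                // line 3: no error, logs 3

// Second sketch: assigning before the declaration works with var.
y = 42;
console.log(y); // 42
var y;

// Third sketch: a const can't be reassigned, but its value can be modified.
const myObj = {};
myObj.prop = 'new value'; // fine: adding and changing properties works
// myObj = {};            // TypeError: Assignment to constant variable.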
https://medium.com/@hilal-rizek/witwhat-is-tdz-from-var-to-let-and-const-javascript-es6-3cbd5c49a48a
[]
2021-02-07 21:50:44.280000+00:00
['Web Development', 'ES6', 'Javascript Development', 'Javascript Tips', 'JavaScript']
Enter Uber Park: Powered by Metropolis
Today, I’m thrilled to announce our partnership with Uber and the next evolution in urban mobility: Uber Park. Everyone in Los Angeles and Nashville (and more cities to come) can use their Uber app to access the Metropolis network and a seamless, checkout-free parking experience. You drive in, you drive out — that’s it. No tickets, no cash, no fumbling with payment kiosks, no circling around aimlessly and backing up the traffic behind you, no worrying about expired time, or paying for more time than used. Uber Park is a new way to park — You enter your license plate into the Uber app, and each time you pull into a participating parking location, Metropolis’ technology immediately detects your car’s arrival and triggers a push notification confirming the start of the parking session. Once you’re ready to depart, you simply drive out and your Uber account is charged for the duration of your stay. L.A. consistently ranks among the worst in the nation for traffic congestion, and as an L.A.-grown company, we founded Metropolis with the mission to create mobility solutions for the transportation needs of our city, and cities around the country. With Uber Park, we are able to place our technology in the hands of millions of Uber customers and deliver a seamless experience, taking cars off the streets and getting customers where they want to be. We’re making parking as easy as requesting an Uber. I am very grateful to the Metropolis and Uber teams, as well as the investors that helped us get here including 3L Capital, 01 Advisors (01A), Dragoneer, Slow Ventures, Zigg Capital, Starwood, RXR, DivcoWest, Halogen and Baron Davis. -Alex — Alex Israel, Co-Founder + CEO of Metropolis Alex Israel is the Co-Founder and Chief Executive Officer of Metropolis. Previously he was Vice President and General Manager of INRIX, a leading global traffic intelligence provider connecting cars to smarter cities in more than 60 countries around the world. In 2009, Alex Co-Founded ParkMe to facilitate a paradigm shift within navigation and grew the company to become the world’s most comprehensive parking database, offering insight for over 60,000 parking locations across the U.S., Canada, Europe and Asia. With more than a decade of experience building mobility platforms, Alex started Metropolis to make cities more efficient and to support the future of urban mobility.
https://medium.com/@alex-israel/enter-uber-park-powered-by-metropolis-16e1555460a3
['Alex Israel']
2021-11-24 15:47:32.681000+00:00
['Mobility', 'Parking', 'Uber', 'Metropolis', 'Artificial Intelligence']
Text Data Pre-Processing Using Word2Vector and t-SNE
Multi-class data classification to predict the sentiment of phrases from movie review sentences, as rated by users on a sentiment scale of 0–4. Introduction With the growth of web text data, such as online reviews posted by users for hotel bookings, e-commerce websites and movies, understanding the business and the needs of the user plays an important role in decision-making for companies [2]. The objective of this project is to use multi-class classification, instead of binary classification (positive/negative), to predict the sentiment of phrases from movie review sentences on a sentiment scale of 0 to 4, where 0 is the lowest sentiment (negative) and 4 is the highest (positive). This project first introduces the description of the data in mathematical form, along with a description of the features of the datasets. It then describes one of the major tasks in sentiment analysis, which is pre-processing text data into numeric data. Next, it focuses on the analysis and distribution of the features, which informs the subsequent feature extraction step. Furthermore, it introduces several machine learning methods, such as logistic regression, decision tree and random forest, used for classifying sentiments. Finally, the results of the machine learning methods are presented and compared, and future directions for this project are suggested. Data Description The dataset is a collection of movie reviews from the website “www.rottentomatoes.com”. It was provided by the website “www.kaggle.com” and originally collected by Pang and Lee. The dataset consists of tab-separated files (tsv) containing phrases from the Rotten Tomatoes dataset. Here, each phrase has its phrase Id and each sentence has a sentence Id. Phrases which are repeated are only included once in the dataset. The source of the dataset is https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews/data. Description and format Description of the dataset in mathematically correct formalism Universe Ω = {Website (Rotten Tomatoes), User who is writing a review, Internet} Elementary Events ω = The possibility of the user writing the review in the comment section. Measurable Function (RV-function) = the procedure of reading reviews given by the users and measuring the reviews according to the sentiment. Data Value Space = {PhraseId, SentenceId, Phrase, Sentiment} Format of the dataset The dataset is divided into training and test data, represented by the “train.tsv” and “test.tsv” files respectively. The RV-function of the dataset is a procedure of reading reviews given by the users and measuring the reviews according to the sentiment. The training dataset file’s first line identifies the feature names, followed by rows of feature values. The feature names, or the Data Value Space (DVS), of the training dataset are PhraseId, SentenceId, Phrase and Sentiment. Table 1 shows a version of the data for train.tsv. Similarly, the test.tsv file is formatted using the same structure except for the Sentiment column, which is unknown. The purpose of this project is to predict the sentiment of the phrases from the model trained with the help of train.tsv, where the sentiment is known. Table 2 shows a lightweight version of test.tsv. The columns have the following meaning: PhraseId: The ID of the phrase. SentenceId: The ID of the sentence, which helps to track the phrases taken from sentences. Phrase: The phrases from the sentences written by the user on Rotten Tomatoes.
Sentiment: It is a label given to the phrases to convey sentiments. The sentiments range from 0–4. The sentiment labels are: 0 (negative), 1 (somewhat negative), 2 (neutral), 3 (somewhat positive) and 4 (positive). Data Pre-processing For the purpose of this project, the data taken from train.tsv and test.tsv have shapes of 100×4 and 100×3 respectively. The dataset is fairly clean with no missing values. For each phraseId there is a phrase, sentenceId and sentiment mapped to it in the train.tsv file. Similarly, in test.tsv, for each phraseId there is a phrase and sentenceId mapped to it. Before preprocessing the data, I used several statistical methods to understand the data. The number of each sentiment in the train.tsv file was visualized using a barplot. Figure 1 shows the barplot of the division of phrases according to their sentiments. Figure 1: Barplot for sentiment count According to the barplot, the sentiment classes seem to follow a roughly normal distribution, with the most frequent class being sentiment label 2, which represents neutral on the given range. One of the features in the dataset is “Phrase”; this feature stores data in the form of words. These words need to be tokenized into numeric format. Figure 2 shows an example of a phrase from the dataset. Figure 2: One of the phrases from the dataset To begin with, in order to change the words to a numeric format, I used the word2vec method. The word2vec method takes a corpus of text as its input and converts the text into a vector space with several dimensions. Words which share context in the corpus are located close to one another in the vector space. For example, take “Have a nice day.” and “Have a great day.” Here, “nice” and “great” will be placed closer in the vector space because they convey similar meaning in this context. Figure 3 shows the conversion of words into a vector space. Figure 3: From word to vector conversion using word2vec. The frequency of the words present in the phrase column of train.tsv is shown in Figure 4. Figure 4: Word frequency of the training dataset Similarly, the frequency of the words present in the phrase column of test.tsv is shown in Figure 5. Figure 5: Word frequency of the testing dataset At this point we can visualize the frequency of the words in the phrases. However, we still do not know the sentiment of the phrases; moreover, the number of features after converting words into numeric format has increased drastically. Therefore, to understand the relationships between the features, I analyzed the correlation between words. Figure 6 shows the graph for the correlation of words with each other. Figure 6: Correlation between words We see in Figure 6 that the correlations between words are difficult to interpret; the high dimensionality will also affect the machine learning models’ performance. Therefore, the next step is dimensionality reduction. In this project, to understand the data better, I used an algorithm called t-SNE, which is effective for dimensionality reduction of word embeddings and is also used for visualizing high-dimensional datasets; plotting similar words clustered together in the graph gives us a deeper idea of the sentiment of a phrase. Figure 7 shows the t-SNE visualization of the word “good” and the words which are closest to it. Figure 7: t-SNE visualization for “good”.
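As a rough illustration of the preprocessing steps described above, a minimal sketch using gensim’s Word2Vec and scikit-learn’s t-SNE might look like this (the whitespace tokenization and parameter values such as window=5 are assumptions for illustration; gensim 4.x API):

import pandas as pd
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# Load the Kaggle Rotten Tomatoes phrases (tab-separated file)
train = pd.read_csv("train.tsv", sep="\t")

# Naive whitespace tokenization of each phrase (a real pipeline would clean text first)
sentences = [phrase.lower().split() for phrase in train["Phrase"]]

# Embed words into a 128-dimensional vector space
model = Word2Vec(sentences, vector_size=128, window=5, min_count=1, workers=4)

# Words used in similar contexts end up close together
print(model.wv.most_similar("good", topn=5))

# Project the embeddings to 2D with t-SNE for visualization
words = model.wv.index_to_key[:500]   # limit to the most frequent words
vectors = model.wv[words]
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)
print(coords.shape)                    # (500, 2): one 2D point per word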
Machine Learning Logistic regression approach Logistic regression is a simple classification technique; it is a common and useful method for solving binary classification problems [3]. Here, I fit the model on the training dataset and performed prediction on the test set; the accuracy of this model was 83%. Figure 8 shows the plot of the predicted result from the model. Figure 8: Prediction result for logistic regression Decision tree model The decision tree is another classification model, capable of both binary and multi-class classification. The goal of using decision trees is to create a model that predicts the value of sentiment for the test dataset by learning simple decision rules inferred from the training dataset [4]. The accuracy of this model was 99%. Figure 9 shows the plot of the predicted result from the model. Figure 9: Plot result for decision tree Random Forest Approach A random forest consists of a large number of decision trees operating as an ensemble. In this model, each individual tree makes its own class prediction, and the class with the most votes becomes the model’s prediction [5]. On our dataset, this approach yields an accuracy of 98%. Figure 10 shows the prediction result for the model. Figure 10: Prediction result for random forest Lastly, we compare the results of the three approaches: logistic regression, decision tree and random forest. Training on a dataset of 100 rows, we see that the majority of predictions assign phrases to sentiment class 2, which represents “neutral” according to the labels given to the sentiments. Figure 11 shows the overall prediction for each model. Figure 11: Result for each method Conclusion This report concludes by encompassing the basic steps of statistical learning, such as collecting data, cleaning the data, preprocessing the data so it can be fit by a model, analyzing the data distribution and finally using machine learning algorithms to make predictions. Defining the data in the form of universe, event, RV-function and data value space helped in understanding the fundamentals of the dataset; analyzing the data distribution, word frequencies and correlations among features then helped in understanding the data in a deeper, more meaningful way. Specifically, the preprocessing step in which words were converted into numeric format using the word2vec method played an important role in sentiment classification. Using logistic regression, decision trees and random forests as classifiers can prove beneficial for text and sentiment analysis. Finally, model accuracy ranged from 83% to 99% across the three models. In both the training and testing datasets, the sentiment labels were not diverse, which led the models to overfit. References [1] Data source: https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews/data [2] Zhou, Li-zhu, Yu-kai He, and Jian-yong Wang. “Survey on research of sentiment analysis.” Journal of Computer Applications 28.11 (2008): 2725–2728. [3] Kleinbaum, David G., et al. Logistic regression. New York: Springer-Verlag, 2002. [4] Kothari, R. A. V. I., and M. I. N. G. Dong. “Decision trees for classification: A review and some new results.” Pattern recognition: from classical to modern approaches. 2001. 169–184. [5] Biau, Gérard. “Analysis of a random forests model.” Journal of Machine Learning Research 13.Apr (2012): 1063–1095.
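For concreteness, the three classifiers compared in this article can be fit with scikit-learn roughly as follows (a sketch with stand-in data; in the project described above, X would hold one vector per phrase derived from the word2vec embeddings and y the 0–4 sentiment labels):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data so the sketch runs; replace with averaged word2vec features per phrase
X = np.random.rand(100, 128)
y = np.random.randint(0, 5, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))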
https://medium.com/swlh/text-data-pre-processing-using-word2vector-and-t-sne-2321fbce5b9
['Sanyukta Suman']
2020-11-14 12:53:36.141000+00:00
['Python', 'Word2vec', 'Decision Tree', 'Linear Regression', 'Random Forest']
Electronic Records in Present-Day Healthcare System
At present, the healthcare system is witnessing the transition of patients’ medical records from paper to electronic form. After the US adopted the mandated switch to electronic records, they received extensive news coverage both in medical and mainstream publications. The digitalization era has given birth to a number of terms, such as electronic health records (EHRs) and electronic medical records (EMRs), that stand in the foreground of such publications and are sometimes used interchangeably. However, there are distinct differences between them — as well as between other newly coined terms that describe different approaches to the digitalization of medical life. Electronic Health Records An electronic health record (EHR) is a digital version of a patient chart, an inclusive snapshot of the patient’s medical history. It contains input from all the practitioners that are involved in the client’s care, offering a comprehensive view of the client’s health and treatment history. Electronic health records are designed to be shared with other providers, and authorized users may instantly access a patient’s EHR from across different healthcare providers. Elements of EHRs As a rule, EHRs contain the following data: Patient’s demographic, billing, and insurance information; Physical history and physicians’ orders; Medication allergy information; Nursing assessments, notes, and graphics of vital signs; Laboratory and radiology results; Trending labs, vital signs, results, and activities pages for easy reference; Links to important clinical information and support; Reports for quality and safety personnel. Electronic Medical Records An electronic medical record (EMR) is a digital version of a patient’s chart used by a single practice: a physician, nurse practitioner, specialist, dentist, surgeon or clinic. In essence, it is a digitized version of the chart that healthcare facilities previously used to keep track of treatments, medications, changes in condition, etc. These medical documents are private and confidential and are not usually shared outside the medical practice where they originated. Electronic medical records make it easier to track data over time and to monitor the client’s health more reliably, which leads to better long-term care. Elements of EMRs EMRs usually contain the following information about the client: Medical history, physicals, notes by providers, and consults from other physicians; Medications and allergies, including immunization history; Alerts to the office and the patients for preventative tests and/or procedures, e.g., lab tests to follow up colonoscopies. Personal Health Records An electronic personal health record (PHR) provides an electronic record of the client’s health-related information and is managed by the client. It is a universally accessible and comprehensible tool for managing health information, promoting health maintenance, and assisting with chronic disease management. A PHR may contain information from multiple sources such as physicians, home monitoring devices, wearables, and other data furnished by the client. With PHRs, each client can view and control their medical data in a secure setting and share it with other parties. However, a PHR is not a legal record unless so defined and is subject to various legal limitations. Besides, though PHRs can provide important insights and give a fuller view of the client’s health and lifestyle, their inaccuracy and lack of structure limit their use in clinical and medical studies.
Benefits Electronic Records Offer Digital medical records may offer significant advantages to both patients and healthcare providers: Medical errors are reduced and healthcare is improved thanks to accurate and up-to-date information; Patient charts are more complete and clear — without the need to decipher illegible scribbles; Information sharing reduces duplicate testing; Improved information access makes prescribing medication safer and more reliable; Promoting patient participation can encourage healthier lifestyles; More complete information improves diagnostics; Facilitating communication between the practitioner and client; Enabling secure sharing of the client’s medical information among multiple providers; Increasing administrative efficiency in scheduling, billing, and collections, resulting in lower business-related costs for the organization. So where is AI? Electronic records are expected to make healthcare more efficient and less costly. However, in reality, under less-than-ideal circumstances, workarounds and errors of different types appear and complaints mount. Improving EHR/EMR design and handling requires mapping complaints to specific EHR/EMR features and design decisions, which is not always a straightforward process. Over the last year, more informatics researchers and software vendors have turned their attention to EHR/EMR systems, and more of them have started to rely on AI to give deeper insights into the design and handling of electronic records. So far, AI is used to assist medical professionals with electronic records flow in the following spheres: Data extraction from free text The free structure of clinical notes is notoriously difficult to read and categorize with straightforward algorithms. AI and natural language processing, however, can handle the heterogeneity of unstructured or semistructured data, making them a useful part of EHRs. At present, healthcare providers can extract data from faxes at OneMedical or by using Athena Health’s EHR. Apart from them, Flatiron Health’s human “abstractors” use AI to recognize key terms and uncover insights from unstructured documents. Amazon Web Services recently announced a cloud-based service that uses AI to extract and index data from clinical notes. Data collection from multiple sources As healthcare costs grow and new methods are tested, home devices such as glucometers or blood pressure cuffs that automatically measure and send the results to the EHR are gaining momentum. Moreover, data streams from the Internet of Things, including home monitors, wearables, and bedside medical devices, can auto-populate notes and provide data for predictive analytics. Some companies have even more advanced devices, such as the smart t-shirts of Hexoskin, which can measure several cardiovascular metrics and are being used in clinical studies and at-home disease monitoring. This means that future EHRs should integrate telehealth technologies. Besides, electronic patient-reported outcomes and personal health records are also being leveraged more and more as providers emphasize the importance of patient-centered care and self-management of disease; all of these data sources are most useful when they can be integrated into the existing EHR. Clinical documentation and data entry EHR documentation is one of the most time-consuming and irritating tasks in the modern care environment. A recent AMA study found that clinicians spend twice as much time over the keyboard as they do talking to their patients.
Artificial intelligence, with the help of NLP, can automatically assemble and repackage the necessary components of clinical documentation to build clinical notes that accurately reflect a patient encounter or diagnosis. Nuance, for example, offers AI-supported tools that integrate with commercial EHRs to support data collection and clinical note composition. Such carefully engineered integration of AI into the note creation process would not only reduce manual rummaging for pieces of information, but could improve the output design, making clinical notes more useful, readable, and cogent while meeting all requirements for clinical documentation. Clinical decision support Decision support, which recommends treatment strategies, used to be generic and rule-based. AI machine-learning solutions are emerging today from vendors including IBM Watson, Change Healthcare, and AllScripts that learn from new data and enable more personalized care. For instance, Google is developing prediction models from big data to warn clinicians of high-risk conditions such as sepsis and heart failure. Enlitic and a variety of startups are developing AI-derived image interpretation algorithms. Jvion offers a “clinical success machine” that identifies patients most at risk as well as those most likely to respond to treatment protocols. Each of these systems could be integrated into EHRs to provide decision support. Interoperability AI can address the core interoperability issues that have made it so difficult for providers to access and share information with the current generation of health IT tools. The industry is still struggling to overcome the challenges of proprietary standards, data silos, privacy concerns, and the lingering competitive disadvantages of sharing data too freely. With AI algorithms learning from inter-specialty communication specifics and facilitating shared decision making by mining patient input and feedback, the final clinical note will be the optimal product for the user, in line with the interdisciplinary care concept.
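As a toy illustration of the free-text extraction idea described above, a general-purpose NLP pipeline such as spaCy can pull structured entities out of a clinical-style note (a sketch only; a real system would use a domain-specific clinical model and vocabulary, and the note text here is invented):

import spacy

# Small general-purpose English model; clinical deployments would swap in a domain model
nlp = spacy.load("en_core_web_sm")

note = ("Patient seen on March 3, 2019 at Springfield General Hospital. "
        "Follow-up with Dr. Adams in two weeks.")

doc = nlp(note)
for ent in doc.ents:
    # e.g., a DATE entity for 'March 3, 2019', an ORG for the hospital name
    print(ent.text, ent.label_)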
https://medium.com/sciforce/electronic-records-in-present-day-healthcare-system-2d6649646aaa
[]
2020-02-27 16:22:03.602000+00:00
['Healthcare', 'Machine Learning', 'Health Technology', 'Artificial Intelligence', 'NLP']
The Game of Life
FUTURE CRAFTING The Game of Life On COVID-19 and The Matrix There’s lurking anxiety in people’s hearts. “What if this never ends?” Countless articles cropped up over the past few months theorizing what a future of living in the shadow of COVID-19 might look like. What permanent changes to our lifestyles should we prepare for? If we find a cure, what does the aftermath look like? Do we scramble back to our daily lives, balance mourning the dead with a 9–5? One outcome worth considering is the shift to virtual living. I’m not talking about more reliable Zoom meetings or better virtual doctor appointments. I’m talking about living in a video game: a world where we interact like in real life, but without physically being there. Photo by Sebastian Voortman from Pexels. Imagine you wake up, go through your morning routine, and then sit down at your desk. Instead of opening your laptop and checking your email, you pull a comfortable, lightweight Virtual Reality (VR) headset over your head. As you press back into your chair, you log in and see a digital version of yourself, a persona. You “click” on your character and spawn in your house in the World. A menu appears with your email and schedule. After 15 minutes of wading through email, you get a notification from Jim asking to meet you for coffee outside the office. You teleport in the game to a virtual office. Jim’s character is waiting on a virtual bench. You walk over to the bench and discuss family life with Jim. In real life, you sip your coffee. After 15 minutes, you both walk into the office and head into a conference room. Your coworkers are all in there. Recording technology has improved such that you can carry on a casual conversation with Jim, who sits next to you, without disturbing/talking over your other coworkers who are chatting before the morning standup. As 10:00 rolls around, a notification tells you to take a break. You remove the VR headset and go for a walk in the real outside. I know. It sounds like the start of a Black Mirror episode. On paper, the whole skit seems outlandish, even borderline fantasy. It’s been a dream of gamers forever to log into a fantasy world like this, though I doubt anyone’s top choice was a 9–5 simulator. VR, security, sound, and so many other tech fields would need to make leaps and bounds to achieve something like this. But I see this as the inevitable tip of an inevitable iceberg. As developed countries get more and more comfortable doing business in non-physical domains, how long until other areas of life shift to virtual options? Imagine other VR scenarios: seeing a therapist, hanging out with your family, going to a movie. A year ago if you floated something like this, most people would have never considered doing any of these things virtually. Even if they wanted to, the prospect was unrealistic. Neither the technology nor the demand was there. However, with quarantine exposing the potential of virtual spaces, they’re starting to look like viable options. As we prepare for COVID winter and look at what the future might hold, I argue it’s not as unrealistic as it sounds. Image by Bram Van Oost. A year ago I would have laughed in your face if you told me we might spend more time in virtual environments than we do in real ones. Even now, as I spend 8 hours minimum flipping between blocks of code and Zoom/Skype/Webex/Teams calls, it still seems farfetched. However, the precedent is there. The Social Precedent Take a look at remote work.
While not every job can be done remotely, it’s become an appealing option for those that can. As soon as companies realized that meetings and office communication could be done out of the office, companies like Zoom capitalized on that experience. You can already use filters, backgrounds, and emojis to add personality to conference calls. There are features for interacting, such as hand raising, polling, and more. Education is in a similar boat, though there are concerns to be wary of. (Simon Rodberg outlines some potential pitfalls in his article, School as We Knew It Is Over. What Comes Next?) We’ve seen summer camps offer online courses, and supplementary education sites like Khan Academy are increasingly active. Earlier in the year, many universities opted for virtual commencements. People joked that before long they’d host commencement in Roblox or Minecraft. At an entertainment level, virtual communities are flourishing. Quarantine has been a dream for gaming, as gamers get to spend hours inside playing games without judgment. As so many people have had to buff up their home office, lots of gamers took the opportunity to upgrade their setups. Outside of gaming, people are devouring content on sites like Twitch, YouTube, Netflix, Hulu, and more. Even when not at the computer, mobile devices are a medium into your digital persona on the go. Quarantine is pushing people to think outside of the box. Though many countries are on the verge of recovery, the world isn’t going to snap back to normal once we beat COVID. As we look, ideally with optimism, at social change in the fallout of COVID, technology and its role must remain in the spotlight. The Technology Precedent From faster computers to ray/path tracing to physics breakthroughs in games like Half-Life: Alyx, one thing is clear: gaming technology is evolving. This comes as no surprise since the gaming industry has exploded into a $100+ billion industry that continues to grow. With that kind of interest, we can expect great things in the coming years. Earlier in 2020, Sony announced the specs for the PS5. While it’s a well-rounded beast of a machine for the price point (leaks suggest around $500), the real eye-catcher isn’t what you’d expect: it’s the hard drive. PlayStation 5. Source: Sony. Specifically, it’s what is called “I/O throughput”, which is the speed at which the hard drive can send and receive data. The PS5 ships with 5.5 GB/s (raw), putting it leagues ahead of its competition. Xbox Series X ships stock with 2.4 GB/s (raw). The PS4 and Xbox One were in the realm of ~100 MB/s. Why is this important? Because it revolutionizes the development process for games and other graphically intensive software. Assets are what make up a game. They include textures, solid objects, characters, the environment, and more. These assets have to be stored somewhere just like any file. Many of our graphical improvements throughout the years have relied on making better assets, but only up to a certain point. A 3D model of a rock, for instance, is made out of polygons (typically triangles). You could imagine making a simple “rock” by putting three 2D triangles together, i.e., a pyramid. Because there are only three triangles, it only takes up a small amount of file storage. That’s not a very lifelike rock, though. To make it more lifelike, we have to add more triangles (polygons). The number of polygons in a model is what is known as “poly count”. The higher the poly count, the bigger the file.
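To make those numbers concrete, here is a quick back-of-the-envelope sketch of how long a single half-gigabyte asset would take to stream at the raw throughput figures quoted above (illustrative Python; raw sequential reads only, ignoring compression, seeking, and parallelism):

# Time to stream a 0.5 GB asset at raw sequential throughput
asset_gb = 0.5
drives = {
    "PS4-era drive (~0.1 GB/s)": 0.1,
    "Xbox Series X SSD (2.4 GB/s)": 2.4,
    "PS5 SSD (5.5 GB/s)": 5.5,
}
for name, gb_per_s in drives.items():
    print(f"{name}: {asset_gb / gb_per_s:.2f} s")  # ~5.00 s, ~0.21 s, ~0.09 s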
Today, much of the high-end rendering and many of the lifelike games that we see have assets with high poly counts, like millions of polygons. These assets are known as “high-poly”, as opposed to “low-poly” assets like our three-triangle rock. The more of these high-poly assets a game has, the more data the game has to load. The more data you have to load, the longer you’re waiting for your game to load. Image by Gordon Johnson. While there are many techniques to improve graphics without increasing poly-count, this has been a major roadblock for developers for as long as we’ve been making games. You can build the most detailed rock you want. If that rock takes up half a gigabyte in memory, you’re going to be waiting for that to load for a long time. A good developer had to balance load and response times with how pretty their game is and how many assets they load into a scene. That’s where I/O throughput comes in. In the past, throughput has been too low to manage a ton of high-poly assets. Developers had a breadth of tricks to get around this, but those tricks aren’t perfect. Even with the tricks, they add time and complexity to the development process, hence games and software take longer to build. Breakthroughs in how we handle those assets as well as the increased throughput speeds change all of this. Instead of having to use all of these tricks, we can just load our assets directly from the hard drive in real-time. Cool, but why even bring all this up? Between this and the recently teased Unreal Engine 5, a cutting-edge game engine (what one uses to build games), we may be able to put together games and graphically intensive software far quicker than we do now. That means if your office wants to throw together a life-like virtual version of their office, it could be a hell of a lot more accessible than today. Of course, the possibilities are endless with technology like this. From a demo in Unreal Engine 5. There are countless triangles dynamically lit and loaded in real-time. Imagine, instead of Google Cars driving around and scanning roadways for Maps data, they’re scanning the Earth for model data. There are already companies that do similar things for CGI in movies. If we have the technology to load that data into a virtual environment in real or close to real-time, suddenly it doesn’t seem so farfetched to have a virtual Earth that one might load into. We have a ways to go before all of the pieces are in place, but the precedent is there. Image by Alex Iby. Precedent is one thing, but will it happen? It’s not a matter of if, but when. We’re already plugged in, and technology is on a fast track. Take a look at a couple of revealing statistics from a collection by Kommando Tech in February 2020: “There are over 5 billion mobile users in the world, and more than 3 billion of those have smartphones.” “Globally, how much time does the average person spend on their phone in a day? The most recent data indicates that people spend an average of 3 hours and 15 minutes on their phones.” 15 years ago people were skeptical of smartphones. I can almost hear my mom saying, “Why would anyone ever want to spend time on their phone when they have a computer?” Now our digital lives are already a part of our physical ones. With this in mind, it’s not hard to conceive of a world where these virtual worlds meld into a uniform experience: your virtual-life and your real-life start to become one.
It might not look like a real-life World of Warcraft, SkyNet, Animal Crossing, or a bunch of agents in tailored suits trying to suppress free will, but it’s only a matter of time before we see something new emerge. The question we should be asking is: what does that look like?
https://medium.com/super-jump/the-game-of-life-48f1188b69aa
['Devon Wells']
2020-08-08 02:07:22.549000+00:00
['Virtual Reality', 'Covid 19', 'Future Of Work', 'Gaming', 'Features']
Horizon State selected as semi-finalists for AcceliCITY 2021
Leading Cities, an early pioneer in the global Smart City ecosystem, is proud to announce the end of the QBE AcceliCITY Resilience Challenge solution provider application process and congratulates the semi-finalists in this year’s competition. Leading Cities and QBE North America are excited to work with these semi-finalists, who each demonstrated the critical role they can play in building smarter cities worldwide. The purpose of AcceliCITY is to further the potential impact Smart and Resilient City solutions will have in cities and towns around the world. The AcceliCITY program and its partners help these companies succeed by connecting them with mentors from Leading Cities’ global network of smart city experts and city leaders. Of the more than 500 companies from 44 countries that applied to be a part of the AcceliCITY program, Horizon State was one of the 50 selected companies that stood out as an exceptional contender. The applicants represent 24 different verticals, such as Internet of Things, AI/Machine Learning, Mobility/Transportation, Urban Agriculture, Smart Health, Smart Water, Security, Environment, Clean Energy, and Building Technologies, to name a few. Read more about Horizon State making the semi-finals here: https://bit.ly/3q1Nj9y
https://medium.com/horizonstate/horizon-state-selected-as-semi-finalists-for-accelicity-2021-f1388d351fb9
['Dan Crane']
2021-06-17 05:48:54.662000+00:00
['Smart Cities', 'Blockchain', 'Community Engagement', 'Voting']
Social Impact Bonds: complexity wrapped in conformity
Last week I attended the Social Outcomes Conference hosted by the Government Outcomes Lab at the Blavatnik School of Government. It was a fascinating event that aimed to provide “an in-depth exploration of the practice and evidence base around the implementation of outcomes-based models of public service provision from across the world”. I was speaking on a panel which asked whether we’ve become “overly fixated” on social impact bonds (SIBs). Given that 21 of the 23 sessions over the two-day event were explicitly about SIBs, it would appear the answer is a definitive “yes”. I’ve long been interested in SIBs, and have written about some of their internal contradictions before, but have not been close to the SIB community in recent years. Returning to the debate last week, I was surprised to see how many of the old questions around risk transfer, outcomes measurement and pricing remain largely unresolved nearly a decade after the first SIB was developed. But perhaps that’s to be expected given the technical complexities of SIBs — something we’ve known all along. What has changed over the past 10 years is the way policymakers think about outcomes. Previously, taking the cue from New Public Management, outcomes were generally viewed as part of a neat linear sequence with inputs at one end and impact at the other. Line everything up, turn the handle, and you’ll produce outcomes like sausages from a sausage machine. This handy graphic, courtesy of the Government Outcomes Lab, sums it up. The inner workings of the impact sausage machine. But now many people are challenging this deterministic logic. Isn’t it the case, they argue, that this is a very poor representation of the messy reality of the real world? Rather than imagining that we can measure and control everything, we should instead view the world as a complex system. Using this framing, outcomes are emergent properties of that complex system, and trying to “manage” or “deliver” those outcomes makes as much sense as managing the patterns of birds in flight. At this point, many SIB proponents are happy to agree. It is exactly because the world is complex that commissioners shouldn’t meddle around specifying activities. Rather, outcomes-based approaches incentivise providers to innovate. This ensures that those with the best information (the providers and those they work with) are freed up to achieve the very best results. So SIBs have complexity thinking at their heart. But this complexity is wrapped in conformity, because SIBs also demand that the outcomes by which success is judged must be measurable, attributable and priceable. It is this aspect which renders SIBs ultimately a mechanistic, deterministic tool rather than one that is truly based on the insights of complexity. So SIBs simultaneously both embrace and reject a complex view of the world, and it is this inherent inconsistency, I believe, that makes them fatally flawed. No matter how you design them, at some point, the top-down, linear logic of outcomes-based payments has to meet the bottom-up, emergent reality of the world. And reality always wins. One response to this challenge is to invest increasing amounts of time, energy and intelligence attempting to design ever-better metrics and mechanisms. I’m afraid these costly efforts are doomed to fail, as the complexity remains regardless. Another response is to point out that at least SIBs are better than the bad old days when contracts specified activities. While this may be true, it is a particularly unsatisfying answer.
Can’t we find a better way? The most promising ideas shared at the conference “adapted” the SIB model so that it was based less on defined payments for defined outcomes and more on providing a framework within which collaboration and innovation can be fostered. In other words, scaling back the top-down deterministic element, sometimes to the point of non-existence. As Scott Kleiman from Harvard’s Government Performance Lab (formerly known as the SIB Lab) admitted, when people come asking for help implementing SIBs, he usually recommends something else. While SIBs were never “bonds” in any meaningful sense of the word, once the outcomes payments and equity components are removed, I think we can safely drop the quasi-financial nomenclature entirely. Perhaps a new label such as social impact partnerships (SIPs) would make more sense — or put more simply — collaborative working. Call me old-fashioned, but the best way of financing these would be through grants, where the performance conversation is just that, a conversation about performance, rather than a spreadsheet, and it is assumed that those involved are intrinsically motivated to make the world a better place. There is already a good deal of thinking on models like this and examples of how they can work in practice. See, for example, Dr Toby Lowe’s work on commissioning in complexity. My suggestion is that at the Social Outcomes Conference 2020 we dedicate two sessions to SIBs as an interesting but largely historical detour, and spend the remaining 21 exploring how we can achieve better outcomes in a complex world.
https://medium.com/centre-for-public-impact/social-impact-bonds-complexity-wrapped-in-conformity-c3e815dd0ed7
['Adrian Brown']
2019-09-10 20:10:04.849000+00:00
['Outcome Based Contract', 'Outcomes Measurement', 'Impact Investing', 'Our Vision For Government', 'Commissioning']
Deep Learning with minimum Coding
GETTING STARTED | DEEP LEARNING | KNIME ANALYTICS PLATFORM Deep Learning with Minimum Coding Figure 1. An example of a recurrent neural network Recently, deep learning has become very popular in the field of data science or, more specifically, in the field of artificial intelligence (AI). Deep learning covers a subset of machine learning algorithms, stemming from neural networks. On the subject of neural networks and their training algorithms, much and more has already been written. Briefly, a neural network is an architecture of interconnected artificial neurons, each neuron performing a basic computation via its activation function. An architecture of interconnected neurons can thus implement a more complex transformation on the input data. The complexity of the transformation functions depends on the single neurons, on the connection structure, and on the learning algorithm [i] [ii] [iii]. While neural network architectures from the past consisted of just a few simple layers, mainly due to limitations in computational power, deep learning architectures nowadays take advantage of neurons and layers of neurons dedicated to specific tasks, such as convolutional layers for image segmentation [iv] or LSTM units for sequence analysis [iv]. Deep learning architectures also rely on increased computational power, which allows for a faster training of multilayer and recurrent networks [v]. Python — TensorFlow — Keras — KNIME The Python community has long since made available to the general public a set of machine learning algorithms within the scikit-learn library framework. In more recent years, Google has also open-sourced its TensorFlow libraries, including a number of deep learning neural networks. TensorFlow functions can run on single devices as well as on multiple CPUs and multiple GPUs. This parallel calculation feature is the key to speeding up the computationally intensive training required for deep learning networks. However, using the TensorFlow library within Python can prove quite complicated, even for an expert Python programmer or a deep learning pro. Thus, a number of simplified interfaces have been developed on top of TensorFlow, exposing a subset of its functions and parameters. The most successful of such TensorFlow-based libraries is Keras. However, even though Keras presents a lower barrier to entry than the original TensorFlow framework, it still requires some programming skills. KNIME Analytics Platform, on the other hand, is an open source GUI-based platform for data science. It covers all your data needs without requiring any coding skills. This makes it very intuitive and easy to use, considerably reducing the learning time. KNIME Analytics Platform has been designed to be open to different data formats, data types, data sources, and data platforms as well as external tools, for example, Python and R. KNIME Analytics Platform consists of a software core and a number of community-provided extensions and integrations. Such extensions and integrations greatly enrich the software core functionalities, tapping, among others, into the most advanced algorithms for artificial intelligence. This is the case, for example, with deep learning. One of the KNIME Deep Learning extensions integrates functionalities from Keras libraries, which in turn integrate functionalities from TensorFlow within Python (Figure 2). Figure 2.
The deep learning Keras integration in KNIME Analytics Platform 4.4 encapsulates functions from Keras built on top of TensorFlow within Python. KNIME Deep Learning — Keras Integration In general, KNIME deep learning integrations bring deep learning capabilities to KNIME Analytics Platform. These extensions allow users to read, create, edit, train and execute deep learning neural networks within KNIME Analytics Platform. In particular, the KNIME Deep Learning — Keras integration utilizes the Keras deep learning framework to read, write, create, train and execute deep learning networks. This KNIME Deep Learning — Keras integration has adopted the KNIME GUI as much as possible. This means that a number of Keras library functions have been wrapped into KNIME nodes, most of them providing a visual dialog window to set the required parameters. The advantage of using the KNIME Deep Learning — Keras integration within KNIME Analytics Platform is the drastic reduction of the amount of code to write. Just by dragging and dropping a few nodes, you can build the desired neural architecture, which you can subsequently train with the Keras Network Learner node and apply with the Keras Network Executor node — just a few nodes with easy configuration rather than calls to functions in Python code. Installation In order to make the KNIME Deep Learning — Keras integration work, a few pieces of the puzzle need to be installed: · Python (including TensorFlow) · Keras · KNIME Deep Learning — Keras extension More information on how to install and connect all of these pieces can be found on the KNIME Deep Learning — Keras Integration documentation page. A useful video explaining how to install KNIME extensions can be found on the KNIME TV channel on YouTube. Available Nodes After installing the KNIME Deep Learning — Keras extension, you will find a category Analytics / Integrations / Deep Learning / Keras in the Node Repository of KNIME Analytics Platform (Figure 3). Here, you can see all of the nodes available for deep learning built on Keras. A large number of nodes implement neural layers: input and dropout layers in Core, LSTM layers in Recurrent, and Embedding layers in the Embedding subcategory. Then, of course, there are the Learner, Reader and Writer nodes to respectively train, retrieve and store a network. A few nodes are dedicated to the conversions between network formats. Two important nodes are the DL Python Network Executor and DL Python Network Editor. These two nodes respectively enable custom execution and custom editing of a Python-compatible deep learning network via a Python script, including Jupyter notebook. These two nodes effectively bridge KNIME Keras nodes with other Keras/TensorFlow library functions not yet available in the KNIME Deep Learning integration. Figure 3. Some of the nodes available in KNIME deep learning integrations. Notice the Keras integration and the many nodes available to build specific network layers. A number of nodes are also available to train networks in Keras, TensorFlow and Python. Available example workflows KNIME offers a number of example workflows training and consuming a deep learning architecture. Some of those workflows are available on the public EXAMPLES server. Some others are available on the KNIME Hub. The public EXAMPLES server is accessible from within the KNIME Analytics Platform workbench. In the top left corner, you can see the KNIME Explorer panel, listing the content of your LOCAL workspace as well as the content of mounted KNIME Servers. 
One KNIME Server is mounted from the start: the EXAMPLES server. The EXAMPLES server can be accessed only in read-only mode. Double-click on it to open the list of example workflows. The KNIME Hub is accessible from https://hub.knime.com and contains workflows published by the KNIME community. By selecting “Workflows” and typing the search words “deep learning”, you will be offered a plethora of example workflows on this topic (Figure 4). Some workflows are simple and illustrate the usage of a specific node or feature. Some workflows are more complex and show a possible solution to a classic data science use case, such as demand prediction in IoT, customer segmentation in customer intelligence, or sentiment analysis in social media. These example workflows help you jump-start the resolution of your own use case. Some use only Python scripts, some use only KNIME Keras nodes, and some use a mix of the two. Some solve an image processing problem, some a text processing problem, and some a classic data analytics problem. If you intend to use KNIME for deep learning, you should start from one of these example workflows — the one that is closest to your current use case. Figure 4. Deep Learning workflows on the KNIME Hub. To see how to build, train, deploy, import, customize and speed up a deep learning network, let’s go through the workflow 08_Sentiment_Analysis_with_Deep_Learning_KNIME_nodes. This workflow extracts the sentiment of movie reviews from the IMDb dataset, using a relatively simple LSTM-based neural network built solely with KNIME Keras nodes and no Python script. Figure 5. Workflow 08_Sentiment_Analysis_with_Deep_Learning_KNIME_nodes predicts the sentiment of movie reviews using a codeless implementation of an LSTM-based deep learning network. How to Build a Deep Learning Network The network for sentiment analysis should have: · An input layer to accept the word sequence in each review · An embedding layer to transform the text words into a numerical space with lower dimensionality than the dictionary size · An LSTM layer to learn and predict the review sentiment from the text word sequence · A dense layer with sigmoid output function to produce the sentiment prediction The category Keras/Layers shown in Figure 3 offers a wide selection of nodes to build specific layers in deep learning networks. Thus, a pipeline of such nodes builds the desired neural architecture. In order to build the network, we use a specific node for the input layer, a node for the embedding layer, then the LSTM Layer node, and the Dense Layer node for the output neurons. The input layer is configured to have N=80 neurons, where N is the maximum number of words in a document. Input documents are then zero-padded if the maximum number of words is not reached. Notice that this layer does not perform any transformation on the input data; it just collects them. The embedding layer is configured to embed the words of the input sequence into a numerical vector. The embedding vector dimension is set to 128, which is way below the dictionary size (~20K words). The dictionary size would be the vector dimension if a one-hot encoding text representation were adopted. The output of the embedding layer is now a tensor [80x128], covering a sequence of 80 word vectors of size 128 each. The LSTM layer processes this input tensor and produces a sentiment class (1-positive vs. 0-negative) as output.
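For readers who want to relate this node pipeline to code, including the dense output layer described next, a rough Keras equivalent of the four layers might look like the following sketch (hand-written for illustration, not code generated by KNIME; the LSTM size is an assumption):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = 20000  # approximate dictionary size mentioned above
max_words = 80      # maximum words per review; shorter reviews are zero-padded
embed_dim = 128     # embedding vector dimension, as configured above

model = Sequential([
    # The Embedding layer doubles as the input layer here: it maps each of the
    # 80 word indices to a 128-dimensional vector, an [80 x 128] tensor per review
    Embedding(input_dim=vocab_size, output_dim=embed_dim, input_length=max_words),
    LSTM(64),                        # units chosen arbitrarily for this sketch
    Dense(1, activation="sigmoid"),  # squashes the prediction into the [0, 1] range
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])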
The dense layer applies a sigmoid function to the predicted sentiment to make sure the predicted sentiment class falls in the [0,1] range. The whole network was built using four nodes and without writing a single line of code. Easy. Figure 6. Detail of neural network assembling in the top part of workflow 08_Sentiment_Analysis_with_Deep_Learning_KNIME_nodes: input layer, embedding layer, LSTM layer, dense output layer. Notice that these are not the only neural layers available in the KNIME Deep Learning — Keras extension. There, you can find a number of convolutional layers, the dropout layer, the simple recurrent layer, and many more. How to Train a Deep Learning Network After transforming each text into a sequence of index-encoded words, we split the original data set into training and test sets via the Partitioning metanode. The training set is then used to train the deep learning neural network we have just created. To train a neural network, you need only one node: the Keras Network Learner node. This node takes three inputs: a previously built neural network, the training set, and, optionally, a validation data set. This node has four configuration setting groups: one for the input data and their format; one for the target data and the loss function; a third one for the training epochs, batch size, and optimization parameters; and, finally, a fourth one to handle stagnation in learning. In the third group of settings, the following optimization algorithms are available: Adadelta, Adagrad, Adam, Adamax, Nadam, RMSProp, and stochastic gradient descent. How to Deploy a Deep Learning Network Similar to training a deep learning neural network, only one node is needed to apply it: the Keras Network Executor node. The Keras Network Executor node is a very versatile node. It executes a deep learning network on a compatible external back-end platform, selectable in the node configuration window. The configuration window also requires the format of the input data under the menu “Conversion.” Here, you need to specify the type of encoding that has been used to convert the words into numbers to feed the network, e.g., just numbers or collections of numbers. How to Speed Up Training and Execution of a Deep Learning Network What would deep learning be without fast execution of its networks? As anticipated earlier on in this article, one of the most successful features of TensorFlow (and therefore of Keras) is the parallelization of neural network training and execution on multiple CPUs and GPUs. Execution of Keras libraries within the KNIME Deep Learning integration is automatically parallelized across multiple CPUs. If the GPU-based libraries of Keras are installed, execution of Keras libraries also runs on available GPUs, as explained in the Python-Keras-KNIME installation instructions. How to Import and Modify a Deep Learning Network Previously trained deep learning neural networks can also be imported via: · KNIME Model Reader node, if the network has been stored using KNIME · Keras Network Reader node, if the network has been stored using Keras · TensorFlow Network Reader, if the network has been stored using TensorFlow Sometimes, networks need to be modified after training. For example, we might need to get rid of a dropout layer, separate the original network into subnetworks, or add additional layers for deployment. Whatever the reason, the DL Python Network Editor node can help.
The DL Python Network Editor node allows custom editing of a Python-compatible deep learning network via a user-defined Python script, including Jupyter notebooks. This simple Python code snippet, for example, extracts the encoder network from an encoder-decoder architecture:

from keras.models import Model
from keras.layers import Input

new_input = Input((None, 70))       # fresh input layer for the standalone encoder
encoder = input_network.layers[-3]  # the encoder layer of the trained network handed to the node
output = encoder(new_input)
output_network = Model(inputs=new_input, outputs=output)  # returned as the node's output network

Conclusions We have finished the exploration tour of the KNIME deep learning integration via Keras libraries. We’ve seen that the Keras deep learning integration in KNIME Analytics Platform relies on the Keras deep learning library, which in turn relies on the TensorFlow deep learning library within Python. Therefore, installation requires a few pieces to make the puzzle complete: Python, Keras, and the KNIME Deep Learning — Keras extension. This particular KNIME Deep Learning extension takes advantage of the KNIME GUI and thus allows you to build, train and apply deep learning neural networks with just a minimum amount of Python code — if any at all. Dedicated nodes implement specific layers in a neural network, and a neural architecture can be assembled simply with just a few drag-and-drop actions. A Learner node takes care of the training and an Executor node takes care of the application. Additional support nodes are available for network editing, storage, retrieval, and format conversion. While not eliminating the mathematical complexity of deep learning algorithms, the KNIME-Keras integration allows you, at least, to implement, train and execute networks with little to no experience in Python coding. References [i] C.M. Bishop, “Neural Networks for Pattern Recognition”, Oxford University Press, 1995. [ii] S. Haykin, “Neural Networks and Learning Machines”, Prentice Hall, 2009. [iii] S. Haykin, “Neural Networks: A Comprehensive Foundation”, Prentice Hall, 1999. [iv] I. Goodfellow, Y. Bengio and A. Courville, “Deep Learning”, The MIT Press, 2016. [v] A. Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks”, blog post, 2015.
https://medium.com/low-code-for-advanced-data-science/deep-learning-with-minimum-coding-d87550c62d85
['Rosaria Silipo']
2021-08-02 07:01:05.493000+00:00
['Knime', 'TensorFlow', 'Getting Started', 'Deep Learning', 'Keras']
Getting Fired Broke My Heart
Getting Fired Broke My Heart How losing my job dismantled my world Image by bruce lam from Pixabay The first thing I noticed that April morning was that my computer monitor wasn’t on. There was a small post-it stuck to the screen: Come see me - JM. I walked into my boss’s office with zero intuition or expectation that anything was amiss. I wasn’t worried, but it only took a moment for that to change. Years later, I can still conjure up a near-perfect image of the face I’d seen as friendly turning into something else. “He wants me to let you go immediately.” The moment the words left his lips, my system went into overdrive, and it was all I could do to hold back the tears until I was away from him. My skin sparked with the shock of it, and I tried to speak, but each time I was cut off. He made it clear that it didn’t matter, they had already made up their minds, and I would not be offered the courtesy of explaining, reasoning, asking questions, or defending myself. That I had not done what they accused me of was inconsequential. When I am asked about my strengths and weaknesses in job interviews, I am honest with my answers. I am a loyal employee. I fall in love with the places I work, the people I work with. I’m sensitive and have a big heart, and that means I will do my best for you. It also means that if you turn your back on me, my heart gets broken. I wasn’t just losing my job. I was losing my friends, or people I thought were friends. Working 40 hours a week, you spend as much time with your co-workers as you do at home. What happened for me in my boss’s office that morning was 22 months of hard work, dedication, loyalty, trust, connection, and contribution negated, gone in a flash. This person I’d thought cared about me on a human level showed he didn’t know me at all. I try hard to be a good person, and being told that I am not one is a pill that I can’t choke down. I sobbed as I packed up my office. I took down my sons’ drawings from the bulletin board, dropped into my bag the mini-unicorns my coworkers had hidden scavenger-hunt style around my office during the week leading up to my birthday. I unceremoniously dumped in everything that made my desk mine, desperate to be anywhere but here. On top, I set the Starbucks sandwich I’d bought on the way to the office that would never be eaten. It turns out that something that takes months or years to build can be broken down in a fraction of the time, and 10 minutes later, the life I knew was dismantled. I tried not to make eye contact with my co-worker as I left my keys on the front desk. There was no use trying not to cry, my body wracked with out-of-control tears, but I managed to hold back the vomit I felt rising in my throat. The shattering happened in stages. By the time I knocked on my boyfriend’s apartment door four minutes later, shock had blossomed into a full-grown panic. The reality of what had happened impacted me again and again. I hyperventilated until he brought me my purse, and after I’d taken the Xanax, I sobbed until I couldn’t breathe. I shook in his arms, trying to stay afloat as waves of disbelief, grief, and humiliation washed over me. There are 100 ways I can think of to lose a job that would be easier. If I’d done something wrong and got caught, at least I would know I had it coming. If I got laid off, at least I would know it wasn’t me. Here I was, being told that it was nothing but me, that I had dug this grave and I’d have to lie in it.
The morning I got fired, I learned what it feels like to be completely without direction, utterly lost and unable to find your way. I learned what real betrayal feels like, became familiar with the shock of realizing someone is not on your side. My heart cracked with the excruciating knowledge that someone I cared for thought so poorly of me. It only took a few hours for me to realize that my rent would be due in less than two weeks, and that I wouldn’t be able to pay it. It was terrifying, and I was paralyzed. I had less than $500 in my bank account, rent coming due, bills coming due, two kids who depended on me to take care of them, and no job. I was already having trouble squeezing by after having to take unpaid sick days to care for myself and my kids. The day I got fired didn’t just end like a normal day. For the man who fired me, that bad day ended at 5:00 when he went home to have a beer, pet his dog, sleep, and start brand new the next morning. He wouldn’t see me at work anymore, and his life would go on. He moved on, but I didn’t have that privilege. Shock waves of the explosion that obliterated my self-confidence and broke my ability to trust took years to dissipate. For the first week, all I did was cry and pack. I had worked so hard and been so proud of the perfect-for-us condo my kids and I moved into 8 months after my marriage ended. I had gotten us on our feet and moved us from bedrooms in their grandmother’s house into our own home, started building a life as a family of three. The day after I got fired, I got an email from my mom telling me that she and her husband wanted me to move back in with them. “I know you don’t want to,” she wrote, “but it will be best.” So, less than two years into the new life I’d fought for, I packed up three lives in less than 7 days and moved back into my mom’s house for the second time. At 35 years old, I was faced with not being able to support myself or my children. I did what I had to, and I was lucky the option was there for me. It was a relief and a letdown all at once. I spent every moment packing and moving, or worrying that I wouldn’t finish in time to avoid another full month’s rent. The soul-deep feeling of failure I embodied in the days after I lost that job was rivaled only by the failure I’d felt when my marriage ended. Once again, I was sure I was letting everybody in my life down. I was imposing on my mom and her husband and failing my kids, who were being severely disrupted for the second time in under two years. I obsessed over the cognitive dissonance of finding myself in this situation based on the idea that I would do something to purposely take from or cause harm to the company. I was stuck in a loop of utter disbelief that my former boss thought so little of me, and my mind vacillated between feeling insulted, misunderstood, and inexplicably ashamed. I became depressed, and the next year of my life was spent in self-doubt. The confidence I’d gained after my divorce in my ability to work hard, gain skills, and improve my performance was gone. The excitement and momentum I’d gained in starting my own consulting business evenings and weekends disappeared. Ironically, being fired meant I would never start the side business that got me fired. I was in survival mode. The Monday after I got fired, I filed for unemployment. I made an appointment with DSHS to get food stamps so I could feed my children. For months, I dreamed about my coworkers almost every night. 
I’d wake up in the mornings with my heart torn open and the betrayal fresh on the surface. I missed my friends. Getting fired brings on a lot more grief than I’d expected. I grieved the loss of people I cared about and realistically knew I’d likely never talk to again. I grieved the loss of my autonomy and independence, something I’d worked really hard for in the wake of my divorce. I hate living in transition and in limbo, and not having a say in what’s happening to me. It wasn’t just a job that was taken from me that day. It was stability, it was knowing where I was going and when I would get there. It was a future I had planned, obliterated in moments. For the next eighteen months, I’d have to make decisions about which bills were best to let go delinquent. I began basing health decisions on the fact that I was effectively homeless and without income. Every penny was allocated. My credit card balances rose. My mother and her very generous husband took over my car payment for several months, and I was grateful but beyond embarrassed that it was necessary. Every time the trauma started to scab over, something tore it back open. Phone calls with my unemployment caseworker, where I had to listen to all of the bad things my ex-boss said while trying to deny my claim, split my wounds wide open. Applying for jobs and trying to decide how to explain my separation from my last job was frightening and stressful. Decisions about whether to mask my pain or somehow work it to my advantage wrenched my heart. Three years later, accusations and incidents at my new job pulled this trauma right back up to the surface. It’s been 3.5 years since the worst day I ever had at work. I’ve been in my current job for just over two, and I finally feel like everything I’ve been through led me somewhere good. If all goes to plan, I’ll retire from this job one day. Still, the traumas I experienced that day sometimes cause me anxiety and tinge my professional experiences. The aftermath of losing a job isn’t just the annoyance of finding a new job. It’s financial, yes. But it’s also emotional, traumatic, terrifying, and long-lasting. My experience snowballed in so many areas I couldn’t name them all. That day will shape parts of me for the rest of my life. I don’t know that I will ever really “get over” being fired or truly, fully let it go. I can heal, and move on, but a broken heart stays with you forever. Don’t miss a thing! Sign up for my weekly newsletter here.
https://medium.com/rachael-writes/getting-fired-broke-my-heart-18b95f755703
['Rachael Hope']
2019-08-12 19:30:47.627000+00:00
['Life', 'This Happened To Me', 'Work', 'Unemployment', 'Worst Day']
Creating and Managing Elasticsearch Indices with Python
Defining your ES index mapping Now that we have a cluster up and running, let’s look at the fundamentals of data and indices in ES. Data in Elasticsearch is in the form of JSON objects called “documents”. A document contains fields, which are key-value pairs that can be a value (e.g. a boolean or an integer) or a nested structure. Documents are organised in indices, which are both logical groupings of data that follow a schema and the physical organisation of the data through shards. Each index consists of one or more physical shards that form a logical group. Each shard, in turn, is a “self-contained index”. The data in an index is partitioned across these shards, and shards are distributed across nodes. Each document has a “primary shard” and a “replica shard” (that is, if the index has been configured to have one or more replica shards). Data that resides in different shards can be processed in parallel. To the extent that your CPU and memory allow (i.e., depending on the kind of ES cluster you have), more shards mean faster search. Note however that shards come with overhead, and that you should carefully consider the appropriate number and size of the shards in your cluster. Now that we’ve gone over the basics of ES, let’s take a stab at setting up an index of our own. Here, we’ll use data on Netflix shows available on Kaggle. This dataset is in CSV format and contains information on movies and series available on Netflix, including metadata such as their release date, their title, and cast. We’ll first define an ES mapping for our index. In this case, we’ll go with relatively straightforward field types, defining all fields as text, with the exception of release_year (which we’ll make an integer). While Elasticsearch is able to infer the mapping of your documents when you write them, using dynamic field mapping, it does not necessarily do so optimally. Typically, you’ll want to spend some time on defining your mapping because the field types (as well as various other options) impact the size of your index and the flexibility you’ll have in querying data. For example, the text field type is broken down into individual terms upon indexing, allowing for partial matching. The keyword type can be used for strings too, but is not analysed (or rather: “tokenised”) when indexing and only allows for exact matching (in this example, we could have used it for the type field, which takes one of two values — “Movie” or “TV Show”). Another thing to keep in mind here is that ES does not have an array field type: if your document includes a field that consists of an array of, say, integers, the ES equivalent would be the integer data type (for a complete overview of ES field types, see this page). Take for example the following document: { "values": [1,2,3,4,5] } The underlying mapping for this data would simply define the values field type to be an integer: mapping = { "mappings": { "properties": { "values": { "type": "integer" } } } } Now that we’ve defined our mapping, we can set up our index on ES using the Python Elasticsearch Client. In the Python code below, I’ve set up a class called EsConnection, with an ES client connection attribute (es_client). Note that I’m storing my access credentials in environment variables here for convenience (and to avoid accidentally sharing them when, e.g., pushing code to GitHub). For real applications you’ll have to use a more secure approach, like AWS Secrets Manager. 
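The code embedded at this point did not survive in this copy, so here is a minimal sketch of the connection class as described; the environment-variable names (ES_HOST, ES_USER, ES_PASSWORD) are assumptions, not necessarily the author’s exact ones:

import os
from elasticsearch import Elasticsearch

class EsConnection:
    """Holds a client connection to an Elasticsearch cluster."""

    def __init__(self):
        # Credentials come from environment variables, as described above;
        # the variable names here are assumed for illustration.
        self.es_client = Elasticsearch(
            hosts=[os.environ.get("ES_HOST", "http://localhost:9200")],
            http_auth=(os.environ["ES_USER"], os.environ["ES_PASSWORD"]),
        )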
The EsConnection class also includes two methods, one to create an index based on a mapping (create_index), and another (populate_index) to take a CSV file, convert its rows to JSONs, and write each entry (“document”) to an ES index. Here, we’ll only use the first function, to create our netflix_movies index: We can now inspect the mapping for our index using the following code, which should return the mapping that we have just written: Writing data to ES Now that we’ve defined our mapping, we can write the Netflix movies data to our ES index, using the following code (note that this code expects the Netflix data from Kaggle discussed earlier in this post to be available in a subdirectory called "data"):
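The remaining embedded code is also missing from this copy. A sketch of those same steps, under stated assumptions: the names create_index and populate_index match the article, but the post defines them as methods on EsConnection, while they are written here as free functions over the connection sketched above; the mapping field list and the CSV path data/netflix_titles.csv are assumptions as well:

import csv
from elasticsearch import helpers

# Mapping as described above: every field text, release_year an integer.
netflix_mapping = {
    "mappings": {
        "properties": {
            "type": {"type": "text"},  # "Movie" or "TV Show"
            "title": {"type": "text"},
            "director": {"type": "text"},
            "cast": {"type": "text"},
            "release_year": {"type": "integer"},
            "description": {"type": "text"},
        }
    }
}

def create_index(conn, index_name, mapping):
    """Create an index with the given mapping, unless it already exists."""
    if not conn.es_client.indices.exists(index=index_name):
        conn.es_client.indices.create(index=index_name, body=mapping)

def populate_index(conn, index_name, csv_path):
    """Convert each CSV row to a JSON document and bulk-write it to the index."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        actions = ({"_index": index_name, "_source": row} for row in csv.DictReader(f))
        helpers.bulk(conn.es_client, actions)

conn = EsConnection()  # the connection class sketched earlier
create_index(conn, "netflix_movies", netflix_mapping)
print(conn.es_client.indices.get_mapping(index="netflix_movies"))  # inspect the mapping
populate_index(conn, "netflix_movies", "data/netflix_titles.csv")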
https://towardsdatascience.com/creating-and-managing-elasticsearch-indices-with-python-f676ff1c8113
['Niels D. Goet']
2021-08-28 06:31:35.547000+00:00
['Data', 'Data Science', 'Database Development', 'Data Engineering', 'Elasticsearch']
My Journey From “Cheapskate” to “Enjoyer” of my Money
If you're anything like me, you may struggle with not spending enough of your money. I know that doesn't seem like a problem people struggle with, does it? I live in America. We love buying things, especially if it’s on credit, but if you have spent any time in the finance space online, you don’t typically hear people talking about how to enjoy your money. You hear people talking about how to live on 10% of your income and how much you can earn with compound interest if you invest a certain amount of money. Save, Save, Save … Invest, Invest, Invest is the song we finance content creators love to sing. And it’s true. Saving and investing is a key part of being financially fit. So is paying off your high-interest debt and navigating taxes. All of these are core principles and you should spend time learning and applying them. The question I found myself asking recently, however, is … “How do I enjoy my money?” I will tell you a bit about me. I have a wife and two children. I make a good living with multiple streams of income. We have hit some major financial goals and we save and invest a good portion of our income each month. It hasn't always been like this, but we are now at a place where we are able to do this and it feels great! I read books, listen to podcasts, and watch YouTube videos about how I can be better at investing — how I can cut the fat off of our monthly expenses to allocate a larger percentage of our income to the “smart things” we should be doing with our money. I have a tendency to become completely saturated in the topics I am learning. There is no middle ground for me. If I start learning more about eating healthy, we are all of a sudden buying only veggies, fruits, and organic food. If I am going through a stint of working out, I all of a sudden have aspirations to complete an Ironman. That’s simply who I am. So when I spend my time learning about personal finance, I want to do it all the way. This “all the way” mentality can be a great thing, but it can also be detrimental. With my family, I have a tendency to force my ideologies on them when it comes to our money. This hardcore way of doing things led us to not enjoying much of the money we saw coming into our household. We aren’t people who like overly extravagant things, but my financial goals started to make me stress over eating at Chipotle or buying clothes we needed for the changing seasons. I also noticed it was causing me to burn out a bit with my saving and investing. One thing I’ve learned is that if you aren't able to taste a bit of the success you are having, you have a tendency to fall off in a major way. Think of the times you have started a diet. You swear you will never eat anything with sugar again, and maybe you do well for the first couple of weeks or months, but one day your sad, deprived self has had enough of the “healthy” lifestyle and you binge on sour gummy worms and toaster strudel. This can happen with your finances as well. If you don’t have balance in how you handle your money, you will burn out and find yourself binging on something you would never normally purchase. Balance is key. We need to enjoy the money we have just as much as we need to save and invest it. If we learn to live in all three of these worlds, spending, saving, and investing, we will build habits that will last a lifetime. It’s ok to splurge every once in a while. It’s ok to take a vacation. It’s ok to drive a nice car … as long as you’re balancing it with a strong discipline of saving, investing, and giving. 
If you’re reading this and you’re anything like I was, I would encourage you to go treat yourself. It doesn't have to be anything big, or maybe it does. For me, I bought myself a watch. Maybe for you it’s taking your significant other on a date or taking your family on their first-ever trip to the beach. Whatever you decide to do, live your life in balance. If you learn to do so, you will find sustained success. My watch :) Talk soon, Jarod Dickson www.millennialecon.com
https://medium.com/@jdickson/my-journey-from-cheapskate-to-enjoyer-of-my-money-c5f93aedb5a1
['Jarod Dickson']
2020-12-26 14:40:04.354000+00:00
['Money', 'Motivation', 'Personal Finance', 'Investing', 'Inspiration']
9 Weight Loss Tips to see results in a week
Ways that experts recommend Eat less, move more — it seems that if you follow these rules, the kilograms will dissolve by themselves. But experienced dieters cannot be fooled. From diet to diet, metabolism changes; from year to year the body becomes clogged, the intestines weaken, the stomach deteriorates, but no one takes this into account. We’ve prepared nine dietitian tips to help you revitalize your body and maximize your diet so you can see the first tangible results within a week. GO THROUGH A DETOX PROGRAM It is advisable to start the process of weight loss by cleansing the body. This can be compared to the general cleaning of your apartment. This system of gentle cleansing, based on the physiological processes of our body, will speed up metabolism, “calm down” insulin, start the process of lipolysis, etc. A complete recovery course is designed for 21 days and can be divided into a period of cleansing in the clinic and the implementation of medical recommendations at home. According to therapeutic principles, the body is healthy when its intestines are healthy, and its recovery must begin with cleansing. Taking an Epsom salt solution, only under the supervision of a doctor, will physiologically cleanse your intestines. Take this simple survey to know more. REGULARLY MASSAGE THE INTESTINES To improve blood circulation and lymphatic drainage of the abdominal organs, do a five-minute self-massage of the abdomen in the morning on an empty stomach, before getting out of bed. Perform circular stroking movements with your palm with light pressure, clockwise along the large intestine, alternating with diaphragmatic breathing, taking slow, deep breaths in and out. EXCLUDE POORLY TOLERATED FOODS Food can have both beneficial and negative consequences for the body. Therapeutic nutrition consists not just of eating healthy foods that provide the right proportion of proteins, fats, and carbohydrates, but also of excluding foods you cannot tolerate. A diet that meets these requirements avoids products containing gluten, lactose, fructose, and histamine. Diagnostics (hydrogen breath tests for fructose and lactose intolerance, blood tests for food, gluten, and histamine intolerance), carried out in the clinic, will identify the full range of intolerances and make an individual diet possible. Eating poorly tolerated foods can cause digestive problems, weight gain, increased visceral fat, and disturbed sleep and mood. MONITOR ACIDITY Your body’s health, metabolic rate, and weight control all rely on acid-base balance. The right blend of acidifying and alkalizing foods in a single meal allows you to maintain a normal body pH. Accordingly, it is advisable to pair protein-containing foods with a vegetable side dish. You can check your acidity both in the lab and on your own. For the second option, you need to buy test strips at the drug store to determine the pH of your saliva or urine. If the acidity is high, you should adhere to an alkaline diet — increase your intake of leafy and non-starchy vegetables, ideally green and white, steamed or in the form of alkaline vegetable soups. At the same time, limit your intake of proteins, oats, and sugary foods. According to statistics, most people have high acidity. A simple daily routine helps in reducing this problem. Know more here. 
DO NOT DRINK WATER 30 MINUTES BEFORE MEALS OR DURING THEM It is fine to drink the first glass of water 40 minutes after a snack. This pattern of fluid intake does not disturb the acidity of the digestive system and allows the main enzymes that break down proteins, fats, and carbohydrates to work at full force. CHEW FOOD THOROUGHLY The digestive process begins not in the stomach, as is commonly thought, but already in the oral cavity. There, a food bolus is formed and food is wetted with saliva containing enzymes (amylase, maltase) and bactericidal substances (lysozyme). Eating slowly and chewing food thoroughly, 30–40 times, starts the correct fermentation processes, allows you to feel full faster, and reduces the amount of food eaten. DO NOT EAT COTTAGE CHEESE OR DRINK KEFIR AT NIGHT Eating dairy products containing lactose in the evening can induce fermentation and increase the release of the hormone insulin, which increases fat storage, especially in the abdomen. DO NOT PROVOKE INSULIN SPIKES Eliminate carbohydrates and sugary foods in the evening; they also stimulate insulin to rise, making weight loss difficult. Introduce intermittent fasting into your diet — skipping one meal, such as dinner, twice a week. INSTEAD OF SNACKS, IT IS BETTER TO DRINK MATCHA TEA It is a powerful natural detoxifier due to its high chlorophyll content. The tea helps to reduce weight by speeding up metabolism and fat oxidation; it normalizes cholesterol metabolism, reduces the level of “bad” lipids in the blood, and improves the condition of blood vessels. Matcha tea is not much inferior to coffee in its ability to tone the body but, unlike coffee, does not acidify the body; it alkalizes it. Don’t stress too much if you fail to follow all of these tips. Check this simple routine instead.
https://medium.com/@lattudalai/9-weight-loss-tips-to-get-best-results-in-a-week-51d18ecac692
['Lattu Dalai']
2020-11-16 15:56:44.118000+00:00
['Slimming World Recipes', 'Health And Wellness', 'Weight Loss', 'Health And Fitness', 'Weight Loss Tips']
Carpet Care | 10 Tips for cleaning and caring for carpet at home | Manchester House Cleaning…
Carpet remains the most popular floor covering in the UK. However, with years of use your carpet is susceptible to damage, so here are 10 tips on how you can get the most out of your carpet and make sure it looks its best for years. Carpet is probably the most popular floor covering in the UK; the large majority of British homes have carpet in one form or another, and for good reason: carpet is naturally soft, relatively affordable to install, and looks beautiful. However, carpet floor covering has specific maintenance-related concerns. If you want to get the most out of your carpet and make sure it looks new for the longest period possible, there are steps you should take to clean and care for your carpet. By taking a few easy steps and developing a carpet care and cleaning routine, you can get the most out of your carpet for the longest possible period. If you take care of your carpet, in the long run it will be a worthwhile investment. If you neglect it and do not preserve it, restoring your carpet to its previous condition may be impossible. By following a few simple tips and paying special attention to your carpets, not limited to the occasional vacuum, you will get the most out of your precious floor coverings. We will list 10 carpet care and carpet cleaning tips you can easily follow at home. With these tips, you will make sure you preserve the condition of your carpet for the longest possible period. 1. Vacuum regularly. The large majority of surface soiling that will settle on your carpet will be dry. The easiest and one of the most important carpet cleaning routines you should develop is regular vacuuming. Find the time to vacuum your carpet twice a week, preferably; however, if that is impossible for you, you can get away with hoovering heavy traffic areas, such as hallways and staircases, once a week and doing a thorough hoovering once every two weeks. Cleaning is the best and possibly the easiest way to get the most out of your carpet and maximize its lifespan. You can start vacuuming your carpet as soon as it has been laid; after that, develop the habit of regularly vacuuming your carpets. Regular cleaning will maintain your carpet for years. As with most things, there is a right and a wrong way to vacuum: make sure you adjust the height of your vacuum cleaner per room and do not let the vacuum bag get too full. 2. Avoid moving heavy furniture. Heavy, bulky furniture can cause serious damage to your carpet. This does depend on how you arrange the space you have to work with in your home; however, if possible, avoid dragging heavy furniture over your carpets. All furniture will leave a mark on your carpet, but if you regularly drag particularly bulky furniture over your carpets you will end up crushing the carpet pile. Also, try to avoid placing furniture with sharp edges on your carpets. All furniture will leave a spot and a mark on your carpets; however, sharp legs or edges may end up tearing through the fibers and reaching the underlay. If possible, pick furniture with flat edges and, as difficult as that is, try lifting it, not dragging it, across your delicate carpeted floors. 3. Attempt to remove marks. As we mentioned, all furniture will leave marks on top of your carpets. Whether you have redecorated your home or simply rearranged your furniture, you should try to eliminate the furniture spots in your carpet. To do so, you can simply loosen the carpet fibers with your hands and vacuum them thoroughly. However, that may not always work. 
The best option for removing furniture dents from your carpets is professional carpet cleaning. Carpet steam cleaning carried out by a reliable cleaning company will eliminate furniture marks from your carpet. 4. Immediately deal with spills. Spills will turn into stains, and if stains are left in your carpet for too long they become impossible to remove, even for professional carpet cleaning specialists. If you spill anything on your carpet, do not leave it; try to soak it up immediately. Use a lightly damp cloth or a piece of kitchen roll to try to gently soak the stain up. Do not scrub it, do not brush it, and do not agitate it. Any agitation will set the stain into your carpet. Whatever you do, do not use a brush with heavy bristles; you can tear the carpet fibers, and your stain can turn into damage. The best way to deal with spills and stains is professional carpet steam cleaning provided by a cleaning company. If you leave stains on your carpet too long they become permanent; therefore, if you are not able to remove stains yourself, contact a professional cleaning company as soon as you can and schedule a professional carpet cleaning service. 5. Avoid direct sunlight exposure. This is an often neglected part of the carpet care and carpet cleaning routine you should develop. Prolonged exposure to direct sunlight will, over time, fade the color of your carpet. If your living room or bedroom carpet is going to be exposed to direct sunlight for prolonged periods, consider blinds. Put the blinds down during particularly sunny periods of the day. 6. Clean up accidents immediately. You have been very thorough and have taken great care to protect, preserve, and clean your carpets, but you have pets. Accidents will occur; it’s almost inevitable, given enough time. Urine is particularly damaging to carpet fibers. With time it not only sets as a stain but also develops a particularly unpleasant smell. If your pet has had an accident on your carpet, clean it up immediately. Firstly, soak up the stain with a lightly damp paper towel; go with plain white, as any color from the paper can bleed into your carpet. Be firm and remove as much urine as possible from your carpet. After removing as much moisture as you can, mix a bit of dishwashing liquid with warm water and clean your carpet. Do not scrub and do not brush; use a firm soaking motion. You’re attempting to saturate and dissolve the stain and extract it. If your beloved pet has had an accident and you do not wish to risk it, the best thing you can do is to schedule a carpet cleaning service as soon as you can. 7. Shoes and boots off. We know it does seem a bit…pedantic, but asking your guests and insisting that they take their shoes off does work, especially for heavy traffic areas. As we mentioned, the large majority of the soiling that will eventually find its way onto your carpet will be dust and dirt particles. You can avoid particularly heavy contamination if you simply take your shoes off inside the house. Make it law for everyone in your household to remove their shoes before walking on your carpets. Work boots and high heels are particularly horrible; they not only track dirt inside your home but can damage your carpets. To be perfectly fair, slippers aren’t much better, but at the very least they have not been worn outside and will not track dirt and soiling onto your carpet fibers. 8. Place barrier mats. All floor coverings will get dirty and all of them require care and cleaning; carpet, however, is particularly difficult. 
It is always a good idea to place barrier mats over heavy traffic areas and in especially sensitive spots. If your carpet is close to your kitchen, there is a risk of an oil spill; hallways and entranceways are also susceptible to heavy soiling. If your living room opens onto an external terrace, consider placing a moisture-absorbent mat there. All of these spaces can be protected by barrier mats: you can use a regular welcome mat (pick a visually appealing one), a moisture-absorbing mat, or even an offcut of your carpet. Place little mats around those areas and doorways, and even under heavy furniture, to protect your carpet. Keeping a doormat (or a few) inside is a good idea. 9. Deal with snags. If you discover a snag sticking out of your carpet, do not pull it; pulling on it will cause a bigger problem, damaging the weave of your carpet and making the damaged area more obvious. Use a sharp pair of scissors to cut the carpet snag instead and tuck whatever ends may be left into the surrounding carpet. Find the place where the yarn is still attached to the underlay, cut the snag, and tuck the ends back into the carpet fibers. 10. Choose professional carpet cleaning. The best and frankly easiest step you can take in ultimately protecting, preserving, and caring for your carpet is using a professional carpet cleaning service provided by a reliable cleaning company. Carpet cleaning is important; without the specialized equipment, detergents, and experience a fully equipped carpet cleaning technician has, it will be all but impossible for you to achieve the same carpet cleaning results on your own. We recommend steam cleaning your carpets once every 3 to 6 months. A carpet cleaning service provided by a professional cleaning company will not only clean your carpets but also sanitize and disinfect them. There is no better way to deal with stains or heavy soiling in high-traffic areas than a carpet steam cleaning service. To make sure your carpet is looked after as well as possible, you can choose to use a professional cleaning company and schedule a carpet cleaning service. Manchester House Cleaning Services.
https://medium.com/@office-11008/carpet-care-10-tips-for-cleaning-and-caring-for-carpet-at-home-manchester-house-cleaning-6dedb46d762e
['Manchester House Cleaning Services']
2021-12-18 15:36:07.321000+00:00
['Clean', 'Cleaning', 'Carpet', 'Carpet Cleaning', 'Carpet Cleaners']
The Smartest Wall of All
Moses Ma/DepositPhoto All the United States can think and talk about these days is “The Wall.” The reality is that this unrealistic and likely-to-fail campaign promise is actually a distraction from real emergencies… and not just opioids and climate change. America needs to realize that — in terms of our economic future — there was recently a major Sputnik moment. The Chinese government has launched a “Manhattan project” to ensure they are the ones who make the major breakthroughs in quantum computing and communications — by building the world’s biggest quantum computing research complex, called the National Laboratory for Quantum Information Sciences, in Hefei, the capital of the eastern province of Anhui. They have committed $12 billion to win the quantum computing race. What’s more, Baidu, Alibaba Group Holding and Tencent Holdings — the Chinese internet triumvirate known by the acronym BAT, which boasts a collective market cap of $1 trillion — are spending aggressively because they agree with the Chinese government that whoever builds a fully functioning quantum computer first will likely rule the world. Alibaba alone recently announced investing $15 billion into next-generation technology such as AI and quantum computing. These efforts have already hit paydirt: A couple of years ago, Chinese researchers launched the world’s first quantum satellite into space from a launchpad in the Gobi Desert, to accelerate their efforts to build an unhackable quantum internet. And China was first to pack 18 qubits — the most basic units of quantum computing — into just six quantum-entangled photons. That’s an unprecedented three qubits per photon, and a record for the number of qubits linked to one another via quantum entanglement. This is significant technological momentum. America, it’s time to wake up and smell the quantum coffee! Here’s something to consider: China already knows that physical “walls” don’t work, because it’s a lesson learned centuries ago. In 1644, Manchurian forces were able to breach the Great Wall of China, not just for raiding parties, but to conquer all of China. The pain of firsthand experience has taught China that Great Walls and Great Pyramids are things that emperors may want but are, in reality, nothing more than monuments. But what if China had invested all the money that went into building something monumentally useless… into developing more advanced military weaponry? Would China have prevailed against the Mongols and Manchurians if they had spent that money more wisely? Would they have fared better against the British? Note that the Chinese are not rushing to keep up with America in a “wall race”; they are implementing wiser investment strategies that can ensure the future. It’s instructive to estimate the cost of building China’s Great Wall. This could tell us how much could have been invested into more effective military technologies. If the per-mile construction cost were $5 million in today’s dollars (the cost of the wall in the US is estimated at $23 million/mile, but things are cheaper to build in China), then multiplying by 13,170 miles gives roughly $70 billion in today’s dollars. So what’s the cost of Trump’s wall? When Trump tells people “I can build it for $12 billion”… it reminds me of a shady contractor promising to remodel my kitchen and bathroom for only $12,000. The minute I require a fixed-price contract with penalties, he’ll change his tune. 
The DHS and others estimate a more realistic figure of between $22 and $25 billion, but that’s just to build it and doesn’t include maintaining the structure. The Senate Democrats did a more thorough analysis of the hidden, total costs of ownership, and the price tag skyrockets to around $70 billion! In other words, Trump is trying to build the equivalent of China’s Great Wall in America, using roughly the same amount of money, creating something that will serve as his lasting monument. But the reality is that if it is built, it could very well someday be known as Trump’s Folly. It would be so much better to put that money into something that passes muster in terms of ROI, in terms of ensuring America’s future. Allow me to put my innovation skills where my mouth is, and propose a solution that might work better and cost less to secure the border. Here’s the idea: What if we deployed a network of tethered drones along the border? By tethering drones to solar panels, we won’t need to maintain an army of drone operators. Plus, we can build sensitive seismic monitors into each base station to detect tunneling efforts. And use a long-range mesh network to connect them. This technology would allow the US to invest in advanced drone technology, like night-vision cameras that don’t require IR illumination, or artificial vision to discern the difference between coyotes and, well, human coyotes attempting to camouflage their efforts. And we’d need a much smaller flying drone operation (recharging at base stations) to deploy when needed or for backup. The TCO (total cost of ownership) would likely be much less than a concrete wall or untethered drones requiring pilots, and likely more effective. Technology works. So let’s use it. Let’s build a smart wall instead of a dumb wall, and learn how to bring America back together again in the process. Perhaps we can ask a number of high-tech geniuses to roll up their sleeves and demonstrate their patriotism by improving on this idea, or coming up with better ones. Maybe we can ask Elon Musk to design and build this for America at cost instead of using traditional defense contractors seeking a profit. Doing so, we could possibly secure the entire border for less than the $1.375 billion lawfully allocated in the budget. The bottom line is that America would be extremely foolish to hand over what will likely mushroom to $70 billion to a guy no U.S. banker would touch, to build a wall that no one really wants, and — depending on what the Mueller report ends up saying — could likely end up serving only as a monument to someone America will someday want to forget. Instead, what we need to build is the smartest wall of all: a wall of quantum computing, blockchain and AI patents. The caravan to worry about isn’t formed by refugees; it is the long march of Chinese scientists and engineers, building their own wall of patents as fortifications on emerging economic battlefields. If our elected representatives could work together, all the way from the President down to mayors and local boards of education, in order to do the following, it would make for a bold step toward a better tomorrow for America: 1. Through the National Quantum Initiative Act (H.R. 6227), the US government has proposed to devote a paltry $1.275 billion over five years to support research in quantum technology. (Remember that the Trump administration has proposed to cut science research and development deeply in the 2018 budget.) 
If we want to win this new space race, America needs to invest more than China in quantum computing. Think $20 billion. Think of building our quantum future instead of a physical wall. 2. Relax investment requirements for quantum computing startups. This means that the SEC needs to find a way to allow qualified and self-compliant quantum ventures to launch next-generation blockchain-based ICOs and STOs. This will enable a global pool of investors to rapidly back US-based quantum startups. Maybe throw in some tax benefits too! 3. For quantum-enabled technologies to be realized, the U.S. needs a workforce with new skills, so make it possible for quantum mechanics to be taught in high schools as part of the Advanced Placement system. And while we’re at it, let’s offer special interest-free educational loans to PhD students studying quantum technologies and AI. These are strategic technologies, and the country that has the most PhDs will win, so we need to implement a quantum computing brain drain strategy. 4. Have the DOD rapidly fund the development of an unhackable quantum communication network. It’s vital to adequately fund this program, and not just academic scientists, but red-team hackers as well. This is a true emergency, because if China achieves quantum decryption first, the cost of retrofitting all U.S. and European computer systems reactively would exceed a trillion dollars. 5. You know the X Prize? The government should team up with private industry to offer a $100 million prize for the first running general-purpose quantum computer. All you have to do is require that the winner be a US company; that will pull the top quantum computing startups to the US. Taking these first five steps would ensure a solid start and would help America retain its place at the leading edge. People, let’s make this happen!
https://medium.com/@moses.ma/the-smartest-wall-of-all-701436cd20fd
['Moses Ma']
2019-03-03 17:53:53.355000+00:00
['Quantum Computing']
Crying Over Cornflakes and Cookies in a Third World Country
Moving to a third world country So, in case you don’t know, one of the reasons I came down to beautiful Guatemala to live was to help the poor people here as best I can. When I first got down here I was sponsoring families through an NGO (non-governmental organization), but after almost 2 years I cut off the sponsorships. Now, before you think it was mean and heartless to do that, it wasn’t. I understand that charities have overhead and staff to pay and all that good stuff, but I just felt that my money would go a lot further if I personally helped the families myself. So I started doing that a little at a time, and then my income took a huge hit at the beginning of 2018 and I could barely afford to support myself. Because of this, I had to really scale back how much I could help. It hurt my heart, but I’m working my ass off trying to make it work. I know one day it will all come together for me. In any case, food and staples are reasonably priced here, so even just being able to buy one small bag of food for a family won’t break my pathetic bank account, and I know a family will have some good food. That’s good enough for me.
https://medium.com/publishous/crying-over-cornflakes-and-cookies-in-a-third-world-country-57334aeaef3c
['Iva Ursano']
2020-12-11 03:11:22.619000+00:00
['Life', 'Poverty', 'Kindness', 'Make A Difference', 'Hunger']
7 Quick and Easy Ways to Combat Low Self Esteem
pexels- Khoa Vo Suffering from low self-esteem? You are not the only one. We all could do well to incorporate these seven transformational mind shifts. Let’s get right into it. 1. You are enough. How many of us walk around feeling like we are not enough? Not good enough. Not pretty enough. Not smart enough. Enough! You are truly valuable. You are more than enough. You are a joy and a blessing. Tell yourself, I am enough. And then, drop it. Whenever you face negative self-talk, remind yourself, I am enough. 2. You are worthy. We all have inherent self-worth. We are worthy of love and life and good things. We deserve goodness in our lives. We deserve to be happy. We are worthy of love. You are worthy of love. 3. You are beautiful. You are so beautiful. We become even more beautiful when we accept ourselves fully and completely. It is then that we start to shine. You are without a doubt, beautiful. I see your beauty. Do you? 4. You are so unique. You are literally, one of a kind. Out of billions of people there is not another you. I don’t care if you are a twin. You are one of a kind. Really take hold of this truth. You are so special. Embrace yourself. Be more fully you. You bring to the table things that no one else can. Your joy. Your love. Your hope. Your blessings. 5. You are made of stardust. We are literally made of stardust. We are also made of rivers and mountains and deserts and valleys. You are made of stone and water and iron and bone. You are incredible. 6. You have unlimited potential. There are literally no limits to your growth and potential. You can recreate yourself daily. Don’t believe me? Pick up a book on a subject you are interested in. Read it. You have gained knowledge and increased in understanding. Take the role of the avid student. Learn a new language, a new skill, or pick up a new outlook. There are no limits as to what you can do or who you can be. 7. You are fearfully and wonderfully made. You are a magnificent creation. Created by an awesome Creator. Created in love and with love. No detail was rushed or made haphazardly. When you were created the Creator God declared you a masterpiece. His finest work. You are fearfully and wonderfully made. Embrace yourself. You are so special. You are so important. I hope these seven mind shifts will help you on your journey to self-love and acceptance. You deserve it. You really do. I wish you all the best. May we grow and learn to think of ourselves in positive and beautiful ways. You are beautiful. You are worth it. You are enough. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/7-ways-to-combat-low-self-esteem-c99f795e1a4c
['M.X. Christopher']
2020-12-27 16:03:01.979000+00:00
['Self Improvement', 'Self Esteem', 'Relationships', 'Mental Health', 'Self']
An approach to apply the Separation of Concern Principle in UI Test Automation!
At the beginning, let’s emphasize that automation is a kind of engineering, and its activities and goals are core aspects of the software industry. Basically, I’m not a quality engineer, but ensuring quality has my attention. I’ve been curious about this field for the last few years, and I got my chance with my last project, which is a desktop application. We built an automation infrastructure that is divided into four contexts: The arrows demonstrate the direction of the dependency. 1. Application Is responsible for launching the app with the configured values, resolving the inspection 3rd-party library, and initializing a session with the app. It has an entry point for the nested views. It speaks mostly technical language and a little business language. 2. Views It defines the views and their UI elements hierarchically, so each element has an owner and is composed in a parent element. It exposes the business actions and actual results. It is also responsible for loading the UI structure from JSON files. It speaks mostly business language and a little technical language. If for any reason a certain element or view is moved to another module, all we need to do is edit its metadata to point to the new parent. 3. Tests It has the business scenarios/assertions of the test cases. It speaks only business language. 4. Inspector This is for inspecting the UI elements of any type. When we want to locate a specific element relative to another element, or we want to inspect operational data, e.g., items in a grid, we do that using programmatic/dynamic inspection. To achieve that, it exposes the parametrized functionality needed to manipulate the UI. In our case we wrap the usage of XPath, page-source parsing, getting child elements (“/*”), and getting input elements by label (”/following-sibling:”). Each time the inspector locates an element, it does so hierarchically based on the view’s metadata. This context speaks only technical language. The Design The infrastructure is implemented based on OOP concepts, i.e., composition and abstraction/polymorphism. The following diagram demonstrates the relationship between the four contexts.
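As an illustration only, not the project’s actual code, a Views layer that loads element metadata from JSON and resolves elements hierarchically through an inspector might be sketched in Python like this (all names here, including the inspector’s root and find methods, are hypothetical):

import json

class View:
    """A view whose elements are defined hierarchically in JSON metadata."""

    def __init__(self, meta, inspector, parent=None):
        self.name = meta["name"]
        self.locator = meta["locator"]  # e.g. an XPath fragment
        self.parent = parent
        self.inspector = inspector
        self.children = {
            child["name"]: View(child, inspector, parent=self)
            for child in meta.get("children", [])
        }

    def resolve(self):
        # Locate this element relative to its parent, so moving a view to a
        # new module only means editing its metadata, not the tests.
        scope = self.parent.resolve() if self.parent else self.inspector.root()
        return self.inspector.find(scope, self.locator)

def load_view(json_path, inspector):
    """Load a view hierarchy from a JSON file."""
    with open(json_path, encoding="utf-8") as f:
        return View(json.load(f), inspector)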
https://medium.com/@abdo-emad/an-approach-to-apply-the-separation-of-concern-principle-in-ui-test-automation-f9fd4221c1ed
['Abdulrahman Emad']
2020-12-26 19:46:49.140000+00:00
['Software Design', 'Test Automation', 'Software Testing']
Media Streaming libraries tested and used in Ramudroid
Summary of approaches to stream in realtime from an Rpi-based robot.
Live Streaming — Rpi camera access libraries: flash, ffmpeg, WebRTC, motion, Uv4l, Janus, fswebcam, Rpi Cam.
Pros and cons of different media capture libraries:
Motion (mjpeg)
· Advantages: easy to install and run; inbuilt monitoring.
· Disadvantages: delay in stream capture; frame reload visible.
Ffmpeg
· Advantages: flexibility to change parameters.
· Disadvantages: many dependencies; heavier to install and make on Rpi h/w.
Uv4l — WebRTC
· Advantages: fast, no delay; open codec — vp8.
· Disadvantages: P2P only; limited codec support; requires https to capture from the browser.
WebRTC
· Advantages: no plugins — no installation of flash or any other 3rd-party plugins; royalty-free codecs — VP8, VP9, OPUS (MIT, GPL); rapid support — community support and adoption by major browsers and native SDKs; Javascript — support for js is the key to making it easily adoptable by developers; integrable with any signalling method (SIP, MQTT, XMPP, Socketio, websocket).
Live streaming on WebRTC Presentation Ramudroid v7 for IOT PROJECT DAY Open Source Project on Github
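As one concrete illustration of ffmpeg’s parameter flexibility, a capture pipeline can be launched from Python. This is a minimal sketch, not Ramudroid’s actual configuration; the device node, resolution, and receiver address are assumptions to adapt to your setup:

import subprocess

# Stream the Rpi camera (or a USB webcam) as MPEG-TS over UDP.
cmd = [
    "ffmpeg",
    "-f", "v4l2",              # read from a Video4Linux2 capture device
    "-framerate", "25",
    "-video_size", "640x480",
    "-i", "/dev/video0",       # assumed device node
    "-f", "mpegts",            # mux as MPEG-TS for network streaming
    "udp://192.168.1.10:1234", # assumed receiver host:port
]
subprocess.run(cmd, check=True)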
https://medium.com/ramudroid/ramudroid-v7-for-iot-project-day-b6a5b8bad010
[]
2020-10-03 13:54:47.358000+00:00
['Media Streaming Robots', 'Robotics', 'Real Time Communication', 'IoT', 'Ramudroid']
having someone
having someone poetry Source- Shrishti Arts Having someone to talk to at your fingertip in the middle of the night or through a busy day… someone to cheer in an afternoon of crisis someone to sit beside listening whilst you chirp… someone to give you a hand when you are about to jump to reignite your spirit and make your heart joyfully pump… that’s what’s life and that’s what counts!
https://medium.com/literally-literary/having-someone-b18ba9fffbad
['Nupoor Raj']
2020-08-10 05:31:01.180000+00:00
['Poetry', 'Couples', 'Romance', 'Creative Writing', 'Relationships']
Literally Last Week
Here is the weekly wrap-up from last week. LL is seeing some great new writers, and a growing depth from some of our regulars. When looking back is a good thing by Tasneem Kagalwalla I really enjoyed how this is laid out, and think you will too. winter by marika bianca I don’t think there is a piece that I haven’t loved, and I am sure that is a mutual feeling for all of us here on Medium! Bright December Lights, Big Holiday Joy by Debbie Aruta I always love seeing how writers work different themes and prompts! Samurai’s Haiku #3 by Shriharsha Shaurya I’ve really enjoyed reading this haiku series! December by Rachel B. Baxter Is it any wonder this is on here? Exquisite haikus! This will be the last Literally Last Week until after the first of the year! While I’m sad not to engage over the holiday season, I believe it is an important time to be with family, friends, or ourselves! ❤ Merry Holidays ❤
https://medium.com/literally-literary/literally-last-week-2c1de655ab5d
['Jess Kaisk']
2016-12-21 13:26:07.792000+00:00
['Literally Last Week', 'Poetry On Medium', 'Literally Literary', 'Creative Writing']
Forget Knowing Yourself — Uncertainty Leads to Positive Change
1. Not knowing yourself opens you up to change. When I graduated from university, I still had no idea what I wanted to do with my life. I meandered through the summer, eventually taking a retail job to have some money. Frankly, the uncertainty was killing me. I felt like I was in limbo. I wanted to go into marketing, but that was a lie I told myself to keep a sliver of hope alive. However, because I didn’t know what I wanted, I was open to suggestions. I wanted to change. So, my dad suggested writing online. He put forward various tools, and with the encouragement of my girlfriend, I started blogging. Now, I know myself more. I have a clear plan, and the sense of clarity surrounding my decisions is a welcome one. It’s more than that, however. When people asked me what I did, I would shy away from the answer, afraid of what they might think. Now I am confident in my choice; I feel assured in private and in public. Every day, I sit down at my desk and get to work. While I know myself more, I am still open to change. My career path isn’t predetermined, so who knows what opportunities might arise in the future. Yes, knowing yourself and what you want does feel nice, but I would never have gotten here without the uncertainty I felt. If you’re floating around in a seemingly perpetual limbo like I was, don’t tie yourself down to a personality you think you want. By doing that, you’re pulling down a mask you can still see through. It is pointless. Accept that you don’t know what you want and roll with it. Open yourself to new possibilities. Just because the people around you are doing the thing expected of someone your age doesn’t mean you need to join them.
https://medium.com/the-ascent/forget-knowing-yourself-uncertainty-leads-to-positive-change-ba52735494e2
['Max Phillips']
2020-12-06 14:02:37.346000+00:00
['Self-awareness', 'Self Improvement', 'Mindfulness', 'This Happened To Me', 'Identity']
How to Avoid Losing Your Motivation While Drawing 😣
1. See the future You can’t start drawing if you don’t know what to draw. Before you set your stylus to the screen, think about what you want to get out of this drawing session. List out the characteristics of your desired endpoint. Having a clear goal will enhance your motivation because you understand what you want. Now you must put that thought into action. 2. To heck with people Many people believe that it is easier to draw when you scour the internet for ideas or follow a tutorial. Neither of the above helps you draw faster or better. When you sit down for 15 minutes of inspiration research, it will end up being 4 hours. Searching for ideas is just an excuse that your brain concocts to procrastinate getting tasks done. Do not give in to your indolent instinct. When you watch a tutorial, you are unconsciously trying to replicate the instructor’s drawing. When the lines are not as straight, or the colors are off, it is frustrating, which leads to discouragement and quitting. 3. Become a hermit crab Let’s say you were drawing with your phone sitting next to you. At first, this doesn’t seem like a problem. But fiery texts from a group chat keep flashing across your home screen. Your friends are at it again. Even though you are not focusing on that negativity, it impacts your optimism. You might find yourself muttering words of dissatisfaction as you erase, scratch out, and even delete. You don’t need that negativity around you while you’re trying to be productive and enjoy yourself. Relocate your phone to another room, shut down your computer, isolate yourself in a quiet room, and clear your head. Take whatever measures you need to elude the destructive elements around you. 4. Be lazy Yaaas! You can finally kick back, relax, and sprawl out on the couch to watch some Lilly Singh. Not so fast. “Be lazy” means that you should work slowly and lightly so you can remember what a grand time you had. Your brain will yearn for the bliss of drawing, so you’ll feel compelled to revisit your art. Do not rush your drawing, impose deadlines for no reason, or draw because you have to. Just smile, slow down, and unwind. Drawing is fun! 5. Celebrate your successes Well, only celebrate if you are successful. You don’t have to perfect your drawing, go to extreme lengths, or even work too long on it. To me, success is when you feel like you have put in enough effort to be proud of the time you’ve spent on your drawing. In other words: Effort + pride in time spent = SUCCESS After you are successful with your drawing, you can reinforce your hard work with a small treat. Install that new drawing software you’ve been dreaming about, buy the funny mug you had your eyes on, or enroll in the 5-day writing course you have wanted since last month. When you meet your goal and accomplish an artwork to be proud of, you deserve a prize.
https://medium.com/@zairakhemani/how-to-avoid-losing-your-motivation-while-drawing-416cd01cd1a0
['Zaira Khemani']
2020-12-16 15:01:22.447000+00:00
['Digital Drawing', 'Procreate', 'Drawing', 'Digital Art', 'Illustration']
The Fight For The Lakes: Eutrophication in Madison Waterways
As summer in Wisconsin reaches its zenith, new dangers await in Madison lakes. According to Wisconsin Public Radio, 2020 is shaping up to be another abundant year for blue-green algae blooms. Blue-green algae, or cyanobacteria as they have more recently been known, are prokaryotic bacteria that have proliferated rapidly in the past few decades. According to Everyday Health, cyanotoxins found in the algae cause skin irritation, muscle and joint pain, nausea, and many other symptoms. Photo courtesy of CIMSS Cyanobacteria thrive in warm, nutrient-rich water, and blooms increase in frequency with the temperature. According to the National Ocean Service, massive algal blooms like the ones in 2018 and 2019 cause hypoxia: the bacteria cover a large area of a body of water, blocking sunlight from native plants and cutting off the inflow of oxygen they produce through photosynthesis. Hypoxia has been known to cause massive animal die-offs, most commonly in Wisconsin’s native fish species. Dead cyanobacteria blooms rot and sink to the bottom, depleting oxygen levels even further. Numerous studies focused on Madison lakes have found that the recent explosion of cyanobacteria is a result of climate change. Patterns of warmer weather, heavy rainfall, and nutrient runoff result in large-scale blooms. According to the Environmental Protection Agency, Wisconsin has seen a rise in precipitation of 5–10% over the past century, and overall Wisconsin has averaged an increase in temperature of 2 degrees Fahrenheit. These conditions cause the explosions of bacteria that we see today, and scientists think that the blooms will only get worse in the future. Cyanobacteria // Photo courtesy of Singularity Hub David Caron, a biology professor from the University of Southern California and an expert in algal blooms, explains the effects of warmer water on cyanobacteria. “Most algae can grow faster in warmer water, but there are thousands of different types of algae, and different types of algae have different optimum temperatures,” Caron said. “As global temperatures warm, there are a lot of water bodies that are going to warm, and they will select types of algae. In particular, in freshwater systems, the ones that produce toxins are typically cyanobacteria.” Even though Wisconsin algal blooms are thought to be caused by agricultural runoff, Professor Caron says that agriculture is not the only industry to blame, and it is a necessary part of society anyway. “Everybody shares blame in this and it is wrong to point a finger at any one industry. There is no question that agriculture is a major entity, but I think that working together on it on a global scale is what needs to be done, not saying ‘agriculture, you have to clean up your act.’” Krystyn Keiber, a PhD student in the Limnology Department at the University of Wisconsin-Madison, elaborates on the effect of climate change on eutrophication. “You have changing weather patterns leading to frequent intense rainstorms, which allows for greater weather pressure on our farms, causing runoff into our lakes.” The runoff is a mix of synthetic fertilizers and pesticides that, Keiber explains, is made up mostly of phosphorus, iron, and nitrogen, key ingredients in cyanobacteria reproduction. At UW-Madison, Keiber works with the limnology department researching algal blooms and charting patterns over the years. 
While the underlying problem is climate change, farmers, politicians and residents alike can all take action to stop eutrophication. In 2010, the city of Madison banned the use of phosphorus-laden fertilizers on private lawns, taking notice of the negative effects the fertilizer causes. As of right now, the agriculture industry is still allowed to use the fertilizers, but a ban may be the next step to halt eutrophication. Professor Caron says that many initiatives are being taken to pursue no-till farming, which keeps phosphorus in the soil and out of Wisconsin waterways by decreasing erosion. Although these solutions are viable, it is likely that the only way to prevent worldwide eutrophication is to stop climate change. Warmer waters reliably show an increase in bacteria, and cutting the fossil fuel emissions we put into the atmosphere is the only way to address this underlying problem. Eutrophication and the sickly state of Wisconsin waterways should be a motivating factor in advocating against climate change. Unlike other climate-related issues, we are seeing the effects of eutrophication on our lakes today, watching helplessly as they become poisonous and decrepit. Use this evidence as a reason to advocate, and support local researchers in their search for solutions. Donate to the Center for Limnology (CFL), an organization that funds undergraduate and graduate research fellowships and produces the next generation of scientists, at the link below. https://limnology.wisc.edu/support/
https://medium.com/the-climate-reporter/the-fight-for-the-lakes-eutrophication-in-madison-waterways-14554dbe48a
['Owen Tsao']
2020-08-07 17:57:02.233000+00:00
['Politics', 'Feature', 'Climate News', 'Climate Change', 'Environment']
Disguised Conservatism and Performative Progressivism
Hope springs eternal. It truly does. For instance, I had hoped that we could elect a reasonable Democrat with a decent amount of experience who would then populate the upper echelons of her — for I had initially hoped it would be a certain Kamala at the top of the ticket — administration with a diverse cast of characters that reasonably represented the American population as a whole and would be in a position to faithfully execute their offices. I had also hoped that once said individual was elected and started picking diverse and experienced women, we’d stop having to have this discussion about why white men totally aren’t being super sexist when applying extreme double standards to female candidates, politicians, and nominees. But evidently, we cannot have nice things. Biden has nominated some really superb people to his cabinet and to the various offices to which a President must appoint people. And as worried as I was about having yet another old, white, male President, I do recognize that the women he has chosen are all superbly qualified, probably to a greater degree than most or any of the men who have held those positions in the past, and I sincerely doubt that any other candidate excepting Kamala Harris or Hillary Clinton would have been able to look past their race and gender to appreciate their talent. Honestly, I need to give Biden credit where it is due. He is populating his cabinet with diverse and talented people rather than restricting his choices to white men who may or may not have failed upward. So naturally there’s a lot of white men who have failed upward who are extremely upset. I’m at a loss for whether I should make a joke about decapitation, cod-pieces and infertility, or just go for the whole Harry Styles in a dress being unsuitably “manly” thing. There were some grumblings about all of the picks because, oh dearie me, a woman heading up a government office in our year of the Lord 2021? Be still my beating heart. But then Neera Tanden was announced and it was a bit like Biden mic dropped or something. Unsurprisingly, the GOP super hate her. Not only is she a woman of colour but she also has a history and says things online. This can be expected because, frankly, the guiding principle of the Republican Party is now, and has been for the past several decades, relentless bigotry and selfish, near-outright authoritarian sentiment. Also, she’s the OMB pick of the opposing Party, which just defeated their incumbent. So yeah. They mad. Also unsurprisingly, “Bernieworld” super hate her. This is because not only is she a woman of colour but she also has a history and says things online. I made the observation — echoing another tweet from someone with a lot more followers than me — that the entire reason Bernieworld so often echoes conservative or Republican sentiment is because at their base, their guiding principle is the same. Naturally, a white man then got mad at me. According to him, Neera Tanden personally bombs people for money. And of course, as we all know, that was definitely the totally valid and unbiased reason so many fake progressives used to justify opposition to a litany of other female legislators and cabinet members including but not limited to Hillary Clinton, Dianne Feinstein, Nancy Pelosi, Susan Rice, Madeleine Albright, and so forth. Strangely, it never comes up when discussing men in either Party, despite those men having comparable records in both votes and policy. And yes, Bernie Sanders is also guilty of this.
Sanders absolutely has voted to authorize wars or leave troops in place. He’s often cited as having opposed the Iraq War, and if you look only at his vote on authorization that is true, but he supported multiple amendments and bills which financed the war. I’m not going to decide for you whether thing = bad, but if it’s bad for Neera Tanden then it should also be bad for Bernie Sanders. But, okay, we have Sanders’ actual votes and we do have votes for several of the women mentioned above. We can actually evaluate those who have held elected office based on their votes. It gets a bit more dicey with Susan Rice and Madeleine Albright, but they have also had some pretty public careers. But Neera Tanden has only held office as head of the student body at UCLA. And unless I’ve missed something she’s never headed up a diplomatic office. Even in the world of hilariously extreme double standards, it’s weird that the fauxgressives are attacking Tanden on supposedly being a “hawk.” Ah. No, that makes sense. Also, this is the result you get if you do a Google image search for “rapey white man who leveraged misogyny to fail upwards.” (PETER NICHOLS / REUTERS) Tanden may or may not have written a very short email back in 2011, to someone who did not like her then and still doesn’t, that took an ethically questionable and reductive view on war. And it got WikiLeaked. To be entirely honest, I question the veracity of all WikiLeaks material because it rarely tells the whole story, and it’s super interesting to me that, with so many men doing incredibly questionable things, most of the people “exposed” by WikiLeaks are women who had prior good reputations. Also, Assange’s other projects include blaming feminists for everything, hurting kittens, and smearing walls with his own faeces. Even if we make the pretty absurd assumptions that we can trust Assange, that the email is not fabricated, that the few lines of text indicating a likely prior conversation are not devoid of meaning taken outside of context, and that Tanden has not changed her opinion on the matter in nine years, it’s still the only thing fauxgressives seem to have on her, and it’s irrelevant to her potential position at the OMB. Members of the Republican Party, especially Senators, are mad specifically and exclusively because she said mean things about them on Twitter. That’s so obviously stupid that I’m really not going to take the time to point out the orange-painted elephant in the room, but it also makes sense because they’re from the opposing Party. Them stomping their feet a bit is to be expected. It’s dumb, and I will expect — and likely be disappointed — that after they’ve voiced their criticism, they’ll behave like adults, but it’s not really all that weird. But according to Bernieworld she’s the devil in a blue dress. They have a few other complaints about her, but all of them are misinterpretations or outright lies. For instance, one of the issues they seem to have is that while she was at the head of the Center for American Progress, ThinkProgress (which is part of its Action Fund but editorially independent) released a video criticizing Sanders for changing his rhetoric about millionaires once it was disclosed that he himself was one. So basically, an editorially unrelated organization released a video that pointed out his hypocrisy, and so now a woman who had nothing to do with it is definitely solely responsible for this and also all the evil in the world. Remember: Benghazi. But her emails. Cookie-gate. Pictured: A blue man failing upwards. I’m kidding.
I love cookie monster. (Sesame Street) There are other things that Bernieworld aka the Lone Fauxgressives aka the BernieBros aka closeted Republicans are mad about regarding Neera Tanden, but it’s all a means of not talking about why they actually don’t like her. I’ll lay it out. Neera Tanden: a.) worked with Hillary Clinton and has a history of working with other Democrats, even appearing at the 2016 Democratic Convention b.) is Asian-American c.) is a woman d.) isn’t from “Bernieworld.” Their stated reasoning tends to be that she’s somehow not liberal enough, but everything she’s involved in is super liberal. That’s why the GOP hate her so much. She’s super effective at getting progressive legislation and policy through. They’ve got a whole thing about her supposedly trying to defund Social Security or something, but again, the narrative is so full of holes you could drain spaghetti in it. Bernieworld has their own set of people they wanted chosen for Biden’s cabinet, and the issue with Neera Tanden, aside from her not having man parts, is that she wasn’t on their list. Keep in mind Bernieworld lost in 2016 not because there was some weird conspiracy against them, as they repeatedly claim, but because Hillary Clinton was a far better candidate in all possible ways and ran a far better campaign. They then lost by more in 2020 not because Biden was better than Clinton, but because a lot of the people who had previously bought into the misogynist rhetoric woke up and got out, and because misogynist rhetoric, whatever traction it may get against a female candidate, does incredibly poorly against a male one. Basically, and this goes well beyond Tanden or any of Biden’s other picks, Bernieworld is not about progressivism or super far leftism. Bernie Sanders and his supporters are ideologically near identical to the Republican Party. The only place in which they differ is economic policy. A lot of them don’t seem to understand that there’s a great deal more to ideology, politics, and governance than money, and so, because they differ in that one place so much, they think that if Republicans are right then they must be left. And so they’re trying to reform the Democratic Party, which is actually left, in the image of the Republican Party. There are mercifully fewer and fewer BernieBros out there, which fortunately makes them — despite their wailings to the contrary — democratically irrelevant, but they, like other conservatives, have nothing but contempt for women and people of colour. Biden’s going to put together his cabinet. It’s going to be made up of people who actually deserve to be there, and that means there’s going to be a lot of women and people of colour in positions of power. A lot of white men are going to be upset with that, and a lot of them are going to engage in just loads of intellectual dishonesty to explain why their hate for all women is not sexism or misogyny. But it is. When you are presented with a perfectly reasonable female candidate for a position and you tie yourself up in knots to ensure she doesn’t get that position, you are engaging in sexism. It’s not ethical, it’s not liberal, it’s not revolutionary, and it’s time we took those old authoritarian attitudes so many fauxgressives trumpet out to the trash.
https://medium.com/@ariadneschulz/disguised-conservatism-and-performative-progressivism-68bf94b14bb
['Ariadne Schulz']
2020-12-01 19:53:40.929000+00:00
['Cabinet', 'Biden', 'Kamala Harris', 'President', 'Political']
Interview Series: International Students at IMT Atlantique — Part 8
Hi Håkon! Can I ask you — what fascinates you most about IMT Atlantique?
The student life in general. Even though the campus is located a bit outside of Rennes, it is easy to see all the interesting buildings, such as churches. Also, the way of studying is different, and the small classes give you a ‘high school feeling’. But it’s nice, as you get to know everyone and it is much easier to interact with the teachers and discuss things.

What is your advice for incoming students?
The Caf has a long, paper-based process. Don’t worry too much about it, it will be alright. Other than that, I occasionally changed my name, as the first part of my name is difficult for French people to pronounce. That made life easier at times. If you like research, see if you can join a lab on the side. I was placed in one more or less by accident, and I co-authored an accepted paper, which was really helpful in my Master’s application.

Did you encounter any difficulties?
The IoT program I joined should have been fully taught in English, but it turned out that this was not the case. Honestly, though, the French of the teachers is relatively easy to understand, as they speak clearly and slowly. Also, in computer science, you can look up many terms on the internet. The French of the students is a different story, much more difficult to understand. However, there is no problem talking to them in English, and outside of the school context, English is the main language anyway. Sometimes there are too many assignments and you are just trying to finish instead of really learning the topics. But in general, I enjoyed the high pace, as I felt I learned more than back in Norway.

Is there anything that surprised you?
I was surprised by the long lunch breaks. In Norway they are 30 minutes long, maybe 45, but nowhere near the 1.5 or even 2 hours we had in Rennes. Another thing was the mandatory sport course, which I think is unusual but, nevertheless, another cool way to learn a language. The course was even recognised by my home university. Didn’t expect that one!

And before coming to France, how did you prepare yourself?
Hmm, I didn’t really prepare, as I expected the courses to be taught in English. After my ‘oh shit’ moment I played around a bit with Duolingo, but that was it. There is some administrative stuff which needs to be sorted out in advance. For example, I needed a birth certificate and recommendation letters.

Your summary of your stay at IMT Atlantique?
Joining IMT was a cool experience, as there are so many motivated students. And try to explore other cultures; there is so much to discover outside of where you are from.
https://medium.com/imt-atlantique-students/interview-series-international-students-at-imt-atlantique-part-8-e18a271f72b8
['Julian Kopp']
2021-06-08 18:32:19.431000+00:00
['It', 'Europe']
Mornings are the worst
I make cartoons and t-shirts at www.dieselsweeties.com & @rstevens. Send me coffee beans.
https://rstevens.medium.com/mornings-are-the-worst-c506816926a0
[]
2020-09-04 02:15:09.260000+00:00
['Sleep', 'Humor', 'Morning Routines', 'Comics', 'Life']
How to Actually Deploy Docker Images Built on M1 Macs With Apple Silicon
Use buildx
Buildx comes with a default builder that you can observe by typing docker buildx ls into your terminal. However, we want to make and use a builder ourselves.

Make a builder
You will need a new builder, so make one with docker buildx create --name m1_builder. Now you should see the new builder when running docker buildx ls.

Use and bootstrap the builder
The next steps are to “use” the new builder and bootstrap it. Start with docker buildx use m1_builder and then docker buildx inspect --bootstrap, which will inspect and bootstrap the builder instance you just started using. You should see something like this:

computer@computer-m1 ~ % docker buildx inspect --bootstrap
[+] Building 5.3s (1/1) FINISHED
 => [internal] booting buildkit 5.3s
 => => pulling image moby/buildkit:buildx-stable-1 3.1s
 => => creating container buildx_buildkit_m1_builder0 2.3s
Name: m1_builder
Driver: docker-container
Nodes:
Name: m1_builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/arm64, linux/amd64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6

You can see that this builder supports a whole host of platforms!

Build with the builder
Now, cd your terminal into the directory where you want to build and push an image so you can run this pseudo-command: docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <remote image repository> --push . You should see the “in use” builder go to work and build the architectures you specified.

Manifest
Once your image has been built and pushed, you can inspect the manifest with: docker buildx imagetools inspect <remote image repository>. You should see a manifest printout with all your different architectures. Most services that pull an image are smart enough to know which architecture to fetch.

Clean up
Finally, as you might have experienced, Docker can start to take up a bit of disk space. If you did not use buildx, you probably ran docker system prune --all; the buildx equivalent is luckily semantically similar: docker buildx prune --all

That’s it! Happy container development!
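For convenience, here is the whole sequence collected into a small Python wrapper. This is only a sketch, not part of the original article: the image name is a placeholder, and it assumes Docker with the buildx plugin is installed.

```python
import subprocess

# Hypothetical names; replace IMAGE with your own registry/repository.
IMAGE = "myregistry/myapp:latest"
BUILDER = "m1_builder"
PLATFORMS = "linux/amd64,linux/arm64,linux/arm/v7"

def run(cmd):
    """Echo a command and run it, raising on failure."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create, select, and bootstrap the multi-arch builder.
run(["docker", "buildx", "create", "--name", BUILDER])
run(["docker", "buildx", "use", BUILDER])
run(["docker", "buildx", "inspect", "--bootstrap"])

# Build for all target platforms and push to the remote repository.
run(["docker", "buildx", "build", "--platform", PLATFORMS, "-t", IMAGE, "--push", "."])

# Verify the multi-arch manifest.
run(["docker", "buildx", "imagetools", "inspect", IMAGE])
```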
https://medium.com/better-programming/how-to-actually-deploy-docker-images-built-on-a-m1-macs-with-apple-silicon-a35e39318e97
['Jon Vogel']
2020-12-22 16:49:09.470000+00:00
['Apple', 'Docker', 'Apple Silicon', 'Programming', 'Containers']
Mythopoesis III
Helios astride a chariot. Scarlet Penny Speaketh fast, o’ ruby lord, Ere treasure ‘scape by luckless sin, Uproot and fly your wandering horde, To steal the fruit from loins within. Deliverance, a sum so vast, As precious age doth trundle by, Thine carmine fate is writ at last, To lead us bloodied through the sky. Dressed in sackcloth, smeared with ash, A seagirt soul contumely rinsed, The baptist mouth of lies awash, Yet sees succour in thine evince. Pray, and greet them one by one — Thine scarlet suckle they will not shun. Hyperion The sun, a flying cherub shining bright, To flee from earthly tempest down below, Will soar to a refulgent upward height, And find its honeyed marvels to bestow. Creatures, flesh and green, transcend their fate, Through starry song and giddy meadow dance, Beneath the sun’s hot reign o’er earthen gate, Which guards the heavens from the mortal manse. Between us lies his son, in conquest born, Whose words are quick, to every bard’s delight, This hero of the sun will sound his horn, And to it brilliant faithfuls make their flight. Hyperion, a radiance untold, Shines upon us day-lit treasure gold. Satyricon We dance and drape with sultry pagan fawn, Yet suffer from the dignity of fools, In captive dreaming man walks through the dawn, To coax the beauty of these trembling mules. Luxurious and handsome they may be, Yet peril haunts their every vaulting stride — A judgement and a yearning to be free, They run their slipshod fingers o’er our hide. In this land where glistening wonder calls, Upon the patient and the meekly born, The future lies beyond those crumbling walls, Where soulful cries find breath amid the scorn. They laugh and shrug a sinful day away, Content to live a life beyond dismay. Nightshade Oh, serene shade, Carry me through the night! Your amethyst pearls, Beneath a deep green skull, Plucked from the fibrous branch — A poison, perfectly construed. Hemlock, viper — These are lesser powers. You deliver us to tranquil beauty, First among equals, For all are equal in the dark. If only the sky would fell its flocks, Like how the earth sows death in restive spawn — Alas, the sooty birds take flight, To feast upon our idle carrion. Oh, subtle shade, Deliver me to painless slumber, Where the oppression of the sun is masked, By your shroud of purple night. Take me home, To before the womb, When time stood still, And blood ran black. I have drank of the Pierian spring — Now cast me deep into Tyrean shadow. Brethren When dost the runt become the boar, And viper shed its sheath? These are the living mysteries, That linger far beneath. Betwixt the stars and heaven’s glare, The waxing moon retreats, And carries with it rushing tides, Which watery life entreats. Whither these alien suns abound, That shine with paltry glow? In dour jest, the creatures sleep, And ‘scape their rocks below. Time, that vexing art we weave, In life divers’ wills us believe. Wholeness What terrors lurk Behind the deserts of affection — A desire to be whole? The world as will, Divided against itself — A desire to be whole? We make our fatal image, The dead things crawling in our minds, They exalt our souls Into the dank below. The howls of Cerberus, A dog called ego. What are those immortal lineaments, Which chase our view and circumscribe, Our fertile imaginings? That is our superego, The image of the father. God the father, Is a ghost all along — The haunting projection Of our desire to be whole. 
Psychopomp A pilfered soul from sacred light, Entombed in wretched fancy’s flight, Can soar above the endless lie, For heaven waits beyond the sky. The dreams of verdant lovers pause To seize us in their jealous claws, Bestow a misty fortune found, To leave us dead things on the ground. Basilisk She is the serpent daughter, Labyrinthine, languid — That shy creature, With a spine of black, Sinking into hellish deep. Abreast of oceans, Her jaw agape, She bites into cresting waves. Tresses of emerald weeds, A luxuriant mane, Trail down her scales. Elegant mother, Lithe and beautiful as obsidian, Slithering through endless wake. Masterful mistress, Cunning of the waters, She is the ancient mother of eels — Maddened, corrupted, By a tortured home. In heavy waters, Where light will not stay, She finds a crepuscular manse. Daughter of the drowned, Queen of dim-lit death. Lonely Eye What do I know, I, the obscene imitator? I know the rhythm of words, Not the beat of life. Sterility, not fertility. Stability, not virility. I but hover On the pasture of erasure. A slow, grazing existence. What lurks behind — The bucolic nightmare? Limitless, but lonesome. The wandering eye, Is but a vagrant soul, Living its sad, lusty life. Don’t live to serve the God Of other people’s tragedies. Invent your own heaven. Friar Barefoot, broken, Smug self-deliverance — Pious fellow, Obsessed by blood and penance. He is the shapely wanderer, With that artful smile — Breaker of bread, And bringer of dread. Flux and fire, The portents of doom — Unfriendly catalyst, Thoughts vagrant, decomposed. A menagerie of miracles, Impresario of truths — He is the unstable sketch, Of an unfinished man. Vile, repugnant, Bound in liminal quotation. He fears to be seen, And thus he stands alone. At once lordship and privy, The maker of chains, Slaying the dragon-seed, Of liberty’s revolution. Teacher of fragrant souls, To find cheap beauty, In the tedious din, Of divine applause. Sunken Asleep he lies, With black lips closed, To conceal the sighs, Of mortal woes. Beneath the deep, The cradle lies, A precious mourn, O’er ocean cries. The hallowed march, Toward the grave, An infant’s crawl, Dost cheat the brave. The soul’s last cry, A haunted breath, For want of air, Dost rest in death. Winter’s Heart In the frigid breeze, Blood runs quick— The body is anointed, By crests of snow, Tossed in furious skies. The alabaster miles, Thickets of down, Bury the last coat of autumn, And send the world into opaque cold. The tepid pace, Of a woman maligned, Dissolves into winter, Bearing its frozen heart. Blinded by the glare, She stops to smell The desolate odour, And rest her eyes. The silence is loud, Surrounding her with death. The season betrays her, Sending its frosty fist, To bruise her violet face. Her skin is torn, Her beauty shorn. A sullen flower — A shiver, and a cower.
https://medium.com/flotsam-perspectives/mythopoesis-iii-17b69edf1968
['Gareth Gransaull']
2018-12-18 07:20:46.864000+00:00
['Philosophy', 'God', 'Poetry', 'Fiction', 'Sonnet']
I Remember Those Days
I remember those days, How you used to love to talk to me, how you’d love to hear me talk. We’d never mind the world surrounding us. Alas, those honest, meaningless conversations will never happen! I remember those days, Those walks in the wilderness, Those talks filled with tenderness. The songs you sang for me, The poems I wrote for you. The smile on your face upon seeing me could keep me sane walking through hell. The simple joy of being in each other’s presence! The safe space we’d provide each other. Now, All I get is a sigh or a ‘hi’, A ‘take care’ or a ‘bye’. What has changed, love? What has changed? Not me. I still worship you with the same earnest devotion you’d always mock me for. I still remember you in every smile, pain, and deed. Circumstances have changed. And you? Definitely. Where is that joyously tearful smile gone? Why has your warmth changed to coldness? You did admit that you could never see me as a lover. But, we were good friends, at least. Didn’t that ever mean anything to you? You were a good friend, but a bad lover. Naïve me never thought it would end this way. And even after it did end, I still sit up hopeful.
https://medium.com/catharsis-pub/i-remember-those-days-c8d00a193b62
[]
2020-10-16 11:16:03.085000+00:00
['Friendship', 'Poetry', 'Hope', 'Love', 'Sorrow']
There are no double spends in bitcoin
When you take the words “double spend” literally, it means something is spent two times. This is known as the “double spending problem”, and Bitcoin solves it. My proposition is: Bitcoin does not know any such thing as a “double spend”. To understand this, it is important to understand what Bitcoin is. Bitcoin is just a collection of mathematical rules, nothing more. Only when these mathematical rules are followed is there Bitcoin. When one of these rules is breached, there is no Bitcoin. So imagine the following Bitcoin blockchain: in Block 98 and Block 99 there are two transactions, both with an input address A. You could say this is a double spend, but it is not. It is not, because, following the rules of Bitcoin, Block 99 is not valid and thus does not exist, so nothing is spent in Block 99. So, only the transaction in Block 98 spends address A following the mathematical rules of Bitcoin. Now, imagine the following Bitcoin blockchain: in this state of the blockchain, there are two chains. In each chain, there is a transaction with an input of address A. And the question is: are those transactions spending something? The answer is no, these transactions do not spend anything, because both chains are invalid. The only valid chain is the chain up to Block 97. Block 98 and 98' are invalid, because neither of them is part of the longest chain (see the additional note below). Now imagine the following Bitcoin blockchain: in this state, the transaction in Block 98 did spend address A. It is important to realize that this is just a temporary state; after some time, the state can change again. So, which of the two transactions spends address A can change over time. However, this is by design of the blockchain: the greater the difference in length between the two chains, the smaller the chance this will change. This is why you need to wait for confirmations to be sure the spending transaction will not be changed to ‘not spending’ over time. Additional note: to be more precise, it is not the longest chain that is valid under the rules of Bitcoin, but the chain with the most Proof of Work. This difference matters, because each chain can be produced at a different difficulty. This is the magic of Bitcoin. Thank you Satoshi, you are my hero!
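To make the additional note concrete, here is a minimal Python sketch of the fork-choice rule described above. It is an illustration only, not Bitcoin’s actual implementation: blocks are reduced to a difficulty-weighted work value plus the set of inputs they spend, and a coin counts as spent only on the chain with the most cumulative Proof of Work.

```python
# Toy sketch of the "most proof of work wins" rule. Real nodes validate
# far more; here a chain is just a list of (block_id, work, spent_inputs)
# tuples, and all values are illustrative.

def cumulative_work(chain):
    """Total proof of work accumulated along a chain."""
    return sum(work for _, work, _ in chain)

def is_spent(utxo, chains):
    """A coin counts as spent only if a transaction spending it sits on
    the chain with the most cumulative work, i.e. the only valid chain."""
    best = max(chains, key=cumulative_work)
    return any(utxo in spent for _, _, spent in best)

# Two competing forks after Block 97, each spending address A's output.
fork_1 = [(98, 10, {"A"})]                 # shorter, but higher difficulty
fork_2 = [(98, 4, {"A"}), (99, 4, set())]  # longer, but less total work

print(is_spent("A", [fork_1, fork_2]))  # True: fork_1 carries the most
                                        # work, so only its spend "exists"
```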
https://medium.com/@juniorboomerx/there-are-no-double-spends-in-bitcoin-a9c3063c9269
[]
2021-01-22 11:56:53.040000+00:00
['Double Spending', 'Bitcoin Mining', 'Bitcoin', 'Bitcoin Double Spend', 'Double Spend']
Learning D3 — Books, Code Editor, and Other Resources to Help You Learn
Do you need to learn D3?
“D3 was created to fill a pressing need for web-accessible, sophisticated data visualization.”¹ Before you start, you should know whether it’s necessary for you to learn D3. It’s a tool that serves a need; do you actually have that need? For most visualization needs, you don’t need to learn D3. There are many other libraries and tools that are far easier to master and use, e.g. Excel, PowerPoint, Google Data Studio, Plotly, and Tableau. D3 was created so people can build rich visualizations with a high level of interactivity that can be accessed like web content, with flexibility. Well... that’s a tongue twister. Basically, you should learn D3 if: you want to create web-accessible interactive visualizations, for example visual essays; you want control over the whole visualization process rather than relying on another platform; or you’ve encountered visualization needs that existing platforms cannot resolve with ease. For example, Tableau requires a lot of manipulation to build visualizations like tree charts and Sankey charts, so I decided to learn D3 rather than Tableau tricks that only work on that platform. Otherwise, your precious time should be spent elsewhere — maybe read some books?

Books
Instead of Udemy courses, I found a book to follow along with this time, and I cannot recommend it enough: Elijah Meeks’s D3 in Action.* (*You should get the 2nd edition for updated code. I also recommend getting the Manning live version so you can copy code easily.) Unlike most Udemy courses, which try to feed you everything D3 has to offer and can be extremely overwhelming and set you up for failure, Meeks’s book focuses on the core of D3 for visualization needs. He goes right into how to build shapes, charts, layouts, and visualizations, so you don’t get stuck on the boring syntax. Because D3 uses JavaScript syntax, if you’ve never learned the language before, I recommend quickly going through the JavaScript documentation before you dive into Meeks’s book.

Code Editor
You can use any editor you like. I love using Visual Studio Code; it’s free and comes with many handy plugins, one of which is Live Server (also mentioned in this post). Live Server allows you to run scripts on your local server and see code changes live — a great time-saver, especially when you are learning D3.

Other Resources to Bookmark
The best way to learn is by copying others’ work. Steal like an artist! D3 gallery: http://christopheviau.com/d3list/gallery.html D3 gallery: https://www.d3-graph-gallery.com/index.html Visual essays using D3: https://pudding.cool/ D3 tutorial + gallery: https://www.d3indepth.com/introduction/ Creator of D3: https://observablehq.com/@mbostock

References: [1]: Elijah Meeks. D3.js in Action, Second Edition
https://chiandhuang.medium.com/learning-d3-books-ide-and-other-resources-to-help-you-learn-efea5910a779
[]
2020-12-14 20:41:01.362000+00:00
['D3js', 'Visual Design', 'Data Visualization', 'Visualization']
Data Cleaning 101
Now that I have my data, I examine it and realize it is somewhat rough around the edges. I examine its structure: the data is organized as a list of lists of strings. I call the first element of the list and see that it is made up of 60 strings, each holding the price, living area, address, etc., separated by spaces in one long string. This is definitely unusable at the moment. I import Python’s re module and use re.split(' ', string), which breaks the string into individual elements. Regular expressions are an invaluable and necessary tool for string splicing and cleaning. I do the same thing for the rest of the 334 elements in the list, iterating through with a nested for loop. I call the finished object z. I import pandas and run df = pd.DataFrame(z[i] for i in range(len(z))). I then rename the columns with appropriate column names. I call the data frame again and see my rows and columns configured neatly. The next step of data cleaning involves observing your data. Unless your data comes from an organized database or curated dataset, extensive data cleaning is often necessary. Cleaning may be needed even with organized data, for example replacing NaN values or outliers. We may impute these values with a specific category, or use the mean, median, or mode. We may drop them from the data completely if we aren’t losing too much information. If the data frame has columns that aren’t required for EDA, we may drop those too. We may choose to engineer certain columns to better organize our data. Data engineering is a final step that folds into data pre-processing. This stage also requires skill and experience from a data scientist. To sum up this brief blog: data can come to us in a variety of ways, and when we aggregate it, it can sometimes be messy and unusable. Strategies for converting our data start with examining its structure. It can be in the form of a .json, a .csv, or just lists of lists, a list of strings, or a list of dictionaries. Some tools we can use are built-in Python methods and regex. Correctly organizing the data takes experience, knowledge of data types, and general coding know-how. Many times, for EDA, we must get our data into a pandas data frame. Once it is in the data frame, we need to examine it to see if it makes sense. We replace NaN values and outliers, and make small tweaks such as imputing the mean, median, or mode for certain missing values or outliers. We may also drop columns or rows, or rename columns with more informative descriptions. Engineering columns or features that may help our model can be done as well, but this comes toward the end of data cleaning and is sometimes mixed in with data pre-processing. This post was written more for me than for you, the reader. However, I hope readers can enjoy my insight into data cleaning. It is often stated that a data scientist spends 50–60% of their time on data cleaning and pre-processing. There is no exact method and no two data sets are built equally! Each is unique, with its own identity and challenges. Happy coding everyone!
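A minimal sketch of the splitting-and-loading steps described above. The raw strings, column names, and imputation choices are all illustrative, not the actual scraped listings:

```python
import re
import pandas as pd

# Illustrative raw scrape: each listing is one space-separated string.
raw = [
    "350000 1200sqft 12_Main_St 3bd",
    "NaN 900sqft 48_Oak_Ave 2bd",
]

# Split every string into individual fields, as with re.split(' ', string).
z = [re.split(" ", listing) for listing in raw]

# Load into a DataFrame and give the columns informative names.
df = pd.DataFrame(z, columns=["price", "living_area", "address", "bedrooms"])

# Basic cleaning: coerce price to numeric, then impute missing values
# with the median (mean or mode are other common choices).
df["price"] = pd.to_numeric(df["price"], errors="coerce")
df["price"] = df["price"].fillna(df["price"].median())

print(df)
```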
https://medium.com/swlh/data-cleaning-101-8d42a2681faf
['Jeffrey Ng']
2020-12-12 20:15:30.443000+00:00
['Data Cleaning', 'Web Scraping', 'Regex', 'Regular Expressions']