Dataset columns: Unnamed: 0 (int64, values 0 to 3k); title (string, 4 to 200 characters); text (string, 21 to 100k characters); url (string, 45 to 535 characters); authors (string, 2 to 56 characters); timestamp (string, 19 to 32 characters); tags (string, 14 to 131 characters).
2,300
10 Cool Python Project Ideas for Python Developers
Python Project Ideas for Python Developers
If you have made up your mind about the platform you're going to use, let's jump straight into the projects. Mentioned below are some fun projects aimed at developers of all skill levels that will play a crucial role in taking their skills and confidence with Python to the next level.

1. Content Aggregator. The internet is a prime source of information for millions of people who are always looking for something online. Those looking for bulk information about a specific topic can save time by using a content aggregator: a tool that gathers and provides information about a topic from many websites in one place. To build one, you can use the requests library for handling the HTTP requests and BeautifulSoup for parsing and scraping the required information, along with a database to save the collected information.

2. URL Shortener. URLs are the primary means of navigating to any resource on the internet, be it a webpage or a file, and sometimes these URLs can be quite long and full of awkward characters. URL shorteners play an important role in reducing the characters in these URLs and making them easier to remember and work with. The idea is to use the random and string modules to generate a new short URL from the entered long URL. Once you have done that, you need to map the long URLs to their short URLs and store the mapping in a database so that users can use them in the future.

3. File Renaming Tool. If your job requires you to manage a large number of files frequently, then a file renaming tool can save you a major chunk of your time. It renames hundreds of files using a defined initial identifier, which could be defined in the code or asked of the user. To make this happen, you can use libraries such as sys, shutil, and os in Python to rename the files instantaneously. To implement the option of adding a custom initial identifier to the files, you can use the regex library to match the naming patterns of the files.

4. Directory Tree Generator. A directory tree generator is a tool you would use when you want to visualize all the directories in your system and identify the relationships between them: which directory is the parent and which ones are its sub-directories. A tool like this is helpful if you work with a lot of directories and want to analyze their positioning. To build it, you can use the os library to list the files and directories, along with the docopt framework.

5. MP3 Player. If you love listening to music, you might be surprised to know that you can build a music player with Python. You can build an MP3 player with a graphical interface and a basic set of playback controls, and even display integrated media information such as artist, media length, and album name. You can also add the option to navigate folders and search for MP3 files. To make working with media files in Python easier, you can use the simpleaudio, pymedia, and pygame libraries.
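Before moving on to the second half of the list, here is a minimal sketch of the short-code generation step from the URL shortener idea (project 2), using only the random and string modules mentioned above. An in-memory dictionary stands in for the database, and the seven-character code length is an arbitrary choice for illustration.

```python
import random
import string

# In-memory mapping as a stand-in for a real database table.
url_map = {}

def shorten(long_url, length=7):
    """Generate a short code for long_url and remember the mapping."""
    alphabet = string.ascii_letters + string.digits
    while True:
        code = "".join(random.choices(alphabet, k=length))
        if code not in url_map:          # avoid reusing an existing code
            url_map[code] = long_url
            return code

def expand(code):
    """Look up the original URL for a previously issued short code."""
    return url_map.get(code)

if __name__ == "__main__":
    code = shorten("https://example.com/some/very/long/path?with=parameters")
    print(code, "->", expand(code))
```

A real project would persist the mapping in a database and serve redirects over HTTP, but the core mechanic is exactly this lookup table keyed by random short codes.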
6. Tic Tac Toe. Tic Tac Toe is a classic game we're sure each of you is familiar with. It's a simple and fun game that requires only two players. The goal is to create an uninterrupted horizontal, vertical, or diagonal line of three Xs or Os on a 3x3 grid, and whoever does it first wins the game. A project like this can use Python's pygame library, which comes with the graphics and audio support needed to get you started. There are plenty of tutorials you can try, along with more fun Python projects for game dev.

7. Quiz Application. Another popular and fun project you can build with Python is a quiz application. A well-known example is Kahoot, which is famous for making learning a fun activity among students. The application presents a series of questions with multiple options, asks the user to select an option, and later reveals the correct answers. As the developer, you can also add the ability to create new questions and answers to be used in the quiz. To make a quiz application, you would need a database to store all the questions, options, correct answers, and user scores.

8. Calculator. Of course, no one should miss the age-old idea of developing a calculator while learning a new programming language, even if it is just for fun. We're sure all of you know what a calculator is, and if you have already given it a shot, you can try to enhance it with a better GUI that brings it closer to the modern versions that ship with operating systems today. To make that happen, you can use the tkinter package to add GUI elements to your project.

9. Build a Virtual Assistant. Almost every smartphone nowadays comes with its own variant of a smart assistant that takes commands from you, either by voice or by text, and manages your calls and notes, books a cab, and much more. Some examples are Google Assistant, Alexa, Cortana, and Siri. If you're wondering what goes into making something like this, you can use packages such as pyaudio, SpeechRecognition, gTTS, and Wikipedia. The goal is to record the audio, convert it to text, process the command, and make the program act accordingly.

10. Currency Converter. As the name suggests, this project involves building a currency converter that lets you input the desired value in the base currency and returns the converted value in the target currency. A good practice is to add the ability to fetch updated conversion rates from the internet for more accurate conversions. For this, too, you can use the tkinter package to build the GUI.
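To ground the currency converter idea, the snippet below sketches just the conversion step. The hard-coded rates are illustrative placeholders for values a real implementation would fetch from the internet, and the tkinter GUI layer is omitted.

```python
# Illustrative rates only; a real converter would fetch current rates online.
RATES_PER_USD = {"USD": 1.0, "EUR": 0.92, "GBP": 0.79, "INR": 83.0}

def convert(amount, base, target, rates=RATES_PER_USD):
    """Convert amount from the base currency to the target currency."""
    if base not in rates or target not in rates:
        raise ValueError("Unknown currency code")
    usd_value = amount / rates[base]      # normalise to USD first
    return usd_value * rates[target]

if __name__ == "__main__":
    print(f"100 EUR is about {convert(100, 'EUR', 'USD'):.2f} USD")
```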
https://towardsdatascience.com/10-cool-python-project-ideas-for-python-developers-7953047e203
['Claire D. Costa']
2020-09-08 19:41:49.515000+00:00
['Python', 'Software Development', 'Technology', 'Data Science', 'Programming']
2,301
I’m narrating and releasing free audiobooks almost every day.
Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore. Good times and bad times. Happy times and sad times. But always, life is a movement forward. No matter where you are on the journey, in some way, you are continuing on — and that's what makes it so magnificent. One day, you're questioning what on earth will ever make you feel happy and fulfilled. And the next, you're perfectly in flow, writing the most important book of your entire career. What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with "growing up." 1. Most people are scared of using their imagination. They've disconnected with their inner child. They don't feel they are "creative." They like things "just the way they are." 2. Your dream doesn't really matter to anyone else. Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. 3. Friends are relative to where you are in your life. Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends. 4. Your potential increases with age. As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren't just "born" that way. 5. Spontaneity is the sister of creativity. If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment! 6. You forget the value of "touch" later on. When was the last time you played in the rain? When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby. Do that again. You will feel so connected to the playfulness of life. 7. Most people don't do what they love. It's true. The "masses" are not the ones who live the lives they dreamed of living. And the reason is because they didn't fight hard enough. They didn't make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you'll end up the same. Don't fall for the trap. 8. Many stop reading after college. Ask anyone you know the last good book they read, and I'll bet most of them respond with, "Wow, I haven't read a book in a long time." 9. People talk more than they listen. There is nothing more ridiculous to me than hearing two people talk "at" each other, neither one listening, but waiting for the other person to stop talking so they can start up again. 10. Creativity takes practice.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. If you want to keep your creative muscle pumped and active, you have to practice it on your own. 11. “Success” is a relative term. As kids, we’re taught to “reach for success.” What does that really mean? Success to one person could mean the opposite for someone else. Define your own Success. 12. You can’t change your parents. A sad and difficult truth to face as you get older: You can’t change your parents. They are who they are. Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door. 13. The only person you have to face in the morning is yourself. When you’re younger, it feels like you have to please the entire world. You don’t. Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that. 14. Nothing feels as good as something you do from the heart. No amount of money or achievement or external validation will ever take the place of what you do out of pure love. Follow your heart, and the rest will follow. 15. Your potential is directly correlated to how well you know yourself. Those who know themselves and maximize their strengths are the ones who go where they want to go. Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future. 16. Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job. The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way. Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help. 17. You are a reflection of the 5 people you spend the most time with. Nobody creates themselves, by themselves. We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@omp792/life-is-a-journey-of-twists-and-turns-peaks-and-valleys-mountains-to-climb-and-oceans-to-explore-de3a4df6909b
[]
2020-11-19 17:45:38.008000+00:00
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
2,302
Programming isn't a religion, be open to different solutions
Developers definitely belong to a group of very intelligent people, yet they manage to get stuck in thinking paradigms and fight for them no matter what. Whenever I learn something new, or I search for best practices, I find it very difficult to get objective information. Years ago, when I was considering a PHP framework to use, I read as much as possible about Laravel, Symfony and a few more. However, many of the arguments were religion-like fights about who owns the ultimate truth. Some programmers look only at what they already know and are skilled in, but take down everything else. I think fights over "the best" technology create a very confusing environment for beginners. It is almost impossible to decide what to start with, especially if you search for a piece of objective information. I have stumbled upon many articles and Stack Overflow answers of this kind. Here are some examples:
· OOP is dead, we need functional programming
· Relational databases are dead, we need NoSQL
· NoSQL is crap, relational DBs are the only solution
· Why we left NoSQL completely
· PHP is almost dead, Node.js is the future
· Weak typing is the best
· Strong typing is the best
· Etc.
You will find people who often think in a black/white manner, but the world is much more complicated than that. I don't think there is a single true answer for any technology. All of them have their strong and weak parts. A hammer is as useful as a screwdriver, but they are hardly interchangeable. It just doesn't make sense to fight over which tool is better. When I trained Karate, I liked one particular quote: "No style is better than the master teaching it." This is also true for programming. Good or bad code is created by a programmer, not the language or frameworks. I recently read an article making fun of OOP. I am personally a big fan and promoter of OOP, because it contains an amazing set of useful rules. The author created ridiculous examples using strange OOP concepts to demonstrate how flawed the style is. This sort of argument does not make any sense whatsoever. Functional programming, for instance, has many great concepts that can be used even within OOP programming. Why not learn more and use what is best for the problem at hand? On the other hand, following any style or paradigm too strictly ultimately creates ridiculous code. This is where programmers can demonstrate their experience and intuition. Setting the optimal compromise between work time, extendability, readability, testability, design patterns, and code speed is an extremely difficult task. Plus, each single problem is different. Even helpful rules, such as SoC (Separation of Concerns) and DRY (Don't Repeat Yourself), lead to a state where we, developers, have to decide on the appropriate depth of their use. We must, at some point, say "this level of separation is enough." When it comes to databases, I find the concepts of both relational and NoSQL very interesting. I am used to the relational MySQL style; however, I implement some of the NoSQL concepts. It is a kind of hybrid type of database. MySQL has gotten much better at handling JSON, making NoSQL concepts easier to implement than ever before. However, there are still developers who would rather create 2-3 additional tables to avoid breaking normalization at all costs. Their solution would be much less effective than using a JSON column in a single table, but they often feel proud to follow normalization forms strictly. Does it make any practical sense? Well, in many cases, no. And that is the point.
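To illustrate the normalization-versus-JSON trade-off just described, here is a small sketch. It uses Python's built-in sqlite3 module purely so the example is self-contained; the same idea applies to a MySQL JSON column. The table and attribute names are made up for illustration.

```python
import json
import sqlite3

# sqlite3 stands in for MySQL here so the example is self-contained.
# The point is the same: one table with a JSON column instead of 2-3
# extra tables just to keep every attribute fully normalised.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, attrs TEXT)")

attrs = {"color": "red", "sizes": ["S", "M", "L"], "material": "cotton"}
conn.execute(
    "INSERT INTO products (name, attrs) VALUES (?, ?)",
    ("T-shirt", json.dumps(attrs)),
)

name, raw = conn.execute("SELECT name, attrs FROM products").fetchone()
print(name, json.loads(raw)["sizes"])   # -> T-shirt ['S', 'M', 'L']
```

Whether this is better than a fully normalised design depends on how the attributes are queried; the sketch only shows why the single-column approach can be the simpler, more effective choice in many cases.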
In the end, styles, languages, and frameworks will not help if developers don't think about their coding enough. I saw a website with over 30 pages, each having its own header. If the client asked for the favicon to be changed, they would need to edit 30+ pages. Why didn't those guys use layouts/templates, or at least an include for the header? Well, they were probably busy being proud of how long they have been in business. What are your opinions about ideology wars within the tech communities? Do you participate? Let's have a discussion in the comments.
https://medium.com/dataseries/programming-is-not-a-religion-lets-be-open-to-different-solutions-rather-than-searching-for-a-746534d9a5c6
['Peter Matisko']
2020-01-03 12:45:47.448000+00:00
['Programming', 'Better Programming', 'Technology', 'Framework', 'Programming Languages']
2,303
The 5 Largest Companies Exploring Blockchain
Blockchain can no longer be considered simply a fringe technology that may be suitable for cryptocurrency companies and nothing more. In fact, the largest and most influential companies in the world have all launched blockchain projects. This doesn't necessarily mean that Distributed Ledger Technology (DLT) is on the verge of widespread acceptance by mainstream businesses, but it is an intriguing development nevertheless.

Who Are the Biggest Companies Ready to Use Blockchain?

Industrial and Commercial Bank of China (ICBC). By all measures (revenue, profits, assets, and market value), ICBC is the largest single business enterprise in the world, with over $165 billion in sales and assets of $4.2 trillion. Last November, the bank filed a patent for using sharable blockchain technology to authenticate and store digital certificates, such as birth, death, and graduation certificates. The sharable blockchain would then make this data available to other entities that require verification of the same documents, without the need for customers to obtain them individually from the issuing authorities and then submit them separately to these entities over and over again. The bank has been very hush-hush about this project, as have most of their brethren to varying extents, but something is definitely stirring in this space.

The China Construction Bank Corporation (CCB). With around $143 billion in sales and assets of $3.63 trillion, CCB is another powerhouse that is exploring the use of DLT, though a little more publicly than their previously mentioned rival. CCB, in collaboration with several major insurance companies, sells those companies' insurance policies to its own banking customers. They call this "Bancassurance". Last fall, they announced a partnership with IBM to use IBM's blockchain to decrease the delays in data and document transmission between the insurance companies and their own banking branches, while vastly improving the security, transparency, accountability, and trust associated with their Bancassurance processes. Though this project is still in the testing phase, all indications suggest that it will become an integral part of their operations. CCB has also announced a blockchain-based platform for issuing loans to small businesses. They have also partnered with Baidu to focus on fintech and blockchain technology.

JP Morgan. Next on the list, with $118 billion in sales and around $702 billion in assets, JP Morgan designed and built its own internal blockchain based on the Ethereum platform, which it has named Quorum. They describe it as "ideal for any application requiring high speed and high throughput processing of private transactions within a permissioned group of known participants" while addressing "specific challenges to blockchain technology adoption within the financial industry, and beyond." What's interesting about the Quorum implementation is that it provides both public and private transactions. The state of private smart contracts, for example, is only known to and validated by parties to the contract and approved regulators, and no one else. Quorum is open source, and downloadable by anyone. Pharmaceutical giant Pfizer, as well as London's financial services heavyweight IHS Markit, have shown interest in Quorum. In addition, JP Morgan and Microsoft have teamed up to explore blockchain-based financial services tools.
Berkshire Hathaway. Warren Buffett's company, with $235 billion in sales and assets of $702 billion, is in fourth place this year, and has also, surprisingly, become a player in the DLT space. This is quite ironic, since Buffett has gone on public record calling cryptocurrency "rat poison squared" and "a mirage". However, the company also seems to recognize the incredible potential of the underlying technology and has ventured forth into DLT projects. For example, early this year Hathaway's BNSF Railway Co. joined the "Blockchain in Transportation Alliance", which wants to establish DLT standards in areas such as vehicle maintenance, quality control, and fraud prevention, as well as in BNSF's primary area of interest, freight transportation.

The Agricultural Bank of China Limited. With $3.4 trillion in assets but only $129 billion in sales, it is currently in fifth place. In July of 2017, in partnership with Hyperchain, a Hangzhou-based blockchain startup, the bank developed its own DLT network for automating the process of offering unsecured loans to its agricultural eCommerce merchants. This innovative online supply-chain financing system overcomes the hurdles that many farmers and small agricultural businesses have encountered when trying to obtain the credit necessary to purchase products and equipment.

How About the Rest of the Major Players? Truth be told, each and every one of the next five largest companies is also exploring blockchain technologies to improve its business. Bank of America, Wells Fargo, Apple, the Bank of China, and Ping An Insurance, the largest insurance company in the world, have all launched DLT projects, either publicly or privately. In addition, many of the well-known companies further down the list, such as Microsoft, Alphabet, Walmart, Daimler, and Mitsubishi, have also made it clear that they are exploring these technologies.

Barriers to Adoption. However, despite these promising ventures into blockchain technology by the world's largest companies, many skeptics remain, and even some of its supporters have certain reservations. For example, Lin Sen, director at CPCW, a Shanghai-based consultancy helping Chinese financial institutions launch DLT projects, said, "Blockchain is serving as more of a marketing tool than as infrastructure." The security and privacy of a bank's transactions are also of great concern to this sector, given the public and open nature of blockchain technology. Banks are generally very private about their business dealings, both for the protection of their customers and themselves. So additional layers of encryption have been added to some blockchain implementations to anonymize transaction entities and to keep data private and inaccessible by others on the network. Some maintain that it won't be until new technologies are developed on top of blockchains that they can provide genuine data security. Zero Knowledge Proofs (ZKP), the ability to authenticate transactions without conveying any information about them, and Homomorphic Encryption solutions, which allow calculations to be performed on encrypted data without decrypting it, are good examples of these. Efficiency within the blockchain network is another concern that must be resolved.
Though individual transactions occur with tremendous speed, aggregated transactions, which are used by centralized systems to, for example, net all the trades down to a single day’s transaction that needs to be cleared, are much more efficient than requiring every individual trade to be transacted on a DLT system, one by one. Conclusion It should be obvious from the above that blockchain is rapidly developing a presence in a huge diversity of industries above and beyond just cryptocurrency and fintech. Yes, there are problems that must be addressed and resolved, but that has been the case throughout history as new technologies emerge and their use-cases are explored and eventually established. Look for it. If you’re interested in cryptocurrency and blockchain, come take a look at our project, Snowball. It is the only platform that is democratizing access to the best crypto fund manager and crypto index strategies for the 99%. Learn about it and register to be one of our first users at www.snowball.money About Snowball Snowball is the first Smart Crypto Investment Automation (SCIA) platform that enables access to professionally curated portfolios, empowering everyone to invest like accredited investors. Written by:
https://medium.com/snowball-money/the-5-largest-companies-exploring-blockchain-e1e62068eeae
['Parul Gujral']
2018-12-19 20:09:28.039000+00:00
['Ethereum', 'Technology', 'Blockchain', 'Bitcoin', 'Cryptocurrency']
2,304
Going Face-to-Face With Facial Recognition
Co-authored by Johanna Jamison and Sumanth Channabasappa. Once a domain of science fiction (e.g., Star Trek), facial recognition technology has not only caught up to us in reality this century, but awareness of its benefits and pitfalls has also risen with its heightened presence in the news over the last few months. We hope to shine some light on the reasons for this ascent and the myriad thoughts and actions it has raised. To be sure, all the complex issues, implications, and ethics surrounding facial recognition technology are far too important and expansive to cover in this piece. We also recognize there is much more worth exploring, and a variety of valid and informed views on the subject. Our aim is for this piece to be informative, unbiased, and thought-provoking as the topic of facial recognition technology continues to gain attention and relevance. To begin, we offer a brief overview of facial recognition technology and how it is becoming increasingly commonplace. We include the underlying reasons why this is raising concerns, and provide references to reactions from both corporations and civic leaders alike, especially around its use for surveillance and law enforcement. We then wrap up with thoughts on how what we have presented may be further expanded to ensure that key technical and policy issues are addressed and we can reap the many benefits — while minimizing the consequences — of this technology. As humans, we are used to recognizing faces. The ability to visually identify our loved ones, friends, and acquaintances amidst a crowd is due to our ability to store and recollect anthropomorphic features. This ability works not only face-to-face or within the now ubiquitous virtual encounters, but also during non-real-time viewing of videos or photos. Our brains can store faces in memory, use what our eyes see, and leverage biological matching algorithms to identify people. Sometimes our brains do this even when we haven't seen someone in ages, or when features have changed, such as with new hairstyles. Of course, we should acknowledge our shortcomings, such as challenges with identifying people of races other than our own. In addition to its evolutionary utility, we have also used this ability within our communities: for instance, to referee games, and to either confirm people's innocence or to convict them of crimes. With regard to the latter example, studies have shown the unreliability of human eyewitnesses, and steps are being taken to mitigate it, such as via The Innocence Project. Of course, we haven't always been able to have one or more humans at every event that could have benefited from an eyewitness. Cameras have been a way to address this. The desire for more than passive "eyes," e.g., for automated identification of humans, led to the development of facial recognition technology, from limited automation efforts starting in the 1960s to the seamless unlocking of the phones of today. In a simplistic abstraction, the technology mimics our own mechanism. It stores facial data in memory, uses cameras to see, and leverages matching algorithms to identify. Unfortunately, these anthropomorphic constructs also share a few of our flaws. Stored pictures or videos may be too vague or degraded (e.g., too grainy or blurry) to be useful, learning algorithms typically need to be trained and are only as good as their training data sets (similar to the race effect, for us), and matching is only as competent as the algorithm authors' abilities.
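To make that store/see/match loop concrete, here is a minimal sketch using the open-source face_recognition Python library. The article does not name any particular tool, so this is purely illustrative, the image file names are placeholders, and the data-quality and bias caveats described above apply equally to the embeddings this library produces.

```python
import face_recognition

# "Memory": encode a known face once and store the embedding.
# (Assumes exactly one clear face is visible in the reference photo.)
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# "Eyes": encode a new photo, e.g. a frame captured from a camera.
candidate_image = face_recognition.load_image_file("new_photo.jpg")
candidate_encodings = face_recognition.face_encodings(candidate_image)

# "Matching algorithm": compare embeddings within a distance tolerance.
for encoding in candidate_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    print("Match!" if match else "No match.")
```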
However, facial recognition algorithms continue to become more efficient, and are an increasingly commonplace means of identification, including making it easier to order fast food. Coupled with the ubiquity of cameras, increasingly granular satellite imagery, and the ease with which algorithms are being developed and deployed, unchecked facial recognition technology is poised to become pervasive, even with masks. This has bubbled up privacy and trust concerns over the years. The presence of plentiful datasets (e.g., via social media, search engines) that can be mined easily for various (and not always initially disclosed) purposes — especially those that are malevolent — has magnified worries. The availability and use of such data by law enforcement are two different considerations, especially in countries that have laws to protect privacy. The lack of adequate and legally usable data sets for training facial recognition algorithms has resulted in companies like Axon banning the use of facial recognition in their body cameras for reasons explained in this article by the Ombudsman of their AI ethics Board. However, this hasn’t stopped the use of facial recognition technology by law enforcement, and has raised unease. This amplification of worries is evident in recent news items, which have heightened the profile of these issues and drawn attention from a broad audience. For example, flaws in facial recognition algorithms have led to wrongful criminal accusations, despite an easily proven alibi in one prominent case (Ars Technica, NYT’s The Daily.) As previously alluded to, research shows the risk of misidentification is higher for non-white, non-male individuals, since facial recognition technology is predominantly ‘trained’ with pictures of caucasian men. Privacy advocates worry alongside some Black Lives Matter protestors that footage of their civil disobedience will be analyzed using facial recognition software and used for individual tracking and surveillance purposes. In at least one case in New York City, this very situation occurred. While notable on their own, today’s historical context — especially momentum for police reform in light of recent incidents that have garnered national and international attention — makes these developments even more significant. Even the pandemic has spurred a flurry of dialog about whether and to what extent facial recognition might be used to enforce quarantine and social distancing measures, and perhaps enable contact tracing. Legalities around data collection, storage, retention, the ability to opt-out, and beyond are unclear, which has only exacerbated these and related concerns. Concurrently, and in many cases in response, to this controversy, both major corporations and a few municipalities are distancing themselves from facial recognition technology. IBM was the first major company to pull their product from the market. Amazon and Microsoft soon followed, announcing their solutions cannot be used for police purposes for one year and indefinitely, respectively. All called for regulation at the federal level of the controversial technology. This is in stark contrast to the approach of companies such as Clearview.ai, whose seemingly indiscriminate and secretive approach to growing their client base raises alarms for many industry insiders. While on-screen stories have weaved dystopian tales such as Eagle Eye and The Circle, they have also illuminated possible threats when Deepfake is coupled with facial recognition in shows such as The Capture. 
From the government perspective, more than 13 U.S. cities have banned the use of facial recognition technology entirely. Many did so entirely preemptively, before any of their agencies had access to or use of facial recognition technology resources. These cities span the continuum from big to small, east coast to west — Boston, San Francisco, and Portland, ME are just a few examples. New cities are continually joining these ranks; it was just announced Pittsburgh is considering doing so also. In Portland, OR what is widely considered the strictest ban on facial recognition — encompassing not only government but also private businesses — was passed in early September. On the state scale and abroad, rigorous data privacy standards that span facial recognition and far beyond are being considered and adopted. The California Consumer Privacy Act (also known as CCPA) is one example here in the U.S., while the General Data Protection Regulation (GDPR) is the european standard. Earlier this year, police in London were making plans to expand the use of facial recognition for law enforcement purposes, despite (now outdated) reports that the technology didn’t reduce crime in that city. More recently, police use of facial recognition technology in the U.K. was ruled unlawful. The U.S. Congress has already seen efforts such as The Facial Recognition and Biometric Technology Moratorium Act of 2020, which proposes a ban on facial recognition tech by federal agencies, and creates incentives for local and state prohibitions. Places such as Singapore have taken a different tack, employing facial verification as part of its government-issued identification system. Here in Colorado, there are no specific laws regarding facial recognition as far as we know. And up until recently, available (though outdated) data showed the state had no facial recognition or limited data. This recent story from the Denver Post reported otherwise, revealing usage of facial recognition by law enforcement in several jurisdictions across the state, both since 2011 in cooperation with the Department of Motor Vehicles (as allowed by state statutes) using driver’s license photos and starting in 2016 via a commercial software tool using jail booking photos (since disabled by the provider, citing a lack of policy.) While recent police reform legislation (‘Enhance Law Enforcement Integrity’) passed out of the statehouse is silent on facial recognition, it could give content and clues for how lawmakers may approach the topic. Furthermore, it mandated all law enforcement agencies in the state use body cameras, which could potentially lead to an increase in the use of facial recognition on the gathered footage, as opposed to the opposite. Denver specifically has voluntarily opted out of facial recognition for law enforcement purposes, despite the failure of a local group to gather enough signatures for a ballot initiative that would ban its use. Of course, not all applications of facial recognition technology are as consequential and divisive as those in the law enforcement realm. In some scenarios such as phone unlocking and food ordering, the tool can seem helpful. As decision-makers struggle to find pathways toward a ‘return to normal’ (or some semblance of it) facial recognition technology has been identified as a potential identity verification tool to facilitate and control entry to and movement in work and other environments, enhancing safety. It can also analyze face or skin to identify health concerns such as arrhythmia. 
If corroborated by other evidence and appropriately verified, facial recognition may play a useful role in the more controversial policing use case, especially if the algorithms are better and more extensively ‘trained’ with a larger and more diverse array of facial images. There may also be simple ways to mitigate concerns via policy. For instance, to restore privacy, lawmakers could decide that facial recognition technology cannot be applied to video footage except with legal warrants. Or, for facial recognition technology to purge recognition data when people are cleared, a form of digital face blindness (a la prosopagnosia.) There are similar considerations for providers of facial recognition products. For those interested in learning more, we recommend work by the Future of Privacy Institute around categorization and uses, and Privacy Principles for facial recognition technology in commercial applications. Amidst all of the complexity and controversy surrounding the topic of facial recognition technology — particularly when used for law enforcement — one thing is clear: this is far from the last we’ll hear of it. As the technology continues to advance and responses to it grow, it will be increasingly important for us and our representatives to have a good understanding and awareness of this topic and the various implications (e.g., privacy.) While facial recognition has immense potential in applications such as identity verification and health monitoring, it also faces a litany of pitfalls including bias and those that might be exploited — taking a page from thus far fictional (as far as we know) scripts — by Deepfake. Thus, technologists developing facial recognition technology should pay attention to the consequences of narrow datasets upon which algorithms are trained, and pursue ways to ensure that technology shortcomings don’t inadvertently and negatively impact our societies. Like all technology facial recognition is a tool, which when used properly may be helpful, and when used improperly may be harmful. Our hope is that civic leaders will get ahead of what is becoming a messy regulatory patchwork and craft appropriate macro (e.g., international, national) policy frameworks while allowing more micro levels of government the ability to institute locally appropriate rules. This piece only scratches the surface, and offers a foundation for further research, developments, and decisions.
https://medium.com/swlh/going-face-to-face-with-facial-recognition-10a0d14ca5f6
['Johanna Jamison']
2020-10-23 05:41:10.210000+00:00
['Privacy', 'Facial Recognition', 'Data', 'Smart Cities', 'Technology']
2,305
Here Is Why The Recent Cyber Attacks Have Been So Bad
SolarWinds: A Legendary Breach. It isn't uncommon for a cybersecurity professional to be kept up at night. With a constant onslaught of security threats and emerging vulnerabilities, the field of information security is not for those who are prone to worry. As the President and CEO of The Penn Group, I've seen my fair share of events that have made me reach for the panic button. But the recent attacks on the federal government and private industry go beyond anything many of us in this industry have ever seen. To get a glimpse of how serious these attacks are, how vulnerable we still are, and how it was done, we have to break down a few specific facts.

What happened? As details continue to emerge and leak, it is still unclear how far-reaching this attack was and is. In simple terms, it was bad. Really bad. According to Thomas Bossert, in his New York Times op-ed, "the magnitude of the hack is hard to overstate." Bossert was a US Homeland Security Advisor to President Trump. The number of organizations affected is estimated to range from 425 of the Fortune 500 to as many as 18,000 companies. The basic premise of the attack is that a single company, SolarWinds, experienced a reasonably normal security incident. A security incident is a security event that causes broader consequences. Depending on the reporting, the incident centered around credentials (a username and password) landing on a code repository called GitHub. This is a relatively common event, and modern-day security tools are capable of detecting it. The credentials were used to upload a malicious update to the SolarWinds update server. This update was then automatically propagated to customers, which is typical operational best practice. Typically, an update server's credentials would be heavily fortified, but in this case, the credentials were "solarwinds123". From a security standpoint, this would seem to indicate a less than ideal security culture within SolarWinds, but we can't judge a book by its cover. Although the fallout of the SolarWinds hack is still ongoing, the long-term consequences will be felt for many years to come. The simplicity of the attack on SolarWinds illustrates a few things: 1) Process-based security is exceedingly bad at protecting organizations, but it is the best method we have. 2) Cybercriminals do not care about your compliance or policy. 3) You can have a robust security program and still be hacked. (FireEye)

We all suck at security. One of the biggest challenges with information security is that, from a technical standpoint, you have to have a way to secure your infrastructure with hundreds of thousands of devices under management. To manage all of the devices on your network, there has to be a centralized way of doing so; otherwise, you'd need a literal army of people clicking buttons on every endpoint, which isn't practical or secure. Through a combination of tooling and processes, organizations implement security programs based on frameworks that are designed to standardize the security implementation within their organization. As of 2016, just a little under five years ago, most enterprises were only in the beginnings of adopting a broader information security program. Cybersecurity programs are exceedingly difficult to implement and take a long time, as they touch all areas of the organization. For large organizations, the challenges of implementing a strong information security program increase exponentially.
Ultimately, with process-based security, cost is the pilot of a crashing plane. As organizations continue down the path of securing themselves, security funds are allocated elsewhere, and this behavior leads to the inevitable crash. The reality is that most security professionals uniquely understand these challenges and understand that most organizations could be hacked if someone really wanted to, and the reality is that people want to. The plain reality is simple: we suck at security.

We All Suck At Security. From the enterprise to a small business. From a local municipality to the United States as a nation, our cybersecurity practices are exceedingly bad. The problems with our security posture are well documented: The report's findings indicate that 71 of 96 agencies (74%) participating in the process had cybersecurity programs that were either "at risk" or at "high risk." (The report defines the term "high risk" as "Key, fundamental cybersecurity policies, processes, and tools are either not in place or not deployed sufficiently"; the term "at risk" applies to agencies where "Some essential policies, processes, and tools are in place to mitigate overall cybersecurity risk, but significant gaps remain.") — Federal Cybersecurity Risk Determination Report and Action Plan

The Lack of US Federal Regulation. As of 2021, there are only a few industries that have specific security regulations that must be followed. Cybersecurity is exceedingly challenging to implement. It is unreasonably expensive, the talent doesn't exist, and most of the policymakers writing our laws have little to no understanding of what a bit or a byte is. As of this writing, 38 states are considering some sort of security legislation, but the vast majority of those states are considering privacy-focused legislation. This approach fragments the information security guidance that organizations must follow, which prioritizes compliance over security. It is time for the federal government to step up and implement a baseline standard for information security implementation that scales with organizational size and risk. Otherwise, there will never be sufficient consequences to convince business leaders to act on strong security.

Trust but Kind of Verify. Although a cybersecurity law at the national level would help standardize security implementation across organizations big and small, events like the SolarWinds breach will continue to happen. Information security, no matter how much money or time you put into it, isn't perfect. Until then, we must refer to the old adage "Trust, but Verify". This is typically said at the leadership level, in relation to making assumptions. From an information security standpoint, too often the policy is "trust but kind of verify". The problem is that it is just too difficult to fully validate one's security practices. The SolarWinds breach was a supply chain breach. Security for thousands of companies was placed in the hands of a third party, which was ultimately breached. Who is responsible for this? Is it SolarWinds? Or is it the management processes and validation of vendor security?

The SolarWinds Breach Is a Wakeup Call to Anyone Using Cloud. With the majority of organizations shifting their computing to the cloud, the security of major cloud vendors will become more important than ever. As evidenced by the Office 365 component of this breach, the notion that the cloud is more secure is fading.
Organizations must begin to choose, for the first time in years, to vertically integrate their information technology once again. The long-term consequences of supply chain security may force organizations to transition their assets to a private cloud model.

Keep on Checking the Box. Ultimately, information security professionals will continuously implement security up to the limits of their resources. For many organizations, that is simply their goal. It is one thing to say that you do information security. It is an entirely different level of cost and dedication to implement security with excellence, something my company, The Penn Group, values. Most organizations simply opt for checking the box, which results in breaches becoming more severe and more common. The tension between checking the box and defending the organization will continue to pull, and in 2021, we may see that pull begin to twist its way into collateral damage in other organizations, as we have seen with the SolarWinds breach.
https://medium.com/age-of-awareness/here-is-why-the-recent-cyber-attacks-have-been-so-bad-2e523452d6ec
['Austin Harman']
2021-01-03 14:52:27.588000+00:00
['Politics', 'Information Technology', 'Business', 'Cybersecurity']
2,306
The Plumo Ceremony
We are excited to kickoff the trusted setup for Plumo with a multi-party computation ceremony — together with the Celo community and friends. You can find the form to sign up here for the initial phase. What is Plumo? Plumo is a zero-knowledge SNARK based syncing protocol that takes the “lightest sync” mode to the next level. By reducing the time and data needed to sync the blockchain by multiple orders of magnitude, Plumo enables even the most resource constrained mobile devices to transact trustlessly on the Celo network. The SNARK at the heart of Plumo enables light clients to sync with the Celo network via ultra-light sync mode. A single SNARK generated by the Plumo protocol can verify over 100 epoch headers instantaneously, allowing light clients to verify this in a quick, light, and trustless way. This results in sync speeds that are orders of magnitude faster, with improvements in the amount of data you need to sync with the Celo network by a factor of around 1,000,000 relative to other networks. Plumo is one of the largest SNARKs to be deployed. Internally, it proves the correctness of the evolution of Celo epochs, just as a light client would have seen. This means that essentially it proves the light client protocol. It does this through verifying over 100 BLS signatures that use a combination of a Bowe-Hopwood variant of a Pedersen hash and Blake2s for its hash to curve, and performs the consistency checks of validator elections, asserting that ⅔ of the validator set agreed on the next set. Plumo uses the two-chain of BLS12–377 and BW6 to create BW6 proofs that ultra-light clients verify. This allows them to verify a large number of epochs by verifying a single SNARK proof. The SNARK requires one final step before it can go live on mainnet — the Plumo MPC Ceremony, which requires the help of Celo friends and community. What is the Plumo Ceremony? A Multi-Party-Computation (MPC) is a cryptographic mechanism for different parties to jointly perform a computation. SNARK circuits require a “trusted setup” where parties work together to generate shared parameters that can be used to prove and verify SNARKs. If one person ran this setup, then they could potentially prove incorrect things by exploiting a backdoor in the circuit. However, with an MPC, this setup process is split amongst tens or hundreds of contributors, and if even one of the participants is honest (keeps their inputs private), then the system will be secure. In the case of the Plumo Ceremony, this collective computation will be a series of joint actions done by a group of participants from within the Celo community and beyond. They will be working to perform an MPC that secures the SNARK proving the Plumo ultralight client protocol. The ceremony will consist of rounds of about 6–10 participants each running the Plumo setup software for a certain period of time. Each round will last approximately 36 hours. While much of the activity is passive and involves simply running the computation on a desktop machine, for the Ceremony participants should feel confident with running commands in the terminal and destroying USB keys. This will allow participants to feel comfortable with the commitment so that the Ceremony can run smoothly. 
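The following toy sketch is not the Plumo setup software, which operates on elliptic-curve parameters with far more structure; it is only a numerical analogy of the property described above: each participant mixes private randomness into a running public parameter and then destroys it, so the setup stays secure as long as at least one contribution is genuinely discarded.

```python
import secrets

# Toy analogy only: real ceremonies work on elliptic-curve points, not plain
# modular arithmetic, and the actual Plumo software handles far more structure.
P = 2**127 - 1          # a large prime modulus for the toy example
param = 5               # publicly agreed starting parameter

for participant in range(10):
    s = secrets.randbelow(P - 2) + 2   # each participant's private randomness
    param = pow(param, s, P)           # mix the secret into the running parameter
    del s                              # "destroy the toxic waste": the secret is discarded

# As long as at least one participant really discarded their secret, nobody
# can reconstruct the combined exponent behind the final public parameter.
print(param)
```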
The SNARK being secured by the Plumo Ceremony is one of the most complex yet powerful SNARKs to ever be secured, and the outcome can be used not only by cLabs & the Plumo construction, but also by any project that uses the BW6 curve with the Groth16 proving system or any other based on polynomial commitments. By participating in this Ceremony, participants can be part of something that is a public good, and will hopefully be used to power many more systems to come. How do I get involved? If you are interested to know more about Plumo and participate in Phase 1 of the Ceremony, please fill out the form here. You can also join the Celo Discord if you want to ask questions or discuss more in the #plumo channel. Read and watch here for more resources on Plumo: *Please read the Terms & Conditions of the Plumo Ceremony here.
https://medium.com/celoorg/the-plumo-ceremony-ac7649e9c8d8
[]
2020-12-23 21:35:38.524000+00:00
['Zero Knowledge Proofs', 'Technology', 'Cryptography', 'News', 'Blockchain']
2,307
#SupplyChainTech: What Is Supply Chain Technology, and Why is it the Biggest Investment Opportunity of Our Lifetime?
Everywhere You Look, the Global Supply Chain Is a Mess ($). That is the surprising title of an article published by the Wall Street Journal on March 17, 2021. However, it is only surprising if you have not been paying attention to the state of the world since the World Health Organization declared the COVID19 pandemic just over a year ago. Moreover, as I observed in my OpEd published by Morning Consult, during the time since that declaration: The UN Secretary General, Democratic members of the House Select Committee on the Climate Crisis, President Biden, and the CEO of BlackRock — the world’s largest asset manager, have found agreement on one thing: We can no longer ignore climate change. Increasingly, as my partner, Lisa Morales-Hellebo, and I have been speaking with potential investors in our venture fund — REFASHIOND Ventures, about our investment thesis, we are asked the question; “What is supply chain technology?” We have previously touched on this topic indirectly in our October 2019 blog post, which we turned into a booklet in March 2020, The World Is A Supply Chain. However, given the frequency with which the question has come up more recently, a more direct answer seems in order. In this blog post; I will put forward a definition of supply chain technology (#SupplyChainTech) that builds on our previous work, I will explain how we think about the scale and scope of the opportunity — in other words I will explain why #SupplyChainTech is important, I will discuss some areas of application that we are focused on at REFASHIOND Ventures, I will discuss some of the opportunities for different categories of investors, and finally I will conclude with some final observations. This blog post is centered on our observations, since 2015/2016, that unstoppable environmental, socioeconomic, and technological forces are ushering the world towards a golden age of technology-driven innovation in global supply chains. Those observations motivated us to launch The Worldwide Supply Chain Federation in late 2017, and this blog post reflects what I have learned over the years of building the community into a distributed community with 12 chapters and nearly 10,000 members and followers across various social networking platforms. What is the definition of Supply Chain Technology? In his book, Logistics and Supply Chain Management, Martin Christopher shared a definition of supply chain that I find particularly useful in the world that we live in today. It is taken from a 1998 Ph.D thesis by James Aitken, Supply chain integration within the context of a supplier association: case studies of four supplier associations. Here’s the definition: A supply chain is “ A network of connected and interdependent organisations mutually and co-operatively working together to control, manage and improve the flow of materials and information from suppliers to end users. “ This means that: #SupplyChainTech is any technology or innovation that enables and facilitates the control, management, and improvement of the flow of materials and information from suppliers to end users in the context of a network of connected and interdependent organizations working mutually and cooperatively with one another to satisfy the needs and wants of customers or end-users. Why is #SupplyChainTech important? 
#SupplyChainTech is important because everything that we depend on for the life that we have become accustomed to exists because there’s an underlying supply chain that makes its production, storage, transportation, and delivery to us possible. The world we know would not exist but for supply chains. Technology and innovation applied to supply chains ensures that we can meet the environmental, sustainability, and governance requirements necessary to ensure that life on earth continues to be possible for an indefinitely long period of time in the future. In The World Is A Supply Chain, we made the observation that: “If current trends hold, between 2015 and 2050 the world’s population is expected to increase by about a third, to roughly 10 billion people. According to Our World in Data, the world’s population stood at about 190 million people in the Year 0, and approximately 4 billion in 1975. In other words, the world’s population will jump by about 6 billion people over the 75 years between 1975 and 2050 after having only climbed to 4 billion people over the previous 1,975 years. This is happening, according to Our World in Data, despite the world’s population growth rate peaking at 2.2% per year in 1962 and 1963, and then declining to its current rate of about 1% per year.” The image below drives home what’s happening very clearly. Concurrently: The Climate Crisis is leading to an increasing frequency of severe weather events that are causing more frequent disruptions to global supply chains. Trade, economic, and political disputes between countries are causing disruptions to trans-national supply chains. Consumption continues to put a strain on the earth’s resources. Dynamic resource allocation is becoming a more pressing need in the context of everything else that is happening. The fundamental argument, which Lisa and I have made many times is this: Human activity is causing climate change. All human activity is driven by supply chains. We can’t solve the climate crisis until we refashion global, man-made supply chains so that they cause less harm to the environment. What changed starting in March 2020 is that the outbreak of SARS-CoV-2 and #COVID19 have made supply chains everyone’s business. We experienced this first hand, in 2019, whenever we met with potential investors in our fund, we often encountered blank stares and a lack of interest, summarized in statements like, “Isn’t supply chain just boxes and trucks? Why does anyone care about supply chain? Why would anyone build a technology venture fund for supply chain?” These days we encounter a much more nuanced appreciation of the role that supply chains play in the world. However, in my opinion, the signals have been talking since 2015/2016 when I started paying attention to supply chain logistics specifically. Each year since then, the evidence that supply chain is becoming more of a fixture in the public consciousness has become increasingly apparent to me as I have quietly observed what’s been happening around me, and in other parts of the world. Nothing else that human beings do matters as much as how manmade supply chains interact with natural supply chains to create the environment that we each experience daily. 
The bottomline conclusion reached by the Intergovernmental Panel on Climate Change, and documented in the October 2018 Special Report on Global Warming of 1.5°C is that reigning in and adapting to climate change will require economic, social, technological, and infrastructural transformations that are unprecedented in the history of humanity in scale, scope, and depth. Another way to think about this is that global GDP, which is entirely dependent on man-made supply chains is at risk. The image below from Statista shows how global GDP has grown since 1985. The #COVID19 pandemic demonstrates that all the progress that has been made in improving life for people in different parts of the world is put at risk when supply chains fail. #SupplyChainTech matters because it plays a central and critical role in the world’s ability to confront the challenge posed by the ongoing climate crisis. No other area of technology and innovation matters as much as #SupplyChainTech. How does the team at REFASHIOND Ventures think about Supply Chain Technology? Supply Chain can be broken down into 3 broad, distinct, but tightly interlinked categories; First — Supply Chain Management, which is concerned with the design and management of upstream and downstream production AND consumption networks. Supply Chain Management is mainly focused on consistently delivering superior customer service at reduced cost. Supply Chain management has the goal of improving the long-term financial performance of both each individual company within a supply chain AND the supply chain as a whole. Second — in the words of Martin Christopher, Supply Chain Logistics is “ . . . the process of strategically managing the procurement, movement and storage of materials, parts and finished inventory (and the related information flows) through the organisation and its marketing channels in such a way that current and future profitability are maximised through the cost-effective fulfilment of orders.” Third — Supply Chain Finance, which is a network of intermediaries, processes, relationships, and technologies that provide funding and financing methods to facilitate and enable the activities that occur within global supply chains. This is sometimes referred to by the term Trade Finance. It is important to note two things; First, everything around us exists because of a man-made or natural supply chain. Second, every industry has a unique and distinct supply chain as its foundation. At REFASHIOND Ventures, we think of ourselves as #SupplyChainTech specialists as it pertains to these broad categories of supply chain. However, we are generalists when it comes to the application of supply chain innovation, and technology to industry specific problems. This makes sense. The fundamental problems that supply chains solve are the same no matter the region of the world, or the industry. The specific details are different, of course, but the basic problem is the same. So we see our duty as early stage technology investors as taking what we learn in one industry, adjusting for the variables that are different in other industries, and then applying our knowledge to new opportunities where we believe the potential for venture scale returns exists. We think about supply chains in these broad segments; Decision Analytics (aka Data & Predictive Analytics) — which is about using data, information, and computational technologies to make better decisions in the face of uncertainty by harnessing the best of human creativity AND computational power. 
This was the focus of REFASHIOND Ventures’ inaugural Quarterly Executive Salon which we announced on March 11, 2021, featuring the work of Warren Powell — a professor emeritus of Princeton University, cofounder of Optimal Dynamics, and world renowned expert in Computational Stochastic Optimization and Learning. It is available on video here. Next-Generation Logistics — which is mainly about using technology to improve how society manages the logistics infrastructure and networks that facilitate the storage, transportation, and distribution of physical goods AND in the process to reduce harmful emissions and waste. Advanced Materials — which is about using advancements in computational technology AND material science to reduce waste through the creation of circular and regenerative supply chains. Advanced Manufacturing — which is about using advancements in computational technology, materials science, AND manufacturing processes to shift towards manufacturing that fulfils actual demand AND away from manufacturing that is done to fulfil anticipated demand which may never actually materialize. It is worth reiterating this point, which I am repeating from The World Is A Supply Chain. Supply chains play two critical functions: First, they enable the flow of goods and services from producers to consumers. Second, they facilitate the transfer of information about the movement of goods and services between every entity that is part of the supply chain network. What are some of the opportunities for different categories of investors? To keep this discussion as simple as possible, I will group investors into 3 broad groups; Part-time investors, Full-time investors, and government agencies. Part-time #SupplyChainTech Investors: In this group, I am including actual individuals as well as single- and multifamily offices making capital allocations, BUT who do not intend to pursue investing in supply chain technology as a full time focus. ALSO, I am thinking of investors in this group as investors who wish to pursue the risk and uncertainty that is characteristic of early-stage technology and innovation. Depending on the willingness such investors have to delve into the nuances of each investment that they might encounter, I suggest that they invest in a fund whose principals focus on supply chain as an area of specialization. Full Disclosure: Lisa and I are co-founders and general partners of REFASHIOND Ventures, and we are currently raising REFASHIOND Seed — the world’s first supply chain technology Rolling Fund TM, on AngelList. REFASHIOND Seed is open to every category of investor, with no geographic limitations. The reason I believe strongly that this is the best approach for individual investors to take is because supply chain technology is so complex and nuanced that encouraging investors to engage with #SupplyChainTech on a part-time basis would be both irresponsible and unethical. An important question these investors should ask themselves about the funds and managers they consider is this; What’s the manager’s ability to tap into an extended community network for all the benefits such a network could offer as it relates to the sources of persistent returns over time for an investment fund? As I have already pointed out previously in this article, Lisa and I have been building The Worldwide Supply Chain Federation since 2017. Our community gives us access to relationships, expertise, and other intangible assets we would not have access to without it. 
It is a cornerstone of our investment thesis and strategy. Over the long term the community and the fund have a symbiotic relationship with one another. We are constantly thinking about ways to strengthen and enhance the positive impact that each has on the other. Full-time #SupplyChainTech Investors: In this group I am thinking of any entity that is pursuing #SupplyChainTech as an area of full-time investment focus — across stages. This might occur in the context of a specialist supply chain technology investment thesis, or it might occur in the context of a generalist investment thesis. The challenge that could arise in the context of a generalist fund, especially at the early stage, is that if the key decision-makers are not innately and personally excited about supply chain technology, there’s a very high likelihood that the fund will fail to invest in startups and companies that subsequently go on to become some of the best opportunities one could have found and invested in. Also, it is important for capital allocators to investment funds to determine what the fund manager means when they describe themselves as a supply chain investor. As an acquaintance said to me; “In Asia when someone says ‘supply chain’ they probably really mean to say manufacturing. In North America when someone says ‘supply chain’ they probably really mean logistics.” She’s right. I know a few early stage funds that are actually logistics technology investors. There’s nothing wrong with that, but investors allocating capital to fund managers should know the difference between a more holistic supply chain fund as I have defined it here versus a fund that is taking a more focused approach. There are pros and cons to each approach, and limited partners should think this through before allocating capital according to their unique preferences and assessments of each manager’s capabilities. The reader might ask where corporations fit. I’d lump them with full-time investors. The reason is quite simple; Supply chain is the basis on which companies compete. Any company that is not harnessing technology and innovation to improve its operations both internally AND across its extended supply chain is courting loss of market share and competitiveness. If I were managing an innovation program for a corporation and operating a corporate venture capital firm was part of our strategy, I would adopt an outside-and-inside strategy. What does that mean? The inside strategy would center on having an internal team making venture investments on behalf of the firm. The outside strategy would involve the firm making a number of limited partner investments in a number of independent venture firms whose investment theses complement the firm’s long term interests, but whose investments would be more bleeding-edge than those that would reasonably be made by the team executing the inside strategy. Think of any company that we objectively consider dominant within its industry. Invariably, that company will also be one that is objectively judged as having a superior supply chain in relation to its industry peers. It will also invariably be the firm that’s comparatively more willing to push the envelope when it comes to deploying technological innovations within its internal operations, and across its extended supply chain. Government Agencies and #SupplyChainTech for Economic Growth: As we state in The World Is A Supply Chain, the technological transformation of global supply chains is both an economic problem and an economic opportunity. 
Furthermore, In The Supply Chain Economy: A New Framework for Understanding Innovation and Services, Mercedes Delgado and Karen Mills state that; “The U.S. supply chain contains 37% of all jobs, employing 44 million people. These jobs have significantly higher than average wages, and account for much of the innovative activity in the economy.” ( Summary, 05/2016 Paper via Harvard Business School website.) The observation by Delgado and Mills poses an interesting question to government agencies seeking to create jobs and boost economic productivity and performance for their constituents; How could this insight, and others in Delgado and Mills’ research be harnessed to develop engines of job creation and prosperity? I made a tentative attempt to tackle that question in my January 22, 2021 FreightWaves column, Commentary: Time to turn rest of US into innovation factory. It is also one of the long-term visions of the community we are building through The Worldwide Supply Chain Federation — to partner closely with organizations like Blue North, as well as state and local governments to harness the power and potential of grassroots-driven supply chain innovation and technology communities to serve as catalysts and engines for job creation and economic growth. Some Closing Thoughts My intention in writing this blog post is to start a conversation amongst investors, entrepreneurs, corporate executives, and any other group of people interested in the future of the world and the role that supply chain, innovation, and technology will play in that future. To end this post, here are two observations I feel are worth highlighting; First, although I argue that climate change is driven by mankind’s choices in designing, managing, and operating man-made supply chains, I do not think that all climate change technology (#ClimateTech) should be classified as #SupplyChainTech. I believe that all #SupplyChainTech is #ClimateTech, but not all #ClimateTech is #SupplyChainTech. For example, carbon capture and sequestration technology would not fall under my definition of #SupplyChainTech although it is clearly #ClimateTech. Second, because supply chains run on energy, innovation and technology transformation in energy supply chains is one of the areas of greatest promise and consequence. Mankind’s transition from fossil fuels to more sustainable and renewable sources of energy is one of the most exciting opportunities there is. However, I am of the opinion that the transition will be slower than we would like, and so I am as interested in technologies that disrupt energy supply chains as I am interested in technologies that incrementally but meaningfully make our use of fossil fuels less harmful to our planet. True disruption in supply chains will be of both the demand-side and supply-side varieties, and the effects will only be felt by incumbents when the two forms of disruption happen simultaneously. This a topic I started exploring on my own in July 2015, in Notes on Strategy; Where Does Disruption Come From?, and again, in October 2018 with Lisa, in Where Will Technological Disruption in The Fashion Supply Chain Come From? At REFASHIOND Ventures we understand that because of climate change, industrial supply chains have become the single source of the world’s biggest problems — COVID19 has made that blindingly obvious. This also means that the transformation of global supply chains offers the world’s biggest business opportunities. 
Our job as managers of our investors’ capital is to develop prescience about where venture-scale investment opportunities exist within the vast landscape of global supply chains, and then to harness uncertainty and risk in favor of generating investment returns for limited partners in REFASHIOND Ventures’ funds, while simultaneously empowering risk-taking entrepreneurs and innovators as they set out to secure our future and our children’s futures.
https://medium.com/@brianlaungaoaeh/supplychaintech-what-is-supply-chain-technology-and-why-is-it-the-biggest-investment-d9592b4c779b
['Brian Laung Aoaeh']
2021-03-20 02:51:38.225000+00:00
['Technology', 'Startup', 'Venture Capital', 'Innovation', 'Supply Chain']
2,308
Should We Clone Extinct Animals?
Perspective Piece Should We Clone Extinct Animals? Is it money, genetics, or ethics? Why haven’t we started cloning extinct animals? Image by Geran De Klerk on Unsplash “Why can’t we just clone extinct animals and bring them back to life?” With the recent death of the last male northern white rhino, scientists and animal lovers all around the world are starting to consider this solution seriously. The answer to that seemingly straightforward question is not simple. Let me walk you through the difficult process of cloning and why cloning extinct species might not be the best idea just yet. Let’s start with a basic term: “de-extinction”. De-extinction is the modern term scientists use when referring to bringing back animals that are already extinct. It’s like when you accidentally delete your final semester essay, freak out for two minutes, and then realize you could just press the “undo” button. Except, with animals, it’s not that easy. There are three methods to de-extinct an animal, but since I don’t want to bore you with all the details, we’ll get straight to cloning. “Cloning” is the process of creating a genetically identical copy of a cell or an organism. The term became relatively popular when the movie “Jurassic Park” came out in the early nineties. Cloning, in the real world, occurs more often than you think; for example, when bacteria produce an identical copy of themselves using asexual reproduction (only one parent is needed for reproduction). Now, let’s get to the juicy questions: have scientists ever cloned animals? Yes. Almost 25 years ago, in 1996, Ian Wilmut, an experienced embryologist, managed to clone a lamb known as “Dolly”. Although she lived only about half of a sheep’s normal lifespan, Dolly was the first mammal to be successfully cloned from an adult cell. Was it hard to clone Dolly? Yes. It took the scientists 277 cell fusions, of which only 29 developed into early embryos, plus a 148-day pregnancy to successfully produce the clone. That’s right: more than four months just to clone a single sheep. Now imagine the time it would take to clone an entire species of extinct animals: a whole lot. Image from howstuffworks.com “How Cloning Works” But that was almost twenty-five years ago, right? Surely we must be able to clone extinct species easily nowadays. Well, you might want to take a look at this: a list of all cloned animals, failed attempts, and published genome sequences. As you can see, there have been many cloning experiments that have either failed or succeeded, yet there has only been one experiment on an extinct species. That was the Pyrenean ibex, a wild goat which used to live in the Pyrenees, in the border region of northern Spain and southern France. Embryos carrying the ibex’s DNA were implanted into fifty-seven surrogate mothers, but only one of those mothers managed to give birth to the clone. Despite being the first extinct mammal to be brought back to life, the 4.5-pound clone only managed to survive for a mere ten minutes before dying due to respiratory problems. So as you can tell, cloning an extinct animal isn’t easy to do. Yet, to even get to cloning an animal, you need to finance your project first. This takes us to the next little detail: money. According to MarketWatch, it takes $85,000 to clone a horse, $50,000 to clone a dog, and $35,000 to clone a cat. Despite the roughly 67% price drop from the initial $150,000 dog cloning fee in 2008, the price is still quite high. Now, considering the cloning price of a single dog, imagine trying to bring back to life a whole herd of mountain goats. That’s not going to work anytime soon.
Another problem with cloning would be the limited variety of DNA samples available from the extinct species. There would have to be numerous samples from different individuals to properly bring back a species from the dead. “Why?” you may ask yourself. Well, it’s as simple as this: some people have more immunological weaknesses than others because everyone has different DNA. That’s partly why only about 3 in 100 people die of Covid-19. Now imagine a similar virus spreading throughout a species where every animal has the same genetic composition; instead of 3 in 100, the mortality rate could approach 100 in 100. Now, just for a second, imagine if all the requirements that we just talked about were met. Even then, we would still be missing quite an important variable: ethics. Experts are still debating whether de-extinction is the correct way to go. On one side, de-extinction advocates say that since we caused the extinction of those animals, we should be the ones to bring them back. Ben Novak, ecologist and head of the “passenger pigeon project” at Revive & Restore, argues that bringing back extinct species is all about “ecological restoration and function”. For example, the woolly mammoth, an animal that went extinct almost 4,000 years ago, could improve the soil quality of the tundra like it once did during the last ice age. On the other side, some people believe that even if scientists somehow managed to bring back a whole species of animals, humans would just find another way to make them extinct again. They argue that the real problem that we have to fix resides in society itself. We may be able to bring them back to life a million times, yet the arrogance, greed, and thirst for power in human nature would naturally cause their extinction over and over again. Animal advocates also argue that cloning efforts would just take away funding from current conservation efforts. No matter the choice you make, we need to respect the lives of the animals that exist now. Even if we don’t end up cloning extinct animals, you can still do something to help the ones that are suffering today. Get informed, find solutions, and share them with everyone you know; it’s not about finding solutions for future problems, it’s about solving the ones we have now.
https://medium.com/creatures/should-we-clone-extinct-animals-fdd4777d2553
['Bruno Rojas Lopez']
2020-12-08 16:01:56.623000+00:00
['Technology', 'Animals', 'Education', 'Creative', 'Extinction']
2,309
Azure: What Is It and What Should We Know About Cloud Computing?
During a talk with friends, we started discussing the paradigm shift at a client we are developing a big system for. This client moved its whole infrastructure from its own datacenter to Azure. One of my friends asked me: “Azure?! What is that? I’ve heard about it, but I have no clue.” After that conversation, I kept thinking about the fact that lots of people in the IT industry don’t know about Azure or how to use Cloud Computing, and that will limit the positions they can apply for in the future since, in the coming years, it will be essential to know about Cloud Computing to be able to develop systems or support them. When we talk about Cloud Computing, we refer to a model for delivering computing services over the internet. It is booming right now because the computing paradigm we knew up until a few years ago has changed and is now based on cloud services, one of the most popular being Microsoft Azure. A Brief Introduction to Azure With the boom of the “.com” domains, more software was developed, and that continued to increase during the following years. In order for software to run, we need hardware, and the thing is that hardware has a cost. Before Azure, workloads had to be sized in order to calculate how much hardware was necessary to run certain software. The problem was that sizing was based on peak levels, and hardware was bought to meet that peak demand. But what happened if those stipulated levels weren’t reached, or if there were moments in which the application wasn’t used? The money spent on hardware was not amortized. It was actually wasted. Not to mention how quickly the equipment fell short and companies had to invest again to upgrade it. With that in mind, Microsoft Azure was announced in 2008, with the aim of having dynamic data centers in the cloud. The problems due to power outages or internet interruptions at data centers were left behind. On top of that, due to geographical redundancy, if a data center goes down, Azure automatically starts working in another region. And, most importantly, hardware turns into a service. This means that we can add or remove hardware from our virtual machines on demand, according to use. Thus, the following 3 concepts emerge: Azure as PaaS Windows Azure was born as PaaS, that is to say, platform as a service. This area is in charge of server management and the hosting of websites (up to 10 shared machines). It is also where software is deployed. Azure as IaaS Infrastructure as a service (IaaS) allows us to create virtual machines with the operating system of our choice. It is also possible to choose the number of processing cores and the size of the RAM, and to import virtual disks. Moreover, through the PowerShell tools, we can execute complex scripts. Azure as SaaS Software as a service is Azure’s most popular aspect. This is where we can find, for example, Office 365 or Microsoft Dynamics 365. These 3 concepts provide the dynamism that allows us to create the hardware we’ll later use in our developments, and thus to scale up or remove resources when necessary, which results in money savings when usage is below peak levels. Now, going back to the main topic: why is it necessary to know about Azure in order to develop software? Developing software entails 3 main aspects: the code, the compiler, and the database. Before Azure, those three aspects had to be available within the company.
Azure provides PaaS and IaaS, making them available to any device from anywhere in the world, which means that we can have a distributed workforce without great hardware and communication efforts. This changed the trends and the development paradigm. One of the innovations it supports is the use of Docker (which packages everything necessary for the developed application to run autonomously within a container). Using Docker usually entails several things. First, a Docker image repository, which is where we store the image so that it can be used. For that purpose, Azure offers Azure Container Registry, a private repository that allows us to store images that will be deployed in the infrastructure. We’ll also need an infrastructure that pulls that image and uses it to make the application inside it available. In that regard, we have Azure Kubernetes Service (AKS), which runs Docker images and makes them available through a service or ingress, so that we can access them through an IP or a public DNS name. As Linus Torvalds would say: “Show me the code”. Here’s a Kubernetes (K8s) implementation. To make it work, we have to create a YAML file, which will tell K8s what to create, and it is divided into 2 parts: 1. A deployment, which specifies the referenced Docker image, the repository where it is stored, the conditions under which the container will run, and the port on which it will listen for requests. 2. A service, which is in charge of telling the world how to access the pod. A pod is a group of one or more containers (such as Docker containers) with shared storage/network and some specifications on how to execute those containers. Let’s take a look at the YAML code for the Deployment and the Service; a minimal illustrative sketch of both appears at the end of this article. However, this wouldn’t be complete without a data layer. Azure allows us to create different kinds of managed data stores, such as SQL Server, MongoDB, MySQL, or Oracle, without having to buy servers for each one of them (a great advantage in terms of cost and performance). With a clear idea of the development part, the following question comes up: how can we deploy a Docker application to Azure? The answer is Azure DevOps. Azure DevOps is a Microsoft platform that allows us to perform all kinds of actions within a DevOps flow, from ALM (Application Lifecycle Management), continuous integration, and package management with Azure Artifacts, to deployment on different environments, repository management with Azure Repos, and the creation of test plans. It is organized into projects, it offers complete role-based user management with different privileges per group, and it works with GitHub and many other repositories. Azure DevOps main screen. What can we do with Azure DevOps? We can automate our deployments, and through the dashboard we can track our projects, their sprints, and more. Due to the complexity of the projects that a solution involves, it’s essential to have a deployment administrator in order to control everything that is developed and deployed. And if this seems too good, the best part is that it’s free: we can use Azure DevOps to manage our projects (of course, with some constraints). Azure has a pay-as-you-go subscription model, and Azure DevOps allows up to 5 users to work for free. So, let’s get to it… We create a pipeline, which will be in charge of building Docker images according to the Dockerfile, and then running the unit tests to check that the changes haven’t introduced bugs into the existing code.
This will generate an artifact, which the release module will pick up and deploy to K8s. Once we’ve concluded this process, we’ll have our application running in K8s with its corresponding services. Putting it all together (the container registry, the AKS cluster, the data stores, and the Azure DevOps pipeline) is what the implementation of a Cloud Computing solution looks like.
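To make the two-part YAML file described above more concrete, here is a minimal, illustrative sketch of what such a Deployment and Service might look like; the names, image tag, port, and replica count are hypothetical placeholders, assuming an image that has already been pushed to Azure Container Registry:

```yaml
# Deployment: tells K8s which image to run, how many replicas to keep, and which port the container listens on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                                       # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: myregistry.azurecr.io/demo-app:1.0.0  # hypothetical image stored in Azure Container Registry
          ports:
            - containerPort: 80
---
# Service: tells the world how to reach the pods created by the Deployment
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer        # exposes the pods through a public IP
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 80
```

Applying a file like this with kubectl apply -f creates the pods and exposes them through a public IP, which is the behavior described above; an ingress could be used instead of a LoadBalancer service when a public DNS name and routing rules are needed.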
https://medium.com/flux-it-thoughts/azure-what-is-it-and-what-should-we-know-about-cloud-computing-1a418e5e562
['Demian Sclausero']
2020-12-17 14:13:18.296000+00:00
['Kubernetes', 'Docker', 'Technology', 'Cloud Computing', 'Azure']
2,310
The Perfect Bloody Storm Is Coming: Tesla Buying Bitcoin
Image from the author For the last thousand years, humankind has never created a store of value owned by ordinary people without the hand of the powerful ones. Bitcoin is a savings account in cyberspace that it’s available to 7.8 billion people where no politician can steal your money. That’s a simple idea everybody on earth needs to have. It’s the first truly engineered monetary network in the history of the world. It’s a closed thermodynamic system that doesn’t lose power. If you put a hundred million dollars in the system, it’s like you have a battery with no power loss.- Michael Saylor Michael Saylor is the CEO of a company called Microstrategy. A couple of weeks ago, he made the most life-changing move in the financial world. Saylor was in his board meeting, having a courageous conversation with his staff. He concluded that having $500,000,000 in cash in the bank or 3-month treasuries was a liability: it was worthless. He said that the monetary expansion rate went from about 5% a year to 15%. They would lose 15% of their purchasing power every year for the next 4 or 5 years. A block of $500,000,000 in cash was starting to look like a melting cube. It was almost sure that they would lose at least half of that money in 36 to 48 months if they did nothing. So, they did it. They bought Bitcoin. 500 million of it. Photo by Dmitry Demidko on Unsplash Be Brave Enough To Start a Conversation That Matters Some conversations have the power to change the world. And as far as I know, Elon Musk has an invitation to a real conversation with Michael Saylor. From one rocket scientist to another. For now, they just tweeted to each other. Image by the author I wonder if this conversation has already happened? And more importantly, I wonder what Elon Musk’s final impressions of Saylor’s explanations were. Michael Saylor gave a bunch of interviews in the last weeks, as you can imagine. I did analyze two of them. One from Real Vision with Raoul Pal. The other one with Galileo Rush from Hyperchange. And these are the top issues I’ve learned from Saylor: Amazon dematerialized the storefronts. Facebook — the social networks. Apple — the mobile devices. Google — all kinds of libraries. Tesla — the auto and energy industry. Bitcoin — the monetary system. Bitcoin becomes the first monetary network because it’s a synthetic safe-haven asset. It’s like all the gold’s right parts, and it’s sitting on a massive network. It has a 21 million coin cap/limit. After that, miners can’t produce any more coins. And it’s been secure for the last 12 years. It’s adopted and regularized by congress and the monetary regulators as a legal asset. Bitcoin is a savings account in cyberspace that’s available to 7.8 billion people. Where no politician can steal your money. There’s inflation in bonds and stocks. There’s inflation in real estate and gold. Inflation measure what the central bank tracks. It’s a market basket of consumer goods and services. It doesn’t include the higher volatile food and energy. These are two separate worlds right now. If you have something like Bitcoin and can’t make any more of it, everybody wants it. It’s a simple concept, and it’s going to double its value as central banks double their supply of money. You can move one hundred million dollars of Bitcoin from New York to London in a few minutes, and it’ll cost you $3. It would cost you one million dollars to do that with gold. We’re in an era where all the established principles taught to us are being called into question.
https://medium.datadriveninvestor.com/the-perfect-bloody-storm-is-coming-tesla-buying-bitcoin-c9df6a330e7
['Nuno Fabiao']
2021-02-11 15:32:16.169000+00:00
['Bitcoin', 'Money', 'Tesla', 'Disruptive Innovation', 'Technology']
2,311
Amazon and Apple Have Never Been Less Scared
Amazon and Apple Have Never Been Less Scared Jeff Bezos testifies via video conference during the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law hearing on Online Platforms and Market Power on July 29, 2020. Photo: Graeme Jennings-Pool/Getty Images Practically everyone agrees that the biggest and most familiar tech companies — the so-called FAANG gang: Facebook, Amazon, Apple, Netflix, and Google — are overdue for a regulatory comeuppance. Experts maintain it could be arriving soon: After all, many of these firms’ CEOs have been hauled before congressional committees repeatedly, and rumors of potential antitrust action abound. So how are those threatened giants behaving under that heat? Well, one popular target of regulatory rumors, Amazon, just announced that it’s using its supersized market power to muscle its way into yet another consumer category: prescription drugs, which it is now promising to make available on a two-day delivery scheme for members of its Amazon Prime service. The news instantly whacked a combined $10 billion off the valuations of Walgreens and CVS, demonstrating the market’s appraisal of Amazon’s muscle. Meanwhile, fellow FAANG Netflix, which has enjoyed an influx of new customers thanks to everyone being trapped at home, seems to have realized that we’ll be stuck binge-watching for a while longer, and raised its prices. (Rival streamer Hulu is charging more, too.) Google announced last week it will end free unlimited storage for its Google Photo service, to the consternation of users who signed on for that very feature. And Apple, lately under pressure for charging app developers a whopping 30% cut of sales through its app store — a policy that has sparked regulatory scrutiny, complaints from app creators, and a lawsuit from Fortnite maker Epic Games — announced a new policy this week: It will now charge smaller app-makers 15%, which is hardly a capitulation. In other words, the tech giants aren’t acting worried. If anything, they’re as brashly ambitious as ever. The Amazon example is particularly stark: Just weeks ago, a supposedly “damning” Congressional report on the tech giants painted the company — simultaneously the biggest e-tailer and a major e-commerce marketplace — as an outsize influence in digital commerce, abusing its massive data-collection abilities to undercut competition. The European Union has just charged Amazon with antitrust violations on similar grounds. But is Amazon running scared? Are any of the FAANGsters? The evidence suggests just the opposite. Even Apple’s erstwhile concession seems, on closer look, mostly tactical. The 15% fee applies to developers who made less than $1 million through the app store in the prior year — a big group, but one that by one estimate accounted for just 5% of App Store revenue in 2019. This is more of a fig leaf than a sacrifice — Epic Games says the move is designed to “divide app creators” — and remains a steep price that any fledgling app-maker has almost no choice but to pay. As with Amazon continuing to throw its weight into new retail sectors, that’s not FAANG running scared. That’s just running as hard as ever — and daring anyone to catch them.
https://marker.medium.com/amazon-and-apple-have-never-been-less-scared-5be5e0d16fa9
['Rob Walker']
2020-11-20 15:47:39.700000+00:00
['Companies', 'Regulation', 'Amazon', 'Apple', 'Technology']
2,312
The Cost of Avast’s Free Antivirus: Companies Can Spy on Your Clicks
Avast is harvesting users’ browser histories on the pretext that the data has been ‘de-identified,’ thus protecting your privacy. But the data, which is being sold to third parties, can be linked back to people’s real identities, exposing every click and search they’ve made. By Michael Kan Your antivirus should protect you, but what if it’s handing over your browser history to a major marketing company? Relax. That’s what Avast told the public after its browser extensions were found harvesting users’ data to supply to marketers. Last month, the antivirus company tried to justify the practice by claiming the collected web histories were stripped of users’ personal details before being handed off. “The data is fully de-identified and aggregated and cannot be used to personally identify or target you,” Avast told users, who opt in to the data sharing. In return, your privacy is preserved, Avast gets paid, and online marketers get a trove of “aggregate” consumer data to help them sell more products. There’s just one problem: What should be a giant chunk of anonymized web history data can actually be picked apart and linked back to individual Avast users, according to a joint investigation by PCMag and Motherboard. How ‘De-Identification’ Can Fail The Avast division charged with selling the data is Jumpshot, a company subsidiary that’s been offering access to user traffic from 100 million devices, including PCs and phones. In return, clients, from big brands to e-commerce providers, can learn what consumers are buying and where, whether it be from a Google or Amazon search, an ad from a news article, or a post on Instagram. The data collected is so granular that clients can view the individual clicks users are making during their browsing sessions, including the time down to the millisecond. And while the collected data is never linked to a person’s name, email, or IP address, each user history is nevertheless assigned to an identifier called the device ID, which will persist unless the user uninstalls the Avast antivirus product. For instance, a single click can theoretically look like this: Device ID: 123abcx | Date: 2019/12/01 | Time: 12:03:05 | URL: the Amazon.com checkout page for an Apple iPad Pro. At first glance, the click looks harmless. You can’t pin it to an exact user. That is, unless you’re Amazon.com, which could easily figure out which Amazon user bought an iPad Pro at 12:03:05 on Dec. 1, 2019. Suddenly, device ID 123abcx is a known user. And whatever else Jumpshot has on 123abcx’s activity, from other e-commerce purchases to Google searches, is no longer anonymous. PCMag and Motherboard learned about the details surrounding the data collection from a source familiar with Jumpshot’s products. And privacy experts we spoke to agreed the timestamp information and persistent device IDs, along with the collected URLs, could be analyzed to expose someone’s identity. “Most of the threats posed by de-anonymization, where you are identifying people, comes from the ability to merge the information with other data,” said Gunes Acar, a privacy researcher who studies online tracking. He points out that major companies such as Amazon, Google, and branded retailers and marketing firms can amass entire activity logs on their users. With Jumpshot’s data, the companies have another way to trace users’ digital footprints across the internet. “Maybe the (Jumpshot) data itself is not identifying people,” Acar said. “Maybe it’s just a list of hashed user IDs and some URLs.
But it can always be combined with other data from other marketers, other advertisers, who can basically arrive at the real identity.” The ‘All Clicks Feed’ According to internal documents, Jumpshot offers a variety of products that serve up collected browser data in different ways. For example, one product focuses on searches that people are making, including keywords used and results that were clicked. We viewed a snapshot of the collected data, and saw logs featuring queries on mundane, everyday topics. But there were also sensitive searches for porn-including underage sex-information no one would want tied to them. Other Jumpshot products are designed to track which videos users are watching on YouTube, Facebook, and Instagram. Another revolves around analyzing a select e-commerce domain to help marketers understand how users are reaching it. But in regards to one particular client, Jumpshot appears to have offered access to everything. In December 2018, Omnicom Media Group, a major marketing provider, signed a contract to receive what’s called the “All Clicks Feed,” or every click Jumpshot is collecting from Avast users. Normally, the All Clicks Feed is sold without device IDs “to protect against triangulation of PII (Personally Identifiable Information),” says Jumpshot’s product handbook. But when it comes to Omnicom, Jumpshot is delivering the product with device IDs attached to each click, according to the contract. In addition, the contract calls for Jumpshot to supply the URL string to each site visited, the referring URL, the timestamps down to the millisecond, along with the suspected age and gender of the user, which can inferred based on what sites the person is visiting. It’s unclear why Omnicom wants the data. The company did not respond to our questions. But the contract raises the disturbing prospect Omnicom can unravel Jumpshot’s data to identify individual users. Although Omnicom itself doesn’t own a major internet platform, the Jumpshot data is being sent to a subsidiary called Annalect, which is offering technology solutions to help companies merge their own customer information with third-party data. The three-year contract went into effect in January 2019, and gives Omnicom access to the daily click-stream data on 14 markets, including the US, India, and the UK. In return, Jumpshot gets paid $6.5 million. Who else might have access to Jumpshot’s data remains unclear. The company’s website says it’s worked with other brands, including IBM, Microsoft, and Google. However, Microsoft said it has no current relationship with Jumpshot. IBM, on the other hand, has “no record” of being a client of either Avast or Jumpshot. Google did not respond to a request for comment. Other clients mentioned in Jumpshot’s marketing cover consumer product companies Unilever, Nestle Purina, and Kimberly-Clark, in addition to TurboTax provider Intuit. Also named are market research and consulting firms McKinsey & Company and GfK, which declined to comment on its partnership with Jumpshot. Attempts to confirm other customer relationships were largely met with no responses. But documents we obtained show the Jumpshot data possibly going to venture capital firms. ‘It’s Almost Impossible to De-Identify Data’ Wladimir Palant is the security researcher who initially sparked last month’s public scrutiny of Avast’s data-collection policies. 
In October, he noticed something odd with the antivirus company’s browser extensions: They were logging every website visited alongside a user ID and sending the information to Avast. The findings prompted him to call out the extensions as spyware. In response, Google and Mozilla temporarily removed them until Avast implemented new privacy protections. Still, Palant has been trying to understand what Avast means when it says it “de-identifies” and “aggregates” users’ browser histories when the antivirus company has refrained from publicly revealing the exact technical process. “Aggregation would normally mean that data of multiple users is combined. If Jumpshot clients can still see data of individual users, that’s really bad,” Palant said in an email interview. One safeguard Jumpshot uses to prevent clients from pinpointing the real identities of Avast users is a patented process designed to strip away PII information, such as names and email addresses, from appearing in the collected URLs. But even with the PII stripping, Palant says the data collection is still needlessly exposing Avast users to privacy risks. “It is hard to imagine that any anonymization algorithm will be able to remove all the relevant data. There are simply too many websites out there, and each of them does something different,” he said. For example, Palant points out how visiting the collected URL links for one user could consistently reveal which tweets or videos the person commented on, and thus expose the user’s real identity. “It’s almost impossible to de-identify data,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, who also took issue with an antivirus company monetizing users’ data. “That just sounds like a terrible business practice. They’re supposed to be protecting consumers from threats, rather than exposing them to threats.” ‘Mind Sharing Some Data With Us?’ We asked Avast more than a dozen questions concerning the extent of the data collected, who it’s being shared with, along with information about the Omnicom contract. It declined to answer most of our questions or provide a contact for Jumpshot, which didn’t respond to our calls or emails. However, Avast did say it stopped collecting user data for marketing purposes via the Avast and AVG browser extensions. “We completely discontinued the practice of using any data from the browser extensions for any other purpose than the core security engine, including sharing with Jumpshot,” the company said in a statement. Nevertheless, Avast’s Jumpshot division can still collect your browser histories through Avast’s main antivirus applications on desktop and mobile. This include AVG antivirus, which Avast also owns. The data harvesting occurs through the software’s Web Shield component, which will also scan URLs on your browser to detect malicious or fraudulent websites. For this reason, PCMag can no longer recommend Avast Free Antivirus as an Editors’ Choice in the category of free antivirus protection. Whether the company really needs your URLs to protect you is up for debate. Avast says taking the information directly and letting Avast’s cloud servers immediately scan them provides users with “additional layers of security.” But the same approach has its own risks, according to privacy researcher Gunes Acar, who said the safest way to process visited URLs is to never collect them. 
Google’s Safe Browsing API, for example, sends an updated blacklist of bad websites to your machine’s browser, so the URLs can be checked on your machine rather than over the cloud. “It can be done in a more private way,” Acar said. “Avast should definitely adopt that. But it seems they’re in the business of making money from the URLs.” On the flip side, Avast is offering a free antivirus product. The company also points out the browser history collection is optional. You can shut it off on install or within the settings panel. “Users have always had the ability to opt out of sharing data with Jumpshot. As of July 2019, we had already begun implementing an explicit opt-in choice for all new downloads of our AV (antivirus), and we are now also prompting our existing free users to make an explicit choice, a process which will be completed in February 2020,” the company said. Indeed, when you install the Avast or AVG antivirus on a Windows PC, the product will show you a pop-up that asks: “Mind sharing some data with us?” The pop-up will then proceed to tell you the collected data will be de-identified and aggregated as a way to protect your privacy. However, no mention is made about how the same data can be combined with other information to connect your identity to the collected browser history. Nor does the pop-up mention how Jumpshot can retain access to the data for three years. For that detail, you’ll have to look at the fine print in Avast’s privacy policy. As a result, users who see the pop-up may assume their data will be protected, and opt in when in reality, the privacy policies around tech products are often deliberately vague and simplified. “You want the consumer to buy or use your product. But you don’t want to scare them either,” said Kim Phan, a partner at legal firm Ballard Spahr, who works in privacy and data security group. The trade-off is that the policies can become opaque. “It’s harder to figure out what you’re doing,” she added. “People won’t be able to understand the details, or they will think you are trying to hide something.” In Avast’s case, the controversy around the largely unknown data-collection practices prompted enough scrutiny that US Sen. Ron Wyden decided to investigate. “Americans expect cybersecurity and privacy software to protect their data, not sell it to marketers,” he tweeted at the time. In a statement, Wyden said he was encouraged that Avast is ending the data collection through the company’s browser extensions. “However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data,” he added. “The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”
https://medium.com/pcmag-access/the-cost-of-avasts-free-antivirus-companies-can-spy-on-your-clicks-7f36f67c8fad
[]
2020-01-27 15:30:36.676000+00:00
['Cybersecurity', 'Antivirus', 'Privacy', 'Technology', 'Data']
2,313
Object-Oriented JavaScript — Multiple Inheritance
Photo by Eric Prouzet on Unsplash JavaScript is partly an object-oriented language. To learn JavaScript, we have to learn the object-oriented parts of JavaScript. In this article, we’ll look at multiple inheritance. Multiple Inheritance We can do multiple inheritance easily by merging properties from different objects into one object and then returning that object. For instance, we can write: function multi(...args) { let obj = {}; for (const arg of args) { obj = { ...obj, ...arg }; } return obj; } Then we can use it by writing: const obj = multi({ foo: 1 }, { bar: 2 }, { baz: 3 }); obj is then: {foo: 1, bar: 2, baz: 3} We pass in multiple objects and then copy their properties with the spread operator. Mixins A mixin is an object that can be incorporated into another object. When we create an object, we can choose which mixin to incorporate into the final object. Parasitic Inheritance Parasitic inheritance is where we take all the functionality from another object into a new one. This pattern was created by Douglas Crockford. For instance, we can write: function object(proto) { function F() {} F.prototype = proto; return new F(); } const baseObj = { name: '2D shape', dimensions: 2 }; function rectangle(baseObj, width, height) { const obj = object(baseObj); obj.name = 'rectangle'; obj.getArea = function() { return this.width * this.height; }; obj.width = width; obj.height = height; return obj; } const rect = rectangle(baseObj, 2, 3); console.log(rect); console.log(rect.getArea()); We have the object function, which returns an instance of F. Then we create the rectangle function, which takes baseObj, width, and height and incorporates them into the object returned by object. object sets baseObj as the prototype of F. Then we add own properties to obj before returning it. This way, if we log the value of rect, we get: {name: "rectangle", width: 2, height: 3, getArea: ƒ} And __proto__ for rect has: dimensions: 2 name: "2D shape" We can also call getArea as we did in the last console log, and we get 6. So we know this is referring to the returned object. Borrow a Constructor We can call a parent constructor from a child constructor. For instance, we can write: function Shape(id) { this.id = id; } Shape.prototype.name = 'Shape'; Shape.prototype.toString = function() { return this.name; }; function Square(id, name, length) { Shape.apply(this, [id]); this.name = name; this.length = length; } Square.prototype = new Shape(); Square.prototype.name = 'Square'; We have the Shape constructor, which takes the id parameter. We call the Shape constructor with apply so that this inside Shape refers to the Square instance being constructed. This sets this.id from within Square. Then we can populate our own properties. After that, we set Square’s prototype to a Shape instance. Then we set the prototype’s name to 'Square', since it is shared by all the Square instances. This way, we copy the properties from Shape to Square by setting Square’s prototype property. Photo by Eric Prouzet on Unsplash Conclusion We can create objects with various ways of inheritance. We can call the parent’s constructor and incorporate various properties from various objects.
https://medium.com/dev-genius/object-oriented-javascript-multiple-inheritance-ca07df32d137
['John Au-Yeung']
2020-11-19 21:15:47.785000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
2,314
Human Development and Adequate Environment
“You are a product of your environment. So choose the environment that will best develop you toward your objective. Analyze your life in terms of its environment. Are the things around you helping you toward success — or are they holding you back?” (Stone). Our environment affects us all in many different ways. Because we are all human and have different ways of adapting to specific situations, our reactions can be based upon how we were raised and what we have seen throughout our lives. It is not arguable that we, as intellectual beings, adapt; the next question is where the border lies between adaptation, environment, and the result of this adjustment. Suppose our natural habitat were entirely desert: would we have developed in the same way, with the same outcome, as we have now? So, it is unquestionable that nature and its resources play a major role in human development. Natural resources are those visible or invisible things that are produced or given by nature for living on this earth. These natural resources have no source, but their existence means a lot for human beings. Humans can only modify natural resources; we cannot produce them, even though they are essential to growth and invention. Thus, the consumption of these resources should be limited in order to make sure they never run out in the future. Every tiny secret of science that humanity has discovered was buried inside nature’s components. Natural Resources and Humans Air, water, soil, land, sunshine, forests, animals, coal, petroleum, and minerals are natural resources with no source. Air, water, animals, and trees fall into the category of renewable resources, as they can be regenerated by nature, but only if they are used efficiently. On the other side, petroleum, metals, and coal are found in limited quantities and fall under the non-renewable category. These non-renewable resources cannot be reproduced by anyone, either human or nature. Some resources come from other natural resources: wood and oxygen come from trees, petroleum and metals come from the land, solar energy comes from sunshine, wind energy comes from natural wind, and hydroelectric energy comes from water. From birth until death, a human life depends completely on these natural resources, and by consuming them efficiently we ensure their availability in the future. Conservation refers to the process of protection, preservation, management, and restoration of natural environments and their inhabitants. The main objective of sustainable development is to preserve the resources of the environment for use by future generations even after they have been used by the present generation. Hence, to achieve the objective of sustainable development, conservation of the environment is important.
https://medium.com/visions/human-development-and-adequate-environment-e5fd0bf4898a
['Harry Hotmann']
2019-05-09 15:27:08.484000+00:00
['Environment', 'Humans', 'Climate', 'Climate Change', 'Technology']
2,315
Google DeepMind might have just solved the “Black Box” problem in medical AI
Segmentation Video of an OCT Scan/ taken from DeepMind’s original publication The key barrier for AI in healthcare is the “Black Box” problem. For most AI systems, the model is hard to interpret and it is difficult to understand why they make a certain diagnosis or recommendation. This is a huge issue in medicine, for both physicians and patients. Deep Mind’s study published last week in Nature Medicine, presenting their Artificial Intelligence (AI) product capable of diagnosing 50 ophthalmic conditions from 3D retinal OCT scans. Its performance is on par with the best retinal specialists and superior to some human experts. This AI product’s accuracy and range of diagnoses are certainly impressive. It is also the first AI model to reach expert level performance with 3D diagnostic scans. From a clinical point-of-view, however, what is even more groundbreaking is the ingenious way in which this AI system operates and mimics the real-life clinical decision process. It addresses the “Black Box” issue which has been one of the biggest barriers to the integration of AI technologies in healthcare. An optical coherence tomography (OCT) scan of the retina Two Neural Networks DeepMind’s AI system addressed the “Black Box” by creating a framework with two separate neural networks. Instead of training one single neural network to identify pathologies from medical images, which would require a lot of labelled data per pathology, their framework decouples the process into two: 1) Segmentation (identify structures on the images) 2) Classification (analyze the segmentation and come up with diagnoses and referral suggestions) DeepMind’s framework tackles the “Black Box” Problem by having 2 neural networks with a readily viewable intermediate representation (tissue map) in between 1. The Segmentation Network Using a three-dimensional U-Net architecture, this first neural network translates raw OCT scans into tissue maps. It was trained using 877 clinical OCT scans. For each scan’s 128 slices, only about 3 representative ones were manually segmented. This sparse annotation procedure significantly reduced workload and allowed them to cover a large variety of scans and pathologies. The tissue maps identify the shown anatomy (ten layers of the retina) and label disease features (intra-retinal fluid, hemorrhage) and artifacts. This process mimics the typical clinical decision process. It allows physicians to inspect the AI’s segmentation and gain insight into the neural network’s “reasoning”. This intermediate representation is key to the future integration of AI into clinical practice. It is particularly useful in difficult and ambiguous cases. Physicians can inspect and visualize the automated segmentation rather than simply being presented with a diagnosis and referral suggestion. This segmentation technology also has enormous potential in clinical training as it can help professionals to learn to read medical images. Furthermore, it can be used to quantify and measure retinal pathologies. Currently, retinal experts can only eyeball the differences between a present and past OCT scans to objectify disease progression (eg: more intra-retinal fluids). With the AI’s automated segmentation, however, quantitative information such as the location and volume of seen anomalies can be automatically derived. This data can then be used for disease tracking and research, as an endpoint in clinical trials for example. left: a raw OCT scan; middle: manual segmentation; right; automated segmentation 2. 
The Classification Network This second neural network analyzes the tissue-segmentation maps and outputs both a diagnosis and a referral suggestion. It was trained using 14,884 OCT scan volumes from 7,621 patients. Segmentation maps were automatically generated for all scans. Clinical labels were obtained by examining the patients' clinical records in order to determine retrospectively 1) the final diagnosis (after all investigations) and 2) the optimal referral pathway (in light of that diagnosis). The classification network, therefore, takes segmentation maps and learns to prioritize patients' need for treatment into urgent, semi-urgent, routine, and observation-only. It then outputs a diagnosis in the form of probabilities of multiple, concomitant retinal pathologies. Output: predicted diagnosis probabilities and referral suggestions Image Ambiguity and Ensembling Image interpretation and segmentation can be difficult for humans and machines alike due to the presence of ambiguous regions, where the true tissue type cannot be deduced from the image, and thus multiple equally plausible interpretations exist. To overcome this challenge, DeepMind's framework uses an ensemble of 5 segmentation instances instead of 1. Each network instance creates a full segmentation map for the given scan, resulting in 5 different hypotheses. These different maps, just like different clinical experts, agree in areas with clear image structures but may differ in ambiguous low-quality regions. Using this ensemble, the ambiguities arising from the raw OCT scans are presented to the subsequent decision (classification) network. The classification network also has an ensemble of 5 instances, which are applied to each of the 5 segmentation maps, resulting in a total of 25 classification outputs for every scan. Results: The framework achieved an area under the ROC curve that was over 99% for most of the pathologies, on par with clinical experts. As for the referral suggestion, its performance matched that of the five best specialists and outperformed that of the other three. Future: OCT is now one of the most common imaging procedures, with 5.35 million OCT scans performed in the US Medicare population in 2014 alone. The widespread availability of OCT has not been matched by the availability of expert humans to interpret scans and refer patients to the appropriate clinical care. DeepMind's AI solution has the potential to lower the cost and increase the availability of screening for retinal pathologies using OCT. Not only can it automatically detect the features of eye diseases, it also prioritizes patients most in need of urgent care by recommending whether they should be referred for treatment. This instant triaging process should drastically cut down the delay between the scan and treatment, allowing patients with serious diseases to obtain sight-saving treatments in time. “Anytime you talk about machine learning in medicine, the knee-jerk reaction is to worry that doctors are being replaced. But this is not going to replace doctors. In fact it’s going to increase the flow of patients with real disease who need real treatments,” said Dr. Ehsan Rahimy, MD, a Google Brain consultant and vitreoretinal subspecialist in practice at the Palo Alto Medical Foundation.
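To make the 5 x 5 ensembling described above concrete, here is a toy sketch in Python with NumPy. It is not DeepMind's code: the segmentation_models and classification_models callables, the referral labels, and the simple averaging are assumptions meant only to illustrate how 5 segmentation instances and 5 classification instances produce 25 outputs per scan that can then be combined.

import numpy as np

# Hypothetical stand-ins for the trained network instances and referral labels.
# In the real system these would be 3D U-Net segmenters and CNN classifiers.
REFERRAL_CLASSES = ["urgent", "semi-urgent", "routine", "observation-only"]

def ensemble_predict(scan, segmentation_models, classification_models):
    """Combine 5 segmenters x 5 classifiers into averaged class probabilities."""
    outputs = []
    for seg in segmentation_models:          # 5 segmentation hypotheses (tissue maps)
        tissue_map = seg(scan)
        for clf in classification_models:    # 5 classifiers applied to each map
            outputs.append(clf(tissue_map))  # one vector of class probabilities
    probs = np.mean(np.stack(outputs), axis=0)   # average the 25 outputs
    return probs, REFERRAL_CLASSES[int(np.argmax(probs))]

Because each of the 25 outputs comes from a different segmentation hypothesis and classifier instance, disagreement between them widens the spread of the averaged probabilities, which is one simple way to surface the ambiguity the article describes.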
Read more from Health.AI: AI 2.0 in Ophthalmology — Google’s Second Publication Deep Learning in Ophthalmology — How Google Did It Machine Learning and OCT Images — the Future of Ophthalmology Machine Learning and Plastic Surgery AI & Neural Network for Pediatric Cataract The Cutting Edge: The Future of Surgery
https://medium.com/health-ai/google-deepmind-might-have-just-solved-the-black-box-problem-in-medical-ai-3ed8bc21f636
['Susan Ruyu Qi']
2018-08-25 15:15:23.576000+00:00
['Machine Learning', 'Ophthalmology', 'Artificial Intelligence', 'Medicine', 'Health Technology']
2,316
Unsupervised Learning
Unsupervised Learning As you have seen in my previous work on classifying objects in satellite imagery, deep learning is used and does achieve good results, though it depends heavily on the amount of available data, and more precisely on labeled data. Remote sensing creates a huge amount of data every day, most of which lacks annotations. The data complexity and the lack of a well-defined typology make it impossible to use supervised tools. And here is where clustering methods come in. For better results: Using all the bands available For example, Sentinel-2 has up to 13 spectral bands available, and this extra information can bring more clarity to the algorithm and eliminate noise. Below, I will use all three bands available for clustering and compare the results with the clustering obtained using only one band. Conclusions Using all the bands available In order to obtain the results above I have been using: — images from both Planet and Sentinel — Rasterio, NumPy, scikit-learn (KMeans), Matplotlib, Python. One can save the resulting images as GeoTIFFs (using either rasterio or GDAL), and they can be imported into a GIS platform for further study and classification.
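As a rough illustration of the workflow described above, here is a minimal sketch that clusters all bands of a scene with K-Means and writes the cluster map back out as a GeoTIFF. The file name scene.tif and the choice of six clusters are assumptions for illustration, not values from the original post.

import numpy as np
import rasterio
from sklearn.cluster import KMeans

# Hypothetical input path; any multi-band GeoTIFF from Planet or Sentinel-2 would do.
with rasterio.open("scene.tif") as src:
    bands = src.read()            # shape: (n_bands, height, width)
    profile = src.profile

n_bands, height, width = bands.shape
# Stack every pixel's band values into rows: (n_pixels, n_bands)
pixels = bands.reshape(n_bands, -1).T

kmeans = KMeans(n_clusters=6, random_state=0).fit(pixels)   # 6 clusters is illustrative
clusters = kmeans.labels_.reshape(height, width).astype(np.uint8)

# Save the cluster map as a single-band GeoTIFF for inspection in a GIS platform.
profile.update(count=1, dtype=rasterio.uint8)
with rasterio.open("clusters.tif", "w", **profile) as dst:
    dst.write(clusters, 1)

Using more bands simply adds columns to the pixels array, so the same few lines cover the one-band and all-band comparisons the post describes.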
https://medium.com/dataseries/unsupervised-learning-cd0b78adbf7c
['Daniel Moraite']
2020-06-01 08:33:58.317000+00:00
['Satellite Technology', 'Unsupervised Learning', 'Clustering', 'Satellite Imagery', 'Machine Learning']
2,317
Level Up Your Mobile Data Privacy
Introduction First of all, I should say that the idea of writing this article struck me a long time ago. The thought “C’mon these things are elementary, everybody should know these” kept me wondering for a while. But, here you are reading this right now, which means I finally made up my mind and gathered time to finish it off. In this article, I am aggregating most of the features that you might already know. That will help you increase your data security for most commonly used day to day mobile apps and services — at the same time, pointing out the common pitfalls which might render all your security configurations worthless. What is Security? What is security? Let’s get that cleared out next. Think of the door lock at your home, while it prevents someone without the key from unlocking, it can’t help if someone picked the lock or used a crowbar to break the door open. Similarly, in the digital world, while password protecting your sensitive data will provide adequate security under most circumstances, it won’t be able to prevent someone from accessing your data 100%. That’s where additional security steps such as two-factor authentication, identification of unusual behavior, or unknown devices come in. But will that be enough? Anyway, the latter is more towards alerting the genuine user in case the security has been breached. In case someone else is trying to access data, you can take immediate actions to rectify the situation if that wasn’t you who logged in (Yeah yeah, all those emails and SMS you get when you logged in to your email account from a new device.). So that is similar to someone breaking the door lock and entering your home, so you will at least get to know that your home has been intruded once you get back home. Call me paranoid if you want, or if you think, “It won’t happen to me.” Once I take you through the rest of the article, I will leave you doubting yourself. Back to the same door lock analogy, what if you locked the door but left a window open, scared? Yes, that’s what I am going to discuss, not about how your home can get robbed, but how your data can get exposed due to plotholes in your security configurations. Sensitive Data??? Next thing we need to agree upon before proceeding is what sensitive data is. That is relative to each person. If you have no sensitive data, you wouldn’t be here reading this anyway, right. Unfortunately or fortunately we all have something to secure or protect, maybe your bank account details, personal media or your top-secret office work. Why I am calling this relative is that you won’t consider your bank details as sensitive data if only a few cents are remaining in your account. In that case, you should consider making your bank details publicly available, so there is a possibility that someone might sympathize and make a transfer. Or would you leave those glamor shots of you or your partner (Let’s just accept the fact that it happens more than we think) just hanging around in your phone? Or email for someone else to see unless you are a model or a pornstar who is planning to, or already published them in social media? Got the point? The less we talk, the more it happens. So here we are. 1. Two Factor Authentication Let’s get started with what you know already. You are probably using Facebook, Gmail, Whatsapp daily. If you have not enabled two-factor authentication in any of these apps, it’s time you do so. 
I am not planning to dive into the details of “How to” in any of the security configurations I am going to mention; since It’s going to make this lengthy, and there are more than enough tutorials and help available on the internet. That’s it? No, we are just beginning here. 2. Hide Notification Content in Lock Screen Your mobile phone is attached to your life, or rather your mobile phone is your life. What if some attacker gets their hands on your phone? Even though you have locked your phone using some locking mechanism (password, pin, face lock, fingerprint, etc), If your messages and notifications popup on the lock screen, there is a possibility that the attacker could read the messages and notifications to get an OTP, Recovery link or passwords into any of your accounts. Most smartphones provide you with the settings to disable content being visible on the lock screen. You may choose to hide the content if you need additional security using those settings. 3. Use SIM card lock Do you feel safe enough? What if the attacker can remove the sim and insert it into another phone and receive text messages, use your number to get access to your accounts or make calls. Never thought of that? Well, if you didn’t know, every sim has the option of enabling pin lock, which will pop up whenever you insert the sim into a phone or when you are restarting your device. Same as encrypting your phone with a password, the sim lock helps to almost prevent someone from using your sim card. 4. Face lock or Fingerprint lock? Speaking of phone locks, how many of you are using patterns, passwords, pins to lock your phone? I don’t know about you; I couldn’t help but notice how many people use their patterns or pins in public places or public transport to unlock the phone to check in to Facebook or Whatsapp. Your screen lock pattern or PIN is something you will be using regularly regardless of where you are. If you are in a very crowded place, it would be challenging to avoid prying eyes from noticing your PIN or the password. This is where face lock and fingerprint lock features would come in handy. While face lock is the latest tech and is cool to have, there are situations where someone else having similar face features could unlock your phone. Hence I prefer to have the fingerprint lock enabled and face lock disabled. People have come up with all sorts of hacks to bypass these security features. As technology advances with time, security technologies we consider safe right now will become just another set of fancy features that look cool. But, as of right now, I believe we don’t have to worry beyond the security measures we are discussing in this article unless you are an undercover agent holding confidential government information, or an international terrorist. Digging Deeper What we discussed above will provide adequate security to ensure your mobile data doesn’t end up in the wrong hands. Let’s dig deeper and take a peek at measures we can take to minimize the exposure in the unfortunate situation where an attacker manages to access your data. 5. Image Caches The best way to secure your data is not to save your data at all. Confused? All your images are cached in the storage for performance. Even when you deleted the original file, the cached images might still be intact in the storage. So it is better to make sure that once you delete a file you get rid of the cached images as well. Speaking of deleting files, in modern operating systems you will be able to undo the delete operation. 
This feature is possible only because the files you delete are not immediately removed from storage; they are moved to a separate location so that, if you decide to retrieve them, you can recover them from where your deleted files are kept. While making a note on that, next I would like to bring your attention to WhatsApp. Being an app used by millions of daily users that provides end-to-end encryption for messages, we are led to believe that our messages are secure and our privacy is preserved. However, even though end-to-end encryption adds resistance against man-in-the-middle attacks, it is not effective against someone who directly extracts the WhatsApp database from your phone. Yes, it's possible to extract the WhatsApp encryption key. Even though the WhatsApp key is stored in a sandbox, it can eventually be used to decrypt all stored and backed-up messages. I will spare the details of how it can be done, as it is not related to this article. Maybe we will follow that up in a later article. But one thing you should take note of is that all multimedia files you send across WhatsApp get stored in a folder named "sent" inside the WhatsApp media directories, and they are not visible in the image gallery. Therefore, there is a chance you have left some private data in these directories without knowing it is there, eating up your storage space and making your data vulnerable, even though you believe you have removed it permanently. Some cleaning apps may help you detect and remove these files. Otherwise, you may delete them manually to make sure they no longer exist in your storage. 6. Encrypt Confidential Data In case you have decided you need to store some of your private data for some reason, it's better to keep it encrypted to avoid any data leaks. There are various encryption tools at your disposal on the internet. But be sure to use a secure encryption algorithm and to protect your passwords and keys. One thing I would like to highlight: be sure to wipe the plain data after encrypting. Yes, read that again; not "delete", but wipe your plain data after encrypting it. When you encrypt your data, a new encrypted file is created while the original file is left intact. If you simply leave it as it is, or just delete it using a generic file removal command, there is no point in encrypting, since it is possible to recover deleted data. I am going to write articles on data storage and how data recovery works in detail. For now, I will end this section with a brief introduction to what wiping is, in case you haven't heard of it already. 7. Wiping Your Storage Sorry to break it to you; this might seem like your whole belief about data storage is falling apart, but the truth is that when you delete files from a storage system, the file system doesn't remove the records entirely. At least, not immediately. That's what makes it possible for recovery software to recover your lost or deleted files. So if you are considering encrypting your data, or selling your used phone or laptop, it is better to reset your device and wipe it clean (free-space wiping) before giving it away.
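To make the encrypt-then-remove idea concrete, here is a minimal Python sketch using the cryptography package's Fernet recipe. The file name secret.txt is a placeholder, and the final os.remove is an ordinary delete: as stressed above, really wiping the plaintext from the underlying storage needs a separate free-space or secure-wipe step.

import os
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe (password manager, hardware key, etc.).
key = Fernet.generate_key()
fernet = Fernet(key)

# Read the plaintext file and write an encrypted copy next to it.
with open("secret.txt", "rb") as f:          # placeholder file name
    plaintext = f.read()

with open("secret.txt.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Plain deletion only unlinks the file; recovery tools may still find its contents.
# A secure-wipe step (overwriting free space) is needed for real protection.
os.remove("secret.txt")

# Later, decrypt with the same key:
with open("secret.txt.enc", "rb") as f:
    recovered = fernet.decrypt(f.read())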
Let’s discuss data recovery in detail in my next article, “HDD, SSD and Data Recovery.” Conclusion While I have just introduced and presented basic security features and how to configure them in a way that they complement each other, it’s up to the user to decide what is level of security you need to implement on your data. So some of the features I have mentioned might sound a bit overkill, but you know, no point in closing the stable once the horse is gone. So plan your security configurations. Protect your data. Ultimately there is nothing unbreakable or unbreachable about security; it’s just a matter of time and effort at disposal. Right now, we have managed to keep it in control up to some level. Let’s hope it stays that way. Feel free to express your feedback and any more features you might know of that might not include in here. After all, this is all about sharing knowledge and helping each other.
https://medium.com/@umayangag/level-up-your-mobile-data-privacy-5338a4781949
['Umayanga Gunawardhana']
2020-03-12 06:16:26.664000+00:00
['Privacy', 'Storage', 'Mobile', 'Security', 'Technology']
2,318
12 Questions on the Future of Design
Dong-Ping Wong (Family): Buildings, mostly. Jasmina Grase: I design everyday objects with a moment of discovery. Sustainable improvements to our fast lifestyle. Right now I’m a co-founder of “MIITO” and the studio “Chudy & Grase.” Tim O’Reilly: I make books, conferences, and big ideas that help organize communities of practice — ideas like “open source,” “Web 2.0,” “makers,” or “open government” (“government as platform”). Colin Raney (Formlabs): We design/make 3D printers. Grase, Wong, Raney, O’Reilly, and Waldman Matthew Waldman: I created NOOKA Inc. to bring the revolution of interface design to fashion and objects. That said, I’m best known for my timepieces that express the ideas of universal global language. Emily Fischer (Haptic Lab): Physical projects and textiles that explore haptics — the body’s sense of touch. Eric J. Winston (SFDS): Fabrication and design. Hidden (John van den Nieuwenhuizen & Vitor Santa Maria): We design poetically simple audio projects. Fischer, Desbiens, Winston, the Hidden team, and Perry Justin Nagelberg (Parallelogram): I design objects that try and explore form and function in new, unexpected, and ideally more interesting or useful ways. Nicholas Desbiens: I am an architect, computational designer, and, most recently, the creator of fahz, the 3D printing project that lets you put your face in a vase. Marco Perry: Pensa is a consulting firm that works with startups and Fortune 500 firms for just about any consumer project. We spun off another company that makes the D.I.Wire, the first desktop CNC wire bender. Bapu, Thrift, Danzico, and Nagelberg Pavan Bapu: I’m the co-founder of a consumer electronics startup called Gramovox, and we reimagine vintage audio design with modern technology. Liz Danzico: Chair of the MFA Interaction Design program at the School of Visual Arts, and creative director at NPR. Scott Thrift (The Present): I’m an artist and I produce work on time.
https://medium.com/kickstarter/12-questions-on-the-future-of-design-f4ef3d69375f
[]
2018-05-09 15:37:38.774000+00:00
['Industrial Design', 'Design', 'Technology', 'Graphic Design', 'Information Design']
2,319
Manipulating Data With Django Migrations
What's Under the Hood? Django migrations are Python files that help you add and change things in your database tables to reflect changes in your Django models. To understand how Django migrations help you work with data, it may be helpful to understand the underlying structures we're working with. What's a database table? If you've laid eyes on a spreadsheet before, you're already most of the way to understanding a database table. In a relational database — for example, a PostgreSQL database — you can expect to see data organized into columns and rows. A relational database table may have a set number of columns and any number of rows. In Django, each model is its own table. For example, here's a Django model: from django.db import models class Lunch(models.Model): left_side = models.CharField(max_length=100, null=True) center = models.CharField(max_length=100, null=True) right_side = models.CharField(max_length=100, null=True) Each field is a column, and each row is a Django object instance of that model. Here's a representation of a database table for the Django model, Lunch, above. In the database, its name would be lunch_table. A table representing lunch_table. The model Lunch has three fields: left_side, center, and right_side. One instance of a Lunch object would have Fork for the left_side, Plate for the center, and Spoon for the right_side. Django automatically adds an id field if you don't specify a primary key. If you wanted to change the name of your Lunch model, you'd do so in your models.py code. For example, change Lunch to Dinner, and then run python manage.py makemigrations. You'll see: python manage.py makemigrations Did you rename the backend.Lunch model to Dinner? [y/N] y Migrations for 'backend': backend/migrations/0003_auto_20200922_2331.py - Rename model Lunch to Dinner Django automatically generates the appropriate migration files. The relevant line of the generated migrations file in this case would look like: migrations.RenameModel(old_name="Lunch", new_name="Dinner"), This operation would rename our Lunch model to Dinner while keeping everything else the same. But what if you also wanted to change the structure of the database table itself and its schema? Maybe you'd also like to make sure that existing data ends up in the right place on your Dinner table. Let's explore how to turn our Lunch model into a Dinner model that looks like this: from django.db import models class Dinner(models.Model): top_left = models.CharField(max_length=100, null=True) top_center = models.CharField(max_length=100, null=True) top_right = models.CharField(max_length=100, null=True) bottom_left = models.CharField(max_length=100, null=True) bottom_center = models.CharField(max_length=100, null=True) bottom_right = models.CharField(max_length=100, null=True) Its database table would look like this:
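The article asks how to make sure existing data ends up in the right place on the Dinner table. A minimal sketch of one way to do that with a data migration follows. It assumes the data step runs after the new columns are added but before the old ones are removed, the mapping of the three old fields onto the bottom row of the new table is an illustrative choice rather than something from the article, and the dependency name is taken from the makemigrations output shown above.

from django.db import migrations

def copy_lunch_to_dinner(apps, schema_editor):
    # Use the historical model so this migration keeps working as models evolve.
    Dinner = apps.get_model("backend", "Dinner")
    for dinner in Dinner.objects.all():
        # Assumed mapping: old left/center/right values move to the bottom row.
        dinner.bottom_left = dinner.left_side
        dinner.bottom_center = dinner.center
        dinner.bottom_right = dinner.right_side
        dinner.save()

class Migration(migrations.Migration):
    dependencies = [
        ("backend", "0003_auto_20200922_2331"),
    ]
    operations = [
        migrations.RunPython(copy_lunch_to_dinner, migrations.RunPython.noop),
    ]

RunPython runs the function as part of the migration graph, and passing migrations.RunPython.noop as the reverse function keeps the migration reversible even though the data copy itself is not undone.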
https://medium.com/better-programming/manipulating-data-with-django-migrations-93833b69e9c
['Victoria Drake']
2020-09-18 22:08:59.291000+00:00
['Programming', 'Software Development', 'Data Science', 'Technology', 'Learning To Code']
2,320
Review: Amazon Kindle Kids Edition Is the Best eReader Value
By Sascha Segan Don’t be fooled by the name: Amazon’s new Kindle Kids Edition isn’t just for kids. As I was testing it, I realized there’s no reason for adults not to buy it. It’s a $109.99 bundle of hardware and services that makes sense for everyone. There is nothing inherently locked-down about the kids’ unit; turn off the parental controls, and it’s just a Kindle. While it isn’t our favorite Kindle overall — that remains the Paperwhite — the kids’ Kindle represents the best value for your eReader dollar. Pros: Excellent value. Parental dashboard lets you monitor your child’s reading and control their Amazon product use. Cons: Not waterproof. FreeTime Unlimited pushes picture books that don’t look great on the 6-inch screen. Bottom Line: No matter your age, the Amazon Kindle Kids Edition is a terrific value for anyone looking to buy an ebook reader. What You Get Like other Amazon kids’ products, the Kindle Kids Edition isn’t actually new hardware—it’s a bundle of existing hardware and services at a discount. For your money, you get a base-model Kindle without ads ($109.99); a cute, kid-friendly case ($29.99); a two-year, no-questions-asked warranty ($14.99); and a year of FreeTime Unlimited or a year added to your subscription if you have one ($35.88 for a one-child plan). That means you get $190.85 of product for $109.99, which is a pretty solid deal. Even if you discount FreeTime Unlimited, you still come out ahead. You get four case options: a blue one, a pink one, one with cute little creatures, and one with birds on a wire. I love the birds-on-a-wire case. It’s gender-neutral and totally adorable. As mentioned, the Kindle itself is Amazon’s base model, which has a front light for night reading, Bluetooth support for audiobooks, and a just-okay 167ppi screen. It’s not our Editors’ Choice in large part because it isn’t waterproof; the latest Paperwhite is, and a lot of people want to read in the bath, by the pool, or on the beach. The Paperwhite also has a flat front panel so sand and other gunk doesn’t get caught around the screen. A Paperwhite without ads plus a case and a two-year warranty costs $205. What About the Kids? I’m happy making a Kindle a regular part of my child’s reading diet. Kids go through books fast, and we have limited shelf space in our house. We definitely supplement with the public library, but that doesn’t always have what we want, especially if it’s the latest book in a hot series. (Also, we have a running problem with library fines.) It’s important to know what a Kindle is good for, though. This Kindle is best for chapter books, not picture books, comics, or picture-heavy books like the Geronimo Stilton series. The black-and-white, six-inch screen is just a little too small for most picture books, comics, and manga. For picture books and comics, you need a tablet, and for manga, I recommend the Kobo Libra H2O. There are exceptions. Amazon’s subscription service includes the terrific Phoebe and Her Unicorn series, which uses four large, clear panels per page and renders beautifully on the six-inch Kindle. But a manga like Yotsuba&! has text that’s just too small to read here. This is actually fine at our house, where we sometimes have to strong-arm our child to read chapter books. Saying, “You can read, but only on this device that properly renders only chapter books,” works out pretty well. Amazon’s subscription service has appealing books, but many of them have graphic elements that don’t render well on this Kindle’s small screen. 
Freeing Your Child to Read FreeTime Unlimited is a key element of the kids’ Kindle experience; it’s a subscription service that offers a bunch of popular book series. If your child is into these series, and you don’t have your act together to get them out of the public library, it pays for itself pretty quickly. FreeTime Unlimited has Fancy Nancy, Harry Potter, Narnia, and Warriors. It doesn’t have Emily Windsnap, His Dark Materials, or Wings of Fire. It has a lot of TV and video game spinoffs, which I don’t care for, but a lot of elementary school kids like to cut their teeth on. In my mind, a little too much of its library is focused on picture books or picture-enhanced books. Amazon’s goal here, of course, is to get you to also buy a Fire tablet (such as the Fire 7) for your picture books, and you can use that tablet with the same FreeTime subscription. The Kindle plugs into an overall parental dashboard that lets you monitor your child’s reading and activities and gives you some quick talking points about certain books so you can have discussions about them. Pop over to the parent dashboard on your phone or the web, and you’ll see which books your child last read, for how many minutes, and on which days, along with whatever else they’ve been doing on their Amazon devices. You can add content from your own adult library to their profile, disable all devices for a while, or set time limits. I’d like it to be even more granular, letting you toggle on and off specific content on specific devices, but it’s a start. There are also some kid-friendly Kindle features that apply to all Kindles now, like the ability to press on words to get dictionary definitions, the ability to create flash cards from those definitions, and Word Wise, which pops definitions up over complex words automatically. If a kid would rather listen than read, there’s a collection of Audible audiobooks in FreeTime Unlimited, but you’ll need a Bluetooth speaker or headphones. And at the end of the day, this is just a Kindle. When your child isn’t reading, you can pull it out of kids’ mode by entering a password and use it with your regular library. As we mentioned, though, the base Kindle isn’t our favorite. The Paperwhite, with its smaller dot pitch, has sharper text and thus is easier on my tired old eyes. But your kids probably won’t mind. Word Wise helps kids understand the meaning of more difficult words. The Right Reader for Your Kids? The Amazon Kindle Kids Edition is a terrific ebook reader value for children and adults alike. No other eReader provides this overall package at this price, and if your budget is limited, this is a very good buy. That said, there are better ebook readers at higher prices. I strongly prefer a waterproof model, like the $130 Kindle Paperwhite or the $170 Kobo Libra H2O. Amazon sells most of the ebooks in the US, and if you have an Amazon library, then you’re probably getting a Kindle. Kobo models shine if your primary use is borrowing from a public library or reading books from non-Amazon sources. The good news is that these are all strong choices, and we can all stand to read a little more.
https://medium.com/pcmag-access/review-amazon-kindle-kids-edition-is-the-best-ereader-value-5b717c81d5bc
[]
2019-11-01 13:52:57.797000+00:00
['Gadgets', 'Kindle', 'Technology', 'Kids', 'Parenting']
2,321
The world’s most densely populated cities are not its “most livable.” Can they be, and for how long?
Written by Move Forward This article was originally published on October 5, 2017 It’s long been felt that urban density — a term used by city planners, politicians, economists, sociologists — has an inverse relationship with quality of life. As population and density rise, transportation problems multiply, and quality of life goes down. Right? The world’s ten most densely populated large cities (more than 1 million in population) are all outside of the U.S.: 1. Manila (Philippines, 108,000 people per square mile) 2. Mumbai (India, 74,000) 3. Dhaka (Bangladesh, 73,600) 4. Cairo (Egypt, 67,000) 5. Chennai (India, 63,000) 6. Kolkata (India, 63,000) 7. Bandung (Indonesia, 48,000) 8. Quezon City (Philippines, 46,000) 9. Paris (France, 56,000) 10. Caloocan (Philippines, 73,000) High density and transportation headaches are interconnected Manila has long been considered the world’s most congested city (Forbes Magazine). And according to Waze’s Global Driver Satisfaction Index, with its notorious traffic jams and crippling density, Manila owns the honor of being the city with the world’s worst traffic. On the positive side, it was the first city in Southeast Asia to operate a public light rail system, which has a daily ridership of 700,000. Expansion of that system, along with several other transportation projects, is at the top of the city’s ‘Metro Manila Dream Plan,’ a long-term vision for mitigating the city’s transportation woes by the 2030s. And looking beyond the top-ten density list, it’s apparent that the densest cities are clustered within countries throughout the developing world. The United Nations Department of Economic and Social Affairs predicts that by 2050 an unprecedented 66% of the world’s population will live in urban centers (compared to about 54% today). As the world continues to ‘urbanize,’ cities mostly in the developing world will continue to become denser and larger, and attract residents from lower-density outlying areas. Cities urbanize and densify around employment opportunity. In the 20th and 21st centuries (so far), production and manufacturing of mass goods has required a critical mass of labor nearby. However, there’s evidence that as the 21st century progresses further, emerging digital technologies like 3-D printing will disrupt and ultimately render obsolete the need for mass labor. High-density may be the opposite of high livability Not surprisingly, none of the most densely-populated cities are within the World Economic Forum’s list of the world’s most innovative countries. And, none are within the WEF’s list of cities with the highest quality of life. These cities were among 450 worldwide ranked annually by Mercer, one of the world’s largest HR consultancies, on metrics ranging from political and social stability, crime and law enforcement, economics, medical and health considerations, education, availability of public services, infrastructure and amenities, recreation, housing, and more. Here’s the top-10: 10. (Tie) Basel (Switzerland, 19,000 people per square mile), Sydney (Australia, 1,000) 9. Copenhagen (Denmark, 22,900) 8. Geneva (Switzerland, 32,000) 7. Frankfurt (Germany, 7,600) 6. Dusseldorf (Germany, 7,300) 5. Vancouver (Canada, 14,000) 4. Munich (Germany, 12,000) 3. Auckland (New Zealand, 6,900) 2. Zurich (Switzerland, 12,000) 1. Vienna (Austria, 11,205) Each of the world’s most livable cities is a culturally vibrant economic powerhouse for its respective country. 
And, each one is in the density ‘Goldilocks zone.’ Vienna, Austria, with its mid-range density of 11,205 people per square mile has topped the list for seven consecutive years. Wow. And each of these cities has invested heavily in modern transportation infrastructure, as one of the key factors contributing to a high degree of livability is ease of getting around. None are so-called ‘megacities’ (Manila, Mumbai, Kolkata), which are predicted to lose much of their economic cache during the 21st century, as digital technologies disrupt much of their reason for being. In the U.S., where economists and politicians tend to equate population growth and density with economic vitality, the highest-density cities are also some of the biggest: New York, NY (27,000 people per square mile), San Francisco (17,000), Boston (12,800), Chicago (11,800) and Philadelphia (11,400). But population growth and density are not always concomitant with economic vitality. Take for example, Chicago, whose population density is very similar to Vienna’s. Chicago is home to the nation’s worst traffic bottleneck, a 12-mile stretch of Interstate 90 that runs through the city. Chicago saw its population decline by about 210,000 (-7.2%) between the years 2000 and 2010. And during that time, while the city’s urban flight was comprised of mostly lower-income residents, Chicago’s downtown and close-in neighborhoods were booming with new well-educated denizens as the city became a renewed magnet for large corporate headquarters. The city’s economy was strengthening and the population was actually becoming more affluent, even as it was shrinking and declining in density. Yet the congestion on I-90 has not abated. Perhaps with Chicago’s growing affluence will come greater investment in traffic-reducing infrastructure. Either that, or emerging technologies will ultimately render the city’s reason for existence obsolete.
https://medium.com/move-forward-blog/the-worlds-most-densely-populated-cities-are-not-its-most-livable-b3896eaa18a1
['Move Forward']
2019-05-29 21:21:35.124000+00:00
['Mobility', 'Expanding Access', 'Transportation', 'Moving People', 'Technology']
2,322
Solutions Architect Tips — The 5 Types of Architecture Diagrams
Solutions Architect Tips — The 5 Types of Architecture Diagrams Photo by Kelly Sikkema on Unsplash. Have you ever been in a meeting where someone is trying to explain how a software system works? I was having a conversation with a relatively new solutions architect who was trying to describe a system they had come up with. It had about eight different components to it and they all interacted with each other in multiple ways. They were explaining the solution using hand gestures and a lot of “and this piece communicates with this one by…” I understood the words coming out of their mouth, but they didn’t make sense strung together. Words get lost when explaining complex conceptual architecture. I was trying to build a mental model while following a train of thought. I needed a visual. I needed a diagram. But not just any diagram. Architecture diagrams are not a “one size fits all” solution. We’ve discussed recently that a significant part of being a solutions architect is effectively communicating your ideas to both technical and non-technical audiences. Your diagrams must take that into consideration. If you want to get your idea across to different sets of people, you must make multiple versions of your diagrams. Today, we are going to talk about the five different types of diagrams you should make depending on five different audiences. We will take an example from my fake business but real API, Gopher Holes Unlimited, where we add a new gopher into the system to be tracked.
https://betterprogramming.pub/solutions-architect-tips-the-5-types-of-architecture-diagrams-eb0c11996f9e
['Allen Helton']
2021-03-30 11:59:01.289000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Careers', 'Technology']
2,323
Teens Are Addicted to Socializing, Not Screens
Teens Are Addicted to Socializing, Not Screens Screenagers in the time of coronavirus Photo: Rebecca Nelson/Getty Images If you’re a parent trying to corral your children into attending “school” online, you’ve probably had the joy of witnessing a complete meltdown. Tantrums are no longer the domain of two-year-olds; 15-year-olds are also kicking and screaming. Needless to say, so are the fortysomethings. Children are begging to go outside. Teenagers desperately want to share physical space with their friends. And parents are begging their kids to go online so that they themselves can get some downtime. These are just some of the ways in which today’s reality seems upside down. I cannot remember a period in my research when parents weren’t wringing their hands about kids’ use of screens. I started studying teenagers’ use of social media in the early 2000s when Xanga and LiveJournal were cool. I watched as they rode the waves of MySpace and Facebook, into the realms of Snap and Instagram. My book It’s Complicated: The Social Lives of Networked Teens unpacks some of the most prevalent anxieties adults have about children’s use of technology, including the nonstop fear-inducing message that children are “addicted” to their phones, computers, and the internet. Needless to say, I never imagined how conditions might change when a global pandemic unfolded. I cannot remember a period in my research when parents weren’t wringing their hands about kids’ use of screens. The tone that parents took paralleled the tone their parents took over heavy metal and rock music, the same one their grandparents had when they spoke of the evils of comic books. Moral panics are consistent — but the medium that the panic centers on changes. Still, as with each wave of moral panic, there’s supposedly something intrinsic to the new medium that makes it especially horrible for young people. Cognizant of this history and having gone deep on social media activities with hundreds of teenagers, I pushed back and said that it wasn’t the technology teens were addicted to; it was their friends. Adults rolled their eyes at me, just as their teens rolled their eyes at them. Now, nearly a month into screen-based schooling en masse, I’ve gotten to witness a global natural experiment like none I ever expected. What have we learned? The majority of young people are going batshit crazy living a life wholly online. I can’t help but think that Covid-19 will end up teaching all of us how important human interaction in physical space is. If this goes on long enough, might this cohort end up going further and hating screens? Until the world started sheltering in place, most teens spent the majority of their days in school, playing sports, and participating in other activities, almost always in physical spaces with lots of humans co-present. True physical privacy is a luxury for most young people whose location in space is heavily monitored and controlled. Screens represented a break from the mass social. They also represented privacy from parents, an opportunity to socialize without parents lurking even when their physical bodies were forced to be at home. Parents hated the portals that kids held in their hands because their children seemed to disappear from the living room into some unknown void. That unknown void was those children’s happy place — the place where they could hang out with their friends, play games, and negotiate a life of their own. Now, with Covid-19, schools are being taught through video. Friends are through video. 
Activities are through video. There are even videos for gym and physical sport. Religious gatherings are through video. Well-intended adults are volunteering to step in and provide more video-based opportunities for young people. TV may have killed the radio star, but Zoom and Google Hangouts are going to kill the delight and joy in spending all day in front of screens. The majority of young people are going batshit crazy living a life wholly online. Fatigue is setting in. Sure, making a TikTok video with friends is still fun, but there’s a limit to how much time anyone can spend on any app — even teens. Give it another month and there will be kids dropping out of school or throwing their computers against the wall. (Well, I know of two teens who have already done the latter with their iPads.) Young people are begging to go outside, even if that means playing sports with their parents. Such things might not be surprising for a seven-year-old, but when your 15-year-old asks to play soccer with you, do it! As a child of the ‘80s, I was stunned during my fieldwork to learn that most contemporary kids didn’t find ways to sneak out of the house once their parents were asleep because going online was so much easier. I can’t help but wonder if sneaking out is becoming a thing once again. As we’re all stuck at home, teens are still doing everything possible to escape into their devices to maintain relationships, socialize, and have fun. Their shell-shocked parents are ignoring any and all screen time limitations as they too crave escapism (people who study fortysomethings: explain Animal Crossing to me!!?). But when physical distancing is no longer required, we’ll get to see that social closeness often involves meaningful co-presence with other humans. Adults took this for granted, but teens had few other options outside of spaces heavily controlled by adults. They went online not because the technology is especially alluring, but because it has long been the most viable option for having meaningful connections with friends given the way that their lives have been structured. Maybe now adults will start recognizing what my research showed: youth are “addicted” to sociality, not technology for technology’s sake.
https://onezero.medium.com/teens-are-addicted-to-socializing-not-screens-a26da5d92983
['Danah Boyd']
2020-04-14 05:31:01.024000+00:00
['Teenagers', 'Screentime', 'Technology', 'Digital Life', 'Social Media']
2,324
Stories on ILLUMINATION-Curated — All Volumes
Archives of Collections — Volumes Stories on ILLUMINATION-Curated — All Volumes Easy access to curated and outstanding stories Photo by Syed Hussaini on Unsplash ILLUMINATION-Curated is a unique collection, consists of edited and high-quality stories. Our unique publication hosts outstanding and curated stories from experienced and accomplished writers of Medium. We compile and distribute stories submitted to ILLUMINATION-Curated daily. Our top writers make a great effort to create outstanding stories, and we help them develop visibility for their high-quality content. The purpose of this story is to keep volumes in a single link for easy access. As a reference material, we also provide a link to all editorial resources of ILLUMINATION-Curated in this post. Our readers appreciate the distribution lists covering stories submitted to ILLUMINATION-Curated daily. The daily volumes make it easy to access the articles and discover our writers. Some readers are closely following specific writers that they found in these circulated lists. This archive version can be a useful resource for researchers and those who are studying specific genres. We cover over 100 topics. This story allows our new writers to explore stories of our experienced writers and connect with them quickly and meaningfully. ILLUMINATION-Curated strives for cross-pollination. Writers learn from each other by collaborating. Our writers do not compete; instead, they enhance and extend each other’s messages. Customised Image courtesy of Dew Langrial 07 December 2020 06 December 2020 05 December 2020 04 December 2020 03 December 2020 02 December 2020 01 December 2020 30 November 2020 29 November 2020 28 November 2020 27 November 2020 26 November 2020 25 November 2020 24 November 2020 23 November 2020 22 November 2020 21 November 2020 20 November 2020 19 November 2020 18 November 2020 17 November 2020 16 November 2020 15 November 2020 14 November 2020 13 November 2020 12 November 2020 11 November 2020 10 November 2020 09 November 2020 08 November 2020 07 November 2020 06 November 2020 05 November 2020 04 November 2020 03 November 2020 02 November 2020 01 November 2020 30 October 2020 29 October 2020 28 October 2020 27 October 2020 26 October 2020 25 October 2020 24 October 2020 23 October 2020 22 October 2020 21 October 2020 20 October 2020 19 October 2020 18 October 2020 17 October 2020 16 October 2020 15 October 2020 14 October 2020 13 October 2020 12 October 2020 11 October 2020 10 October 2020 09 October 2020 08 October 2020 07 October 2020 06 October 2020 05 October 2020 04 October 2020 03 October 2020 02 October 2020 01 October 2020 30 September 2020 29 September 2020 28 September 2020 27 September 2020 26 September 2020 25 September 2020 24 September 2020 23 September 2020 22 September 2020 21 September 2020 20 September 2020 19 September 2020 Editorial Resources About ILLUMINATION Curated
https://medium.com/illumination-curated/stories-on-illumination-curated-627b289571b4
[]
2020-12-08 18:04:45.367000+00:00
['Business', 'Technology', 'Self Improvement', 'Science', 'Writing']
2,325
A quick guide to Redis Lua scripting
Prerequisites You should have Redis installed on your system to follow this guide. It might be helpful to check the Redis commands reference while reading. Why do I need Lua Scripts? In short: performance gain. Most tasks you do in Redis involve many steps. Instead of doing these steps in the language of your application, you can do it inside Redis with Lua. This may result in better performance. Also, all steps within a script are executed in an atomic way. No other Redis command can run while a script is executing. For example, I use Lua scripts to change JSON strings stored in Redis. I describe this in detail closer to the end of this article. But I don't know any Lua A person who doesn't know any Lua Don't worry, Lua is not very hard to understand. If you know any language of the C family, you should be okay with Lua. Also, I am providing working examples in this article. Show me an example Let's start by running scripts via redis-cli. Start it with: redis-cli Now run the following command: eval "redis.call('set', KEYS[1], ARGV[1])" 1 key:name value The EVAL command is what tells Redis to run the script which follows. The "redis.call('set', KEYS[1], ARGV[1])" string is our script, which is functionally identical to the Redis set command. Three parameters follow the script text: Number of provided keys Key name First argument Script arguments fall into two groups: KEYS and ARGV. We specify how many keys the script requires with the number immediately following it. In our example, it is 1. Immediately after this number, we need to provide these keys, one after another. They are accessible as the KEYS table within the script. In our case, it contains a single value key:name at index 1. Note that Lua indexed tables start with index 1, not 0. We can provide any number of arguments after the keys, which will be available in Lua as the ARGV table. In this example, we provide a single ARGV argument: the string value. As you already guessed, the above command sets the key key:name to value. It is considered a good practice to provide keys which the script uses as KEYS, and all other arguments as ARGV. So you shouldn't specify KEYS as 0 and then provide all keys within the ARGV table. Let's now check if the script completed successfully. We are going to do this by running another script which gets the key from Redis: eval "return redis.call('get', KEYS[1])" 1 key:name The output should be "value", which means that the previous script successfully set the key "key:name". Can you explain the script? Doge after seeing the script above Our first script consists of a single statement: the redis.call function: redis.call('set', KEYS[1], ARGV[1]) With redis.call you can execute any Redis command. The first argument is the name of this command followed by its parameters. In the case of the set command, these arguments are key and value. All Redis commands are supported. According to the documentation: Redis uses the same Lua interpreter to run all the commands Our second script does a little more than just running a single command — it also returns a value: eval "return redis.call('get', KEYS[1])" 1 key:name Everything returned by the script is sent to the calling process. In our case, this process is redis-cli and you will see the result in your terminal window. Something more complex? A person planning to build a complex Redis script I once used Lua scripts to return elements from a hash map in a particular order. The order itself was specified by hash keys stored in a sorted set.
Let's first set up our data by running these commands in redis-cli: hmset hkeys key:1 value:1 key:2 value:2 key:3 value:3 key:4 value:4 key:5 value:5 key:6 value:6 zadd order 1 key:3 2 key:1 3 key:2 These commands create a hash map at key hkeys and a sorted set at key order which contains selected keys from hkeys in a specific order. You might want to check the hmset and zadd command references for details. Let's run the following script: eval "local order = redis.call('zrange', KEYS[1], 0, -1); return redis.call('hmget', KEYS[2], unpack(order));" 2 order hkeys You should see the following output: "value:3" "value:1" "value:2" This means that we got the values of the keys we wanted, in the correct order. Do I have to specify the full script text to run it? No! Redis allows you to preload a script into memory with the SCRIPT LOAD command: script load "return redis.call('get', KEYS[1])" You should see an output like this: "4e6d8fc8bb01276962cce5371fa795a7763657ae" This is the unique hash of the script, which you need to provide to the EVALSHA command to run the script: evalsha 4e6d8fc8bb01276962cce5371fa795a7763657ae 1 key:name Note: you should use the actual SHA1 hash returned by the SCRIPT LOAD command; the hash above is only an example. What did you mention about changing JSON? Sometimes people store JSON objects in Redis. Whether it is a good idea or not is another story, but in practice, this happens a lot. If you have to change a key in this JSON object, you need to get it from Redis, parse it, change the key, then serialize and set it back to Redis. There are a couple of problems with this approach: Concurrency. Another process can change this JSON between our get and set operations. In this case, the change will be lost. Performance. If you do these changes often enough and if the object is rather big, this might become the bottleneck of your app. You can win some performance by implementing this logic in Lua. Let's add a test JSON string to Redis under key obj: set obj '{"a":"foo","b":"bar"}' Now let's run our script: EVAL 'local obj = redis.call("get",KEYS[1]); local obj2 = string.gsub(obj,"(" .. ARGV[1] .. "\":)([^,}]+)", "%1" .. ARGV[2]); return redis.call("set",KEYS[1],obj2);' 1 obj b bar2 Now we will have the following object under key obj: {"a":"foo","b":"bar2"} You can instead load this script with the SCRIPT LOAD command and then run it like this: EVALSHA <your_script_sha> 1 obj b bar2 Some notes: The .. is the string concatenation operator in Lua. We use a RegEx pattern to match the key and replace its value. If you don't understand this Regular Expression, you can check my recent guide. One difference of the Lua RegEx flavor from most other flavors is that we use % as both the backreference mark and the escape character for RegEx special symbols. We still escape " with \ and not % because we are escaping the Lua string delimiter, not a RegEx special symbol. Should I always use Lua scripts? No. I recommend only using them when you can prove that it results in better performance. Always run benchmarks first. If all you want is atomicity, then you should check Redis transactions instead. Also, your script shouldn't be too long. Remember that while a script is running, everything else is waiting for it to finish. If your script takes quite some time, it can cause bottlenecks instead of improving performance. The script stops after reaching a timeout (5 seconds by default).
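The article's companion library targets node.js, but the same pattern works from any client. Here is a hedged Python sketch using the redis-py package, whose register_script helper loads the script and then calls it via EVALSHA for you; the connection settings are placeholders, and the key names mirror the ordered-lookup example above.

import redis

r = redis.Redis(host="localhost", port=6379)  # placeholder connection settings

# Same ordered-lookup script as in the article: read a sorted set, then HMGET.
ORDERED_HMGET = """
local order = redis.call('zrange', KEYS[1], 0, -1)
return redis.call('hmget', KEYS[2], unpack(order))
"""

ordered_hmget = r.register_script(ORDERED_HMGET)

# keys=... maps to KEYS[1] and KEYS[2]; the client sends EVALSHA under the hood.
values = ordered_hmget(keys=["order", "hkeys"])
print(values)  # e.g. [b'value:3', b'value:1', b'value:2'] with the article's data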
Redis scripts should not take too much time Last Word For more information on Lua check lua.org. You can check my node.js library on GitHub for some examples of Lua scripts (see src/lua folder). You can also use this library in node.js to change JSON objects without writing any Lua scripts yourself. — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — Thank you for reading this article. Questions and comments are much appreciated. You are also welcome to follow me here and on Twitter.
https://medium.com/@andrwknight/a-guide-to-redis-lua-scripting-737431d2455
['Andrei Chernikov']
2019-09-13 18:35:01.905000+00:00
['Programming', 'Tech', 'Technology', 'Redis', 'Lua']
2,326
How To Make Command Deicide When To Run When To Not In Ansible
Hey guys, in today's article we are going to solve an interesting use case in Ansible. We have to use the command module to start or stop a service, but restarting a service again and again, or installing software that is already installed, is not a good option for production. Here we are taking the example of an httpd server: if our server is already running, we want the system to go to the remote node, check the status, and do nothing; but if the server is stopped, it should execute the command and start the server. (Ansible has a dedicated module for this; the command module is just an example, and a sketch of the module-based approach appears at the end of this article.) For doing this we are going to use the register keyword, the ignore_errors keyword, etc. - hosts: all tasks: - command: "systemctl status httpd" register: x ignore_errors: yes - debug: var: x.rc - command: "systemctl start httpd" when: x.rc != 0 The register keyword is used to store all the output of a particular task in one variable; as you can see above, we have stored the output of the command module in the x variable. If the command fails, Ansible treats it as an error, and to skip it we have to use the ignore_errors keyword, because in Ansible if one task fails then all succeeding tasks are skipped automatically. When Ansible runs a command, it records rc, the return code: if it is 0, the command ran successfully, and any other value means the command failed. This is the reason we have set when: x.rc != 0, which means the start command only runs when the return code is not equal to zero. Guys, here we come to the end of this blog. I hope you all liked it and found it informative. If you have any query, feel free to reach me :)
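As noted above, the command module is used here only to demonstrate register and ignore_errors. A short sketch of the idempotent, module-based alternative follows; the host group and privilege escalation are illustrative assumptions, and the built-in service module checks the current state itself, starting httpd only when it is not already running.

- hosts: all
  become: yes            # assumes root is needed to manage services
  tasks:
    - name: Ensure httpd is running
      ansible.builtin.service:
        name: httpd
        state: started

When the service is already up, this task reports "ok" instead of "changed", which is exactly the production-friendly behavior the article is aiming for with the rc check.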
https://medium.com/@gupta-aditya333/how-to-make-command-deicide-when-to-run-when-to-not-in-ansible-e275bfc00b2
['Gupta Aditya']
2020-12-25 04:17:32.573000+00:00
['Articles', 'DevOps', 'Information Technology', 'Ansible', 'Automation']
2,327
Frank A. Brownell: The Father of The Modern Camera
Nowadays, it’s less common than it was twenty years ago for someone to walk into a shop and buy a dedicated camera. A device designed solely to capture images. Unless you’re a professional photographer, or at least an aspiring amateur photographer, the odds are that the camera on your mobile phone is more than enough to quench your photographic inclinations. But the digital cameras on our cell phones are based in many ways on the earlier digital cameras that are still sold separately, and those earlier digital cameras are based on the roll film, analogue cameras that our families still bought in the 80’s, 90’s, and decreasingly in the early 2000’s. But what were the roll film, analogue cameras that some of us still remember based on? I remember unto you, Frank A. Brownell. The father of the modern camera. My Interest in Frank Brownell comes from the same place I first learnt of him. Which is to say it comes from a place of some uncertainty. Quite a while ago I had a dream about a man, in a shirt between blue and green, asking many different questions, seeking knowledge about something. Then I heard a voice mentioning a name with the word brown in it. Browning, Brown, Brownell, I’m not entirely certain. Clarity is not a consistent factor in all dreams. But the voice said that he was to the camera as Turing was to the computer. In the modern age, people tend to dismiss their dreams more commonly than not. This is not without good reason. Many dreams are, to my eternal gratitude, meant to be dismissed. But I happen to believe that some are not. So the first thing I did upon waking up was hit the search engines. My first search was for the “Browning Camera”. I was half expecting results for the Browning firearms, but no. The results came up with Frank Brownell. A renowned camera designer and inventor, he was reportedly a major contributor to the design and invention of “The Kodak #1”, the first camera ever made by the storied Kodak company. He designed other cameras for Kodak. The acclaimed and economic “Brownie” camera series released by Kodak began with one of his most celebrated inventions. And though it has been claimed with good reason that the “Brownie” moniker was a reference to the mythological domestic spirits of folklore, it is arguable that these cameras take their name from their inventor. Palmer Cox, the cartoonist, had illustrated some of these mythological beings as characters in his work, and his illustrations were frequently used to advertise the Brownie Camera. My favourite explanation for the camera’s name was that it was a combination of the two to some extent. Perhaps originally named for the inventor, then later associated with the popular illustrated characters to boost sales. The Brownie is remembered for being the first camera widely available to average working men. Before that, cameras’ high production costs restricted their purchase to professional photographers or wealthy amateurs. However, Brownell’s contribution to the modern cameras we know today does not end with the very long running “Brownie series”, which reportedly only stopped in the 80’s. Brownell also invented the “Folding Pocket Kodak”. This was the first folding camera to utilise roll film. He also later designed Kodak’s first “daylight-loading” camera, allowing photographers to load films outside of darkened rooms. Believe it or not, Brownell did not begin his career as a designer or inventor of cameras, or even as a photographer. Brownell was originally a rather excellent cabinet maker. 
His skills in cabinet making brought him into contact with the production of box cameras, and the rest, as they say, is history. But just as this history doesn't start with cameras, it does not end with them. After a long and successful career with Kodak, Brownell went on to work in motor production, establishing the F. A. Brownell Motor Co. He further went on to attain the Vice Presidency of the East Side Savings Bank, before retiring at the age of 78. Aside from being a designer, inventor, and businessman of some renown, Brownell is also described as a social visionary because he offered his employees financial incentives to boost productivity. He apparently also provided them with a hospital, a library, and meals on company property. In that regard, he was certainly ahead of his time. The hospital in particular is arguably a much earlier form of health insurance! I don't know if Brownell is to the camera what Turing is to the computer. I don't know if that voice I heard was a voice of authoritative truth from the beyond, or a statement of opinion someone had voiced, or even just thought in time. I don't even know if the word I heard was "Browning" or "Brownell". It may even have been the "Brownie Camera", which sounds close enough to the first term in my online search, "Browning Camera". I do not even know if the man in the blue/green shirt was meant to be Brownell, or me, or someone else. What I do know is that photography, as we know it today, is heavily influenced by the work of Frank A. Brownell. We owe much of the modern camera to that work. I do know that looking him up online I found him mentioned, but a lot less than he should be. I do know that I had no idea who he was before I dreamt of him. These things that I know had me wondering if the dream came to me for a reason. For a purpose. Perhaps I was meant to be one of a few to help keep his memory alive. Until people remember him again and he gets his own Wikipedia page, at least. Hopefully a proper one and not a stub. Though at this point, even a stub would be a step forward for the man described as "possibly the most influential camera designer of all time."
https://abdallahalalfy.medium.com/frank-a-brownell-the-father-of-the-modern-camera-2ac965e63a20
['Abdallah Al Alfy']
2020-10-05 22:34:02.618000+00:00
['History Of Technology', 'History Of Culture', 'History', 'History Of Science', 'Photography']
2,328
Nvidia likely to lay down Intel and AMD
Why is Nvidia set to lay down Intel and AMD, and why this image? First, what is ARM? ARM (originally Acorn RISC Machine) is a company that designs processors based on the RISC architecture. You may ask what RISC is: it stands for reduced instruction set computer. These processors use fewer transistors and a small, highly optimized set of instructions. Here is one thing worth knowing: your phone's processor is based on an ARM design, which is why you can play high-graphics games on a mobile phone without much compromise, whereas on a PC you need a high-end Nvidia or AMD graphics card to render those graphics, since PC processors are built on the CISC architecture. (If that didn't make much sense, don't worry, I'll be making a separate post about it.) Moving on to today's main topic: why would Nvidia lay the others down when they have all been living in the same CISC world? That was true until now, but yesterday Nvidia knocked on ARM's door and announced that it will buy ARM outright for $40 billion. With this, we can also expect to see Nvidia debut in mobiles, which could drastically change the world of rendering and gaming, letting anyone at home render high-end graphical pictures and videos easily. For more information, stick with this site: https://haptictechnology.blogspot.com/
https://medium.com/@yashkameshwar10/nvidia-likely-to-lay-down-intel-and-amd-9f252a199f60
['Yash Kameshwar']
2020-09-23 09:31:14.890000+00:00
['Nvidia', 'Processors', 'Technology News', 'Technology', 'Graphics']
2,329
More Ways to Iterate Through JavaScript Arrays
Photo by Andy Chilton on Unsplash There are many ways to do things with JavaScript. For instance, there are lots of ways to iterate through the items of an array. In this article, we'll look at several ways we can iterate through a JavaScript array. The While Loop The while loop is a fast loop, and we only need a run condition to run it. For instance, we can use it to loop through an array as follows: const arr = [1, 2, 3] let i = 0; while (i < arr.length) { console.log(arr[i]); i++; } In the code above, we have a while loop with the initial index defined outside as i . Then in the while loop, we defined the run condition as i < arr.length , so it'll run if i is less than arr.length . Inside the loop body, we log the items from arr by index, and then increment i by 1 at the end of each iteration. The Do-while Loop The do...while loop runs the first iteration regardless of the condition. Then at the end of each iteration, it'll check the run condition to see if it's still satisfied. If it is, it'll continue with the next iteration. Otherwise, it'll stop. For instance, we can loop through an array as follows: const arr = [1, 2, 3] let i = 0; do { console.log(arr[i]); i++; } while (i < arr.length) In the code above, we have: do { console.log(arr[i]); i++; } which iterates through the entries by accessing arr ‘s entry via the index i , then incrementing i by 1. Next, it checks the condition in the while clause. It then moves on to the next iteration until i < arr.length returns false . Array.prototype.map The array instance's map method is for mapping each array entry into a new value as specified by the callback function. The callback takes up to 3 parameters. The first is the current item, which is required. The 2nd and 3rd are the current array index and original array respectively. The callback returns a value derived from the current item. map can also take an optional 2nd argument to set the value of this inside the callback. For instance, if we want to add 1 to each number in the array, we can write: const arr = [1, 2, 3].map(a => a + 1); In the code above, we have a => a + 1 , which adds 1 to a , the current item being processed. Therefore, we get [2, 3, 4] as the value of arr . We can pass in a value for this in the callback and use it as follows: const arr = [1, 2, 3].map(function(a) { return a + this.num; }, { num: 1 }); In the code above, we have: { num: 1 } which is set as the value of this . Then this.num is 1 as we specified in the object, so we get the same result as the previous example. map doesn't mutate the original array. It returns a new one with the mapped values. Photo by Athena Lam on Unsplash Array.prototype.filter The array instance's filter method returns a new array with the entries that meet the condition checked by the callback. The callback takes up to 3 parameters. The first is the current item, which is required. The 2nd and 3rd are the current array index and original array respectively. The callback returns a truthy or falsy value indicating whether the current item should be kept. filter can also take an optional 2nd argument to set the value of this inside the callback. For instance, we can get a new array with all the entries that are bigger than 1 as follows: const arr = [1, 2, 3].filter(a => a > 1); In the code above, we called filter with a => a > 1 to take only the entries that are bigger than 1 from the array it's called on and put them in the returned array. Then we get that arr is [2, 3] .
To pass in a value of this and use it, we can write: const arr = [1, 2, 3].filter(function(a) { return a > this.min; }, { min: 1 }); Since we assigned this inside the callback to: { min: 1 } this.min will be 1, so we get the same result as in the previous example. Conclusion We can use the while and do...while loop to loop through items until the given run condition is no longer true. The array instance’s map method is used to map each entry of an array to a different value. The mapped values are returned in a new array.
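As a quick recap, here is a minimal sketch that chains filter and map and then prints the results with a while loop like the one above; it assumes nothing beyond plain JavaScript, and the variable names are illustrative only.

// Chain filter and map: keep entries bigger than 1, then add 1 to each.
const numbers = [1, 2, 3, 4, 5];
const transformed = numbers.filter(n => n > 1).map(n => n + 1);

// Print each result with a while loop, as in the earlier example.
let index = 0;
while (index < transformed.length) {
  console.log(transformed[index]); // logs 3, 4, 5, 6
  index++;
}

Because filter and map both return new arrays, the chained calls leave numbers untouched, which matches the note above that map doesn't mutate the original array.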
https://medium.com/swlh/more-ways-to-iterate-through-javascript-arrays-a1ff7cc46c3b
['John Au-Yeung']
2020-05-16 17:09:25.179000+00:00
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
2,330
Covid-19 May Have Started Before Dec 2019, Increasing Evidence Shows
Covid-19 May Have Started Before Dec 2019, Increasing Evidence Shows It may also explain its unusual early adaptation to humans, unlike other coronaviruses. Home vector created by freepik — www.freepik.com The first globally identified case of Covid-19 was on December 26 in the Wuhan Hospital in China, where a respiratory physician suspected a new infectious disease owing to his previous experience with the 2003 SARS outbreak. Then on December 31, Chinese authorities informed the WHO of pneumonia with an unknown cause. But as more data has been collected over the year, increasing evidence suggests that Covid-19 might have started much earlier than December. The new CDC study in the U.S. The U.S. surveillance team detected the first case of Covid-19 on January 19: a 35-year-old man who returned from China. But even this was not the actual first emergence of Covid-19. In the next sample of 12 early cases of Covid-19 in the U.S., two had symptoms that started on January 14. Taking into account the incubation period — the time gap between virus infection and symptom appearance — of about 5–6 days and up to 14 days, the novel coronavirus SARS-CoV-2 might have been circulating in the U.S. earlier than January 14. SARS-CoV-2 antibodies take 1–3 weeks to form following infection encounter…So, the true infection encounter in the CDC study might have even been three weeks before December 13. Maybe even as early as November, hinted a study from the CDC published a few days ago in the Clinical Infectious Diseases journal, titled “Serologic testing of U.S. blood donations to identify SARS-CoV-2-reactive antibodies: December 2019-January 2020.” In this study, researchers collected leftover sera from 7,389 donated blood samples from donors without suspected viral or bacterial respiratory infection. The CDC then performed antibody testing — with validated sensitivity and specificity — on the blood sera. Results detected antibodies specific for the spike protein of SARS-CoV-2 in 1.43% (106 out of 7,389) of samples. Of these 106 cases, 39 belonged to blood samples collected between December 13–16 from California, Oregon, and Washington. The other 67 cases were sampled from December 30 to January 17. However, the study cautioned that none of the 106 infections qualifies as a true positive or true Covid-19 case, which can only be confirmed via a positive RT-PCR test on respiratory specimens. Another caveat is that whether these 106 infections were acquired through travel or community spread is unknown. Nonetheless, “The findings of this report suggest that SARS-CoV-2 infections may have been present in the U.S. in December 2019, earlier than previously recognized,” the study concluded. A concern the paper did not address is that SARS-CoV-2 antibodies take 1–3 weeks to form following the infection encounter — the window period. This is because antibodies are made by B-cells that belong to the immune system’s adaptive arm, the second line of defense that requires time to activate. So, the true infection encounter in the CDC study might have been even three weeks before December 13. But this may be relatively rare given that the median window period for SARS-CoV-2 antibodies is 10 days. Looking at other countries Based on Chinese government data the South China Morning Post examined, the earliest detected Covid-19 case was on November 17 in a 55-year-old person in Hubei. By the end of November, there were nine cases of Covid-19.
This data corroborates a study published in The Lancet that describes a Covid-19 patient with symptom onset dated December 1 in China. But even among those nine Covid-19 cases in November, there’s insufficient evidence to pinpoint patient zero — the first carrier of the Covid-19 outbreak. So, it’s still possible that there were undetected cases of Covid-19 before 17 November 2019. Researchers in Lombardy, Italy also did a study similar to the CDC’s, which was published in the Tumori Journal with the title, “Unexpected detection of SARS-CoV-2 antibodies in the prepandemic period in Italy.” In it, the study detected antibodies specific for the SARS-CoV-2 receptor-binding domain (RBD) in 11.6% (111 out of 959 persons) of blood samples, of which 14% were sampled during September 2019. This interests the WHO, which has contacted the authors for further investigation. Thus, “SARS-CoV-2 might have cryptically circulated within humans for years before being discovered,” researchers suspect. There are two pre-prints analyzing wastewater samples for traces of SARS-CoV-2 genetic material. One pre-print from Santa Catarina, Brazil, found SARS-CoV-2 RNA in two independent sewage samples collected on 27 November 2019. This data implies that people in Brazil might have been infected and shedding the virus before December. The other pre-print is even more outrageous: researchers from Barcelona, Spain, detected SARS-CoV-2 RNA in a sewage sample gathered on 12 March 2019. But note that preprints are not peer-reviewed, and there’s a critique that contamination may have occurred during sewage sampling and analyses. Explaining the evolutionary leap The early SARS-CoV-2 circulation theory also helps explain many odd facets of the pandemic. For one, SARS-CoV-2 binds to the ACE2 receptor with efficiency at least 10 times higher than SARS-1. This is despite the fact that SARS-CoV-2 genomes were relatively stable in early 2020 with low mutation rates. In contrast, rapid genetic changes happened in the genomes of SARS and MERS when they first spilled over into the human population, before stabilizing over time. Thus, “SARS-CoV-2 might have cryptically circulated within humans for years before being discovered,” researchers suspect. If this suspicion is correct, then SARS-CoV-2 may have completed its host-switching adaptation in humans before December. This also explains why SARS-CoV-2 already had a very stable genome in early 2020 and why SARS-CoV-2 has an unusual binding efficiency for the human ACE2 receptor. And it may also explain why attempts to pinpoint the intermediate host of SARS-CoV-2 have failed so far, given that the real intermediate host (if it exists) might not be among the animals sampled in December or early 2020. Short abstract A new study from the U.S. CDC found SARS-CoV-2-specific antibodies in donated blood samples collected between December 13–16. Given that SARS-CoV-2 antibodies take 1–3 weeks to form, the actual infections in this study may have occurred earlier than December 13. Indeed, recent government data detected Covid-19 cases from November 17 onwards in China. Further, a study from Italy has also found SARS-CoV-2 antibodies in blood samples taken as early as September. Using wastewater samples, two preprints have also found traces of SARS-CoV-2 genes in samples collected in November in Brazil and March in Spain. These data suggest that Covid-19 may have jumped to humans before December. In fact, the theory of early SARS-CoV-2 circulation in humans helps explain some oddities of the pandemic.
For instance, SARS-CoV-2 genomes were already stable in early 2020, while the virus was already binding to the human ACE2 receptor with high efficiency. In contrast, SARS and MERS genomes underwent drastic genetic changes when they first adapted to humans. In sum, increasing clinical and theoretical evidence indicates that Covid-19 may have emerged earlier than presumed.
https://medium.com/microbial-instincts/covid-19-may-have-started-before-december-increasing-evidence-shows-61e280b0842f
['Shin Jie Yong']
2020-12-17 00:15:14.198000+00:00
['Life', 'Coronavirus', 'Ideas', 'Science', 'Technology']
2,331
A Strong Focus for ShareRing — The Travel Industry
Here is a recap of what was discussed in the most recent AMA with CEO and co-founder Tim Bos on April 2nd 2019. You can view the full livestream on our YouTube channel as well. ShareRing and The Travel Industry Starting in Melbourne, Australia as our ‘sandbox’ testing area for the ShareRing app, we came across a few influential pieces of information while working with the business service providers we are testing with. One big finding we came across, is that the travel industry is where we would be able to start seeing real volume in transactions. When we talk about the travel industry, we are diving into what our app will be capable of; such as hotel bookings, car rental, motorbike rental, ski gear rental, tours and events, and featured YouTube videos to enhance your experience in your travel destination. We want to become the ‘one-stop-shop’ app for tourism and travel related activities. Efforts are still being put into other industries, initially focusing on businesses in Melbourne, Australia. The majority of our focus will now be directed towards the travel and tourism industry, as this is a place we see ShareRing being able to scale up quickly and succeed in. The approach we have been taking when diving into the travel industry is a top-down direction. In marketing terms, this means that we have been directly talking to tourism bodies in the first few targeted countries, rather than going directly to the business service providers of those regions. We are discussing with organisations about adopting our technologies as well as working together to promote and on-board business service providers that operate in the selected region. The processes up to date have been going very well. Our initial targeted areas will be very much focused on South East Asia and Australia, with the initial roll-out being in a small tourism heavy location and not the entire country. We would expect any of our targeted countries to have millions of travellers visit annually. Of these millions, we would expect around 10% to potentially use the app, and this number could increase or decrease. From this same perspective, after research conducted we have found that most travellers tend to spend about $1,500 USD on their trip. We would expect about 20–30% of that spend to be done through the app as well. When this is rolled out, the numbers that you see on the blockchain are absolutely staggering. We are very excited to be focusing in on the travel and tourism industries, and are really excited to start pushing the product out. Benefits for every party include how easy the product will be to use. When the user initially downloads the app, they first upload their identification using the ‘One ID verification system’ which we have created. They will then upload their payment details, which remain secure and on their device. With this, we are removing the chance of credit card fraud and ID fraud almost entirely. When rolling out to the providers, the process will be very quick and expedited as we have the help from the tourism bodies. Some of the benefits these providers will see, is that they are being directly paid in their home currency, the transactions are secure and the fees they incur are very low (between 10–15%) of the actual rental cost, opposed to other competitors who take about 30%. We are also eliminating the chance for fake credit card transactions to occur, and reduce the overall risk these providers might come across when renting out their assets to travellers or tourists. 
Governments and tourism bodies will see the benefits through promotion with Smart Cities and analytics they will receive from the data of the app. Tourism bodies and governments thrive on analytics they receive, as they breakdown where tourists are going and why, and where they are spending their time and money. They then use these analytics to figure out how to better focus their promotions, and what areas they should be spending more time in to try and build up, to make the tourists and travellers experiences better when visiting their countries. Safety and security within the app are also two benefits the government and tourism bodies gain, allowing for happier visitors and less overall negative experiences. As part of this travel and tourism focus, we wanted to make sure our ethos as a company is part of the bigger picture and we give back to the communities we are entering. In this effort, we are creating the ShareRing Foundation. The ShareRing Foundation is a not for profit foundation that will initially be focused around the travel industry. Essentially, for every dollar spent in a country, we will then take a percentage and provide that into the foundation to give back to the country. Usually, when countries have an influx of tourists into a specific location or region, that location ends up in poorer condition. The foundation’s job is to help rebuild these locations and communities. This will be done by removing plastic and litter from the beaches, putting money back into the communities that are either disadvantaged or destroyed by travel. When someone is using our app, we want them to know that the money they are spending is going back into these communities, being spent to assist in the sustainability of their cities. We will release more information about the foundation soon. Non-travel related: Melbourne, Australia is the location that we will be showcasing the general rental aspects of ShareRing, and are using as a sandbox for expanding and roll-out production within other areas non-travel related. One thing you will find within the app, are the category specific workflows. Until now, we have used the same workflow for generic rental processes across most categories. Within the travel industry, this needs to change and variety is important. We have since started building specific workflows for specific categories. If you want to rent a hotel room, instead of us ‘speaking’ to every hotel, we will bring an aggregator on and you will then have the choice of multiple hotel rooms to choose from. This process to search for and rent that hotel room is one you would expect when booking a hotel room. Booking a car will have a different process compared to renting ski gear and so on. The team has been working very hard to create these specific workflows to make sure we can cater to as many different categories within the travel industry. You will start seeing a few things within the app that are not specific to the travel industry and these are those existing engagements we have with clients that are more around corporate car sharing and corporate car rental. We have released our block-explorer to our testers, and the test-net is running very well. The Masternode testers have tested the user interface we have rolled out and it is working on multiple different interfaces, is easy to use and works seamlessly. Thank you to our testers who have been working with us on this! 
The app user-guide will also be released soon, which will include a bit of a demo and screens of the app to better explain the processes. This guide however will be pre-travel focused, and we will release the travel related processes as soon as we have them built up and perfected. Exchanges: We are going through due diligence on a couple exchanges, and once we have everything sorted, we will make an announcement. We are currently negotiating with a few exchanges in terms of listing fees, listing time-frames, the listing process, pricing etc. Another exchange listing before the end of April is all dependent on those exchanges and the timing on when to list. A big reason why we are looking to list on another exchange is due to the fact that we need an exchange that allows us to integrate with APIs and our system, and to actually be able to purchase SHR (ShareToken) on behalf of the users, because as you know, we create transactions on the exchange by repurchasing SHR. DX Exchange was initially chosen because they have a USD pairing, they allow API purchases, and their API is quite fast. If it’s an exchange that doesn’t have a proper USD pairing, we would look at a tether pairing or ETH/BTC pairing, as we would need a way to transfer SHRP (SharePay, our fiat currency) into their exchange to do that trade. There is a lot of criteria that needs to be met for us to list with these exchanges, but the discussions are underway and we will let you all know more once we do. Questions from livestream: Are there plans to monetise the analytics data? In terms of analytics, there are two levels of them. The first is public data that will be on the blockchain, which anyone can look at and will be anonymised. The other analytics will have the opportunity to be monetised as well. For smart cities to succeed, analytics are key and we will do our best to monetise them and monitor transactions through the blockchain. Suggested adding tours that offer clean up through the app. The ShareRing Foundation is going to be doing a lot of things, not just giving money back into the communities. It is about creating opportunities for these communities as well, potentially funding eco-tours and eco-lodging. As the foundation grows we will be looking more into these types of things. When will app beta testing start? We are already testing the app, testing the rental process and the provider dashboard. When you develop something, you develop it pretty much in isolation so going out into the real world and testing the product, creates instances where you notice some processes might be harder then a non-app process and we want to improve that and fix that work flow. Things we are improving right now on the app, is for provider to start the rental instead of being done on the user side. The way bookings are done in terms of approving bookings, some providers want a request go to them, others want auto approval due to their stock numbers, and others want to integrate the technology with existing APIs to test the rental process. Ongoing improvements are always occurring, but we are very happy with the process so far. When can other Masternode holders start? We are more than happy to roll the testing out to other Masternode holders who have been pre-approved, just get in touch with us at [email protected] and we will hook you up with a Masternode to start testing. Any community meetups happening soon? I’m actually doing a meetup with the blockchain labs in two universities in Melbourne soon. 
I’d love to start actually organising other meetups in different countries I travel to. How are miniature Masternodes going? We are still just testing the test-net Masternodes. The next stage of testing will be around delegation to test the miniature Masternodes. Any other big partnerships? We do have a few big tests and pilots we are running at the moment, but have been asked to hold off on announcements as of yet, as they are very much in the test phase currently. App release to the public? We do not have a specific date for the release to the general public, as this is all pending bug fixes within the current beta testing. Any updates around EFTPOS? The LiveGroup relationship is going well. One thing we are looking at doing is framing this relationship around travel, as LiveGroup currently have a huge number of hospitality and travel companies that are using their products. That is something that will accelerate their process and get integration happening with them soon. Will I be able to purchase flight tickets from app? Yes, in the future you will be able to do this. The idea is for the app to become the one-stop shop within the travel and tourism industries. Does the focus on the travel sector mean you will be expanding more quickly into Asia? Yes, this means we will be expanding more quickly. The plan in terms of on-boarding each country is that we will have a team that goes to that country, hires local people, skills them up and then we proceed to roll out our processes. The local people will then be in charge of any promotions with the business service providers, working with the tourism authority in PR related things and so on. We will also be looking to get a rental property in each country that will remain as the ‘ShareRing apartment’. This apartment is one of the locations that the long term ShareRing holders will be able to rent, ICO holders can use and our staff will stay at when in the countries.
https://medium.com/sharering/a-strong-focus-for-sharering-the-travel-industry-91ddec5d8732
[]
2019-05-21 06:10:02.514000+00:00
['Blockchain Technology', 'Travel', 'Tourism', 'Travel App', 'Traveling']
2,332
Can We Clean Up Social Media?
Can We Clean Up Social Media? Can we make social media a force for good instead of so much bad? There’s an old story about the boiling frog, a parable that tells of a frog being placed into a pot with water, and the heat is slowly increased to cook the frog alive. I don’t know who on earth would be so cruel, but it’s an old-timey tale that raises a very cautious point: that when change happens slowly it’s possible for you to not notice the small, incremental changes — until it’s too late. The idea is, if you put a frog into a pot of boiling water, it’ll just jump out. If you put a frog into a pot of tepid, room-temperature water, you could slowly turn up the heat bit by bit until the frog was being cooked alive. While the parable probably wouldn’t hold up in real life, it’s quite apt at describing what social media companies and users have done to us collectively over the past couple of decades. It all seemed like fun and games with cute little buttons that helped you show your friends that you “liked” things, with company advertisements, with metrics that rewarded our responses. But in the years since these things have been first developed, social media has taken a very sinister turn and I think just about everyone knows it, though few of us are willing to admit it. I am willing to admit it, social media, as it stands today, is a cesspool of toxic, borderline abusive behaviors that extremely few people would engage in if these conversations were taking place in person. Anyone who spends a long enough time on Facebook or Twitter will end up on the bottom of a bullying dogpile that’s nothing shy of alarming and, for some people, devastating. It’s gotten so extreme, that suicide and homicides have been live-streamed before the world for all of us to see. Children are killing themselves on the internet as a response to online bullying. Self-esteems are plummeting, especially for young women who constantly compare themselves to the models they see on Instagram and other sites. When a 14-year-old girl from Miami Gardens committed suicide on Facebook Live in 2017, I thought that would be the end of it. I thought that’s where the line would be drawn and social media companies would finally take some responsibility. But alas, I’ve been proven wrong. The incentives have only gotten worse, more perverse, and have prioritized sensationalist content at the expense of our healthy public discourse. Boring stories don’t get clicks, likes, shocked react emojis, and don’t elicit other responses in quite the same way extreme ones do. For those not yet in the know, there are a lot of problems with how social media and the internet at large has been structured in terms of what it values. It tends to value things based on the responses of the users who are on the platform, but is the gut-level response to something truly a metric of quality? I think not and I’m not alone. Beyond this, the whole point of Google, Facebook, and other large social media behemoths’ business models are to direct your behavior as a user and try to guide you into a sale to a third-party merchant. In fact, you aren’t even a customer of a social media site, which is why users aren’t prioritized or even addressed. Your attention and your behavioral data are what’s being sold to advertisers. There’s no customer service line you can call, no managers that you can speak to, no recourse whatsoever when things go wrong and it harms you. 
Doing this almost requires finding the most reflexive and impulsive people in our culture by magnifying the most provocative content, amplifying it through the distribution channels which they have editorial privy too. Ever post a ton of cool stuff and feel like crap after no one “liked” it? Well, that might be because nobody saw the stuff you posted. This is called shadow-banning. Your posts likely took a back seat to stuff that would help platforms obtain more and more information as it pertains to what can be sold from the users who are your friends or followers. The truly dangerous thing of our age is that the internet has united formerly disjointed toxic and harmful people and has simultaneously given them political power. I think that almost everyone in the United States can agree upon this basic premise, this metapolitical statement, that no matter what your background or political affiliation, online bullying is a real problem and has probably affected your life if you use the internet. Someone has polled you, trolled you, and probably Rick-rolled you, all for laughs and a few kicks. The problem is, it’s all fun and games until someone gets hurt until joking around turns into malicious subversion and outright sadism, the kind that can tip the scales of an election and end up locking children up in concentration camps, ripping them from their mothers’ arms, until LGBTQ rights are under assault, until women’s rights are under assault, until the nation is coming apart at the seams and there’s no leadership in sight as a pandemic rages on killing hundreds of thousands of Americans. Never underestimate the power of toxic people in large groups. All of these problems we face today are all underscored by our social media habits and the companies that allow such nonsense to be considered permissible. These companies really need to reign it in, and I think we’d all do wise to switch to platforms that don’t engage in this kind of behavior until they can get their shit together. Medium is a great example of a place that doesn’t prioritize radical and radicalizing content for the sake of advertising revenue. Social scientist Jonathan Haidt, a bit of sensationalist himself, ironically, recently posited a rather good idea that I’m inclined to stand behind: the need to provide government-issued identification in order to set up a social media account. Real-life doesn’t have trolls that hide behind masks of anonymity and it’s the anonymity that provides a seeming layer of security for those who’d engage in nefarious tactics. It may seem intrusive, but for the good of the global dialogue, I’m inclined towards a world where we had to submit a government-issued ID to create accounts and they began clamping down on multiple account usage. Bullying, especially violent and vitriolic language and forms of expression, should be curbed and there would be no way of doing so as long as bullies could create additional accounts in just a few minutes and pick up right where they left off. Right now, the internet at large is a very unhealthy place because of social media. Any attempts at healthy, wholesome conversations are quickly derailed by deranged, often aggressive bad actors who want nothing more than to watch the world burn and people they view as different from them to suffer. They have to be dealt with and disallowed from using the platforms until they can be civil. 
The internet creates a sort of mental firewall that makes people infinitely more confident than they would be saying something to a person in real life. Men will slander and threaten women, women will gang up on and bully men in large groups, men will upload their ex’s nudes for the world to see, people will post the most racist and evil nonsense possible in hopes of intimidating minorities, and let’s not forget that nations are using social media as a tool to undermine democracy itself. It’s only gotten worse since 2016 because the companies didn’t rigorously clamp down on such unsavory behavior when it happened, nipping it in the bud. Social media is quickly becoming the new smoking, something we use for social points at the rapid and radical destruction of our own health. I’ve written about this here. One idea that the big players of the social media game have been kicking around that we really ought to consider is the removal of the “like” buttons and other reaction metrics, at least visibly. Perhaps it would be best if we had “like” buttons but hid the amount of total “likes” given, that way, companies could still have a real-time metric to consider what gets prioritized in feeds, but users can’t see the likes and thus can’t tailor the content to specifically get more likes and show off through subversive, bad behavior like dunking on helpless teenagers, destroying their self-esteem. Real-life dialogue doesn’t take place with a bunch of people pressing “like”, especially not after the initial reaction where likes can accumulate over time. It incentivizes cheap digs and shameless dunking on people. It’s destroying our communication entirely and by pretty much every available metric, our society as well. The costs of getting this wrong are higher than most people think. Allow me to refresh your memory. Trump isn’t just a fluke president, he’s the first president elected by internet trolls and social media bad actors, by memes of hate and vengeance, by aggression channeled into cyberspace from distant and isolated parties who couldn’t maintain a social life with their insufferable behaviors and attitudes, dangerous people who’ve now found other dangerous people who will help them commiserate, strategize, organize, and inflict pain. Make no mistake: Trump is a weapon, he’s a weapon of the hateful, the spiteful, the losers who hate themselves and want the rest of the world to suffer as they suffer from their own self-inflicted torments. He’s the weapon of those aforementioned toxic and violent individuals who want nothing more than to see the world burn. These people will weaponize every bad behavior, bug, glitch, and feature possible to turn it toward the achievement of their own political aims which usually involve the harm of some other people. In personal relationships, we call these people “abusers” but since it’s remote, or, as I call it, “sadism-by-proxy”, we mistakenly think it’s just a series of defective bugs in the system of the internet; but it is no bug, it’s a feature and they’re profiting off of it while influencing it and fueling it with filter bubbles and a laissez-faire approach to destructive parties that is morally atrocious and absolutely unforgivable. Let’s put pressure on these companies to get this right — our collective mental health is at stake. In the interim, here’s how to partially unplug:
https://medium.com/flux-magazine/can-we-clean-up-social-media-a00eb53e9739
['Joe Duncan']
2020-07-19 23:20:31.111000+00:00
['Technology', 'Politics', 'Mental Health', 'Society', 'Culture']
2,333
Must-have bedroom gadgets for the winters
While you take those long and cozy naps this winter, we’d suggest you go for some smart bedroom gadgets to make the most of that sleep. It could be a smart sleep aid or a thermostat or even five-layer smart mattress, this list of bedroom gadgets are a perfect match for the winters. Winter is here, and you want to snuggle in bed every morning. That definitely calls for more winter gadgets for the bedroom. So we thought of making this post-Christmas Sunday special for you by curating some of our favorite bedroom gadgets for the winter. Related: Best smart home gadgets of 2020 You could start by going for a smart five-layer mattress to add some extra cozy comfort. Or perhaps opt for a smart sleep aid that will help you sleep better. Nevertheless, this winter is going to be more fun and relaxing with these winter bedroom gadgets to accompany you. Honeywell UberHeat Ceramic Personal Heater Bring the heat with you anywhere with the Honeywell UberHeat Ceramic Personal Heater. Featuring a compact and modern design, this portable gadget delivers powerful heat wherever you need it. Ideal for your work desk, bedside table, or any tabletop, the personal heater effectively heats up your personal space. In addition, the UberHeat Ceramic Heater is easy to use. Simply choose between two constant heat settings or adjust the thermostat to make it most comfortable for you. Eight Sleep Pod Pro Five-Layer Mattress Say goodbye to noisy alarm clocks that suddenly wake you, starting your day with shock and a thumping heart. The Eight Sleep Pod Pro five-layer mattress features gentle vibrations at chest level to wake you without any sound. And, if that’s not enough, it also comes with an abundance of other incredible features. For example, it includes ambient sensors that measure room temperature, humidity, and weather. It’ll then react accordingly, so you drift off at the optimum sleeping temperature. Or you can set your desired sleeping temperature-with scheduled times-in the app. tado° Smart Radiator Thermostat V3+ Home Heating System The tado° Smart Radiator Thermostat V3+ home heating system connects to your phone so you can adjust the heating from the app. This allows you to set heating or cooling schedules, as well as receive insights and tips on improving your home climate. You can even adjust individual room temperatures on the go. The tado° home heating system lets you plan your temperatures based on your routine. Therefore, you’ll improve your home comforts and reduce your energy bill. Once installed, you can even control it via voice commands, including Google Assistant, Amazon Alexa, and Apple HomeKit. Wyze Thermostat Smart Heating & Cooling System Controlling the temperature in your home has never been easier than with the Wyze Thermostat smart heating & cooling system. It works with voice assistants, including Amazon Alexa, so you don’t even need to get up to make it warmer or cooler. Alternatively, use the Wyze App to manage the thermostat from anywhere. The app also includes helpful insights, including your usage and ways to reduce wasted energy. The Wyze Thermostat isn’t just for tech-savvy individuals: you can adjust the temperature from the device itself on the wall, just like a standard thermostat. Philips SmartSleep Snoring Relief Band If you struggle with snoring, it likely doesn’t bother just you. Snoring can affect everyone you live with, which is why you’ll want the Philips SmartSleep Snoring Relief Band. 
This useful gadget can actually alert you before you begin snoring, so you never disturb anyone. In fact, it uses clinically proven technology in a small sensor that you wear on a strap around your torso. The Philips SmartSleep Snoring Relief Band delivers a gentle vibration when you need to turn from your back to your side. Sony LSPX-S2 Wireless Glass Sound Speaker Designed for parties and chilling at home, the Sony LSPX-S2 Wireless Glass Sound Speaker spreads music in a 360-degree direction. Therefore, you can place it anywhere in the room, and it’ll deliver exceptionally clear sound. In fact, the three actuators vibrate the glass tube, which turns the entire surface into a speaker. This enables a precise sound quality from every position. And the sound is high-quality, thanks to the High-Resolution Audio that provides a higher sampling rate than used in CDs. ChiliSleep OOLER Advanced Sleep System Your body is designed to sleep in a climactic rhythm defined by nature. And the ChiliSleep OOLER advanced sleep system supports this rhythm. Letting you sleep as nature intended, this luxurious sleep system complements your circadian rhythm. In fact, the OOLER sleep system uses temperature technology to signal and maintain sleep. You’ll re-align with your ancestral sleeping style using this cooling mattress pad. Unlike fans and wicking fabrics, this device lets you sleep when it’s colder and darker. Then, it encourages you to wake when it’s warmer and brighter. AROMEO Sense Smart Sleep Aid Imagine falling asleep to a dimming sunset light, sedative woody aroma, and gentle music. When it’s time to wake up, you are greeted by a glowing sunrise light, uplifting citrus aroma, and energizing music. Much better than your alarm clock, isn’t it? Meet AROMEO Sense, a gadget that creates an ideal environment for sleeping, waking up, and relaxing using a synergy of aroma, light, and sound therapy. It’s a smart mood light and an essential oil diffuser complete with a mindfulness app. It’s so easy to get started. AROMEO Sense comes with three easy-to-use preset modules-Sleep, Relax, and Focus. Xenoma e-skin Sleep & Lounge Smart Pajamas The Xenoma e-skin Sleep & Lounge smart pajamas come with a hub that tracks your everyday movement to detect any falls. This smart hub then alerts loved ones so they can seek the appropriate care. But, the hub does much more than this. It also monitors your body motion during sleep to help you improve your sleeping habits. When connected to a smart home, this hub adjusts the temperature of the air conditioner while you’re asleep. It’ll also provide you with proactive exercise advice during the day. Muse S Brain Sensing Headband Meditating is hard, and it can take years to perfect the practice. If it’s something you’re struggling to get better at, the Muse S brain sensing headband is for you. This headband provides real-time feedback on your brain activity, heart rate, breathing, and body movements to help you meditate more efficiently. In fact, you can even use the soothing voice guidance feature for Go-To-Sleep support. And we all know how important a good night’s sleep is for health and productivity. Furthermore, the meditation headband features tracking for EEG, PPG, an accelerometer, Pulse Oximetry, and a gyroscope. Which one of these bedroom gadgets would you bring home this winter? Share with us in the comments below. Want more tech news, reviews, and guides from Gadget Flow? Follow us on Google News, Feedly, and Flipboard. 
If you’re using Flipboard, you should definitely check out our Curated Stories. We publish three new stories every day, so make sure to follow us to stay updated! The Gadget Flow Daily Digest highlights and explores the latest in tech trends to keep you informed. Want it straight to your inbox? Subscribe ➜
https://medium.com/the-gadget-flow/must-have-bedroom-gadgets-for-the-winters-7f7a97aded15
['Gadget Flow']
2020-12-28 06:29:13.976000+00:00
['Gadgets', 'Home Improvement', 'Smart Home', 'Winter', 'Technology']
2,334
How often do you find yourself saying “I wish my smartphone battery could last longer”…
How often do you find yourself saying “I wish my smartphone battery could last longer”? Smartphones can be so frustrating, especially when you start having issues with speed, battery life, storage space, and so on. I have always been a hardcore lover of smartphones. You know, anything that could show off my tech-savvy side was a go for me. But I couldn’t seem to find a perfect smartphone that fit my taste. Either it had a short battery life, or very limited storage space, or the camera wasn’t a badass. All these were my problems until I came across the Dicon 5.0 smartphone (the unicorn of smartphones). Imagine having a smartphone with a storage system so massive that you never seem to run out of space, or a phone with battery life so incredible you could literally go two days without charging it. I respect your time, so I’ll cut to the chase and go straight to the point. The Dicon 5.0 smartphone is a phone every hard-working Nigerian deserves. It gives you durable battery life, so you won’t have to worry about low-battery alerts. And that’s not all: the 150GB storage system allows you to store a great multitude of files without fear of running out of space. The internet speed is top-notch, and you can access the internet within microseconds. The Dicon 5.0 smartphone unleashes the hidden photography potential in you. Rewrite the rules with the quad camera, and you can easily replicate the skills of a professional photographer. You don’t want to miss out on this. We offer a diverse range, a variety of colors, unbeatable prices, and services you will never forget. How much would you pay for a smartphone that improves your tech-savviness? Get the Dicon 5.0 smartphone for $150 at a 25% discount now. For every order, you get a customized solar power bank. Limited stock available. Orders close in the next 24 hours, after which the price goes back to $250, the original price. Don’t miss out. Click this link http://wa.me/2349022005142 to place an order now!
https://medium.com/@atunureomamuyovwe/how-often-do-you-find-your-self-saying-i-wish-my-smartphone-battery-could-last-longer-ca2a09c447ce
['Atunure Favour']
2020-12-22 16:15:29.730000+00:00
['Dicon', 'Smartphones', 'Photography', 'Technology', 'Copywriting']
2,335
Why critical infrastructure is vulnerable to cyber-attacks
Why critical infrastructure is vulnerable to cyber-attacks A new generation of malware is attacking the assets that keep modern society safe and functioning Photo by Jack B on Unsplash A new generation of malware is specifically targeting the industrial automation and control systems (IACS) used in critical infrastructure. These systems include the supervisory control and data acquisition (SCADA) technology and human machine interfaces (HMI) that are at the very heart of the assets that keep modern society safe and functioning, affecting everything from food and water to manufacturing plants and power installations. Probably the best-known cyber-attack on critical infrastructure took place in Ukraine in 2015, when hackers successfully infiltrated the electric utility’s SCADA system. Key circuit breakers were tripped, and the SCADA system was turned into a “brick”, causing a system-wide power blackout. It left nearly a quarter of a million people without electricity, in the middle of winter, for up to six hours. Critical infrastructure around the world continues to be at risk. Last October, reports from India eventually confirmed, following several denials, that hackers had infiltrated the country’s biggest nuclear power station, at Kudankulam in the southern state of Tamil Nadu. According to the virus scanning website VirusTotal, the hackers had managed to infect at least one computer with the so-called DTrack spyware before the breach was detected. Criminals in India had previously planted the DTrack spyware in ATM machines to steal card numbers and other personally identifiable information (PII). It is feared that this time the perpetrators may have obtained a large amount of data from the nuclear plant, which could be sold to terrorists for nefarious purposes, such as sabotage or stealing radioactive material. Meanwhile, according to reports, at least one oil installation in the Middle East is among the victims of a new kind of ransomware. As you might expect, the Ekans malware works by encrypting data and leaving a ransom note. The Duuzer malware used against South Korean manufacturing plants in 2015 worked in a similar way. What is new and more dangerous about Ekans is that it specifically targets industrial control systems. It blocks software processes that are specific to IACS, which could prevent operators from monitoring or controlling operations. The consequences could be devastating for human lives and for the environment. IT vs. OT Many power stations and industrial plants are not equipped to deal with these threats. A key issue, according to a recent IEC Technology Report, is that cybersecurity is too often understood only in terms of IT (information technology). Those responsible for security often overlook the operational constraints in sectors such as energy, manufacturing, healthcare or transport. The growth of connected devices has accelerated the convergence of the once separate domains of IT and operational technology (OT). From a cybersecurity perspective, the challenge is that unlike business systems, IACS are actually designed to facilitate ease of access from different networks. That is because industrial environments have to cope with different kinds of risk. Where IT security focuses in equal measure on protecting the confidentiality, integrity and availability of data — the so-called “C-I-A triad” — in the world of OT, availability is of foremost importance. Priorities for OT environments focus on health and safety and protecting the environment. 
In the event of an emergency, in order to protect personnel or to minimize the impacts of natural disasters, it is therefore vital that operators can receive accurate and timely information and can quickly take appropriate actions, such as shutting off power or shifting to backup equipment. Protecting SCADA systems SCADA systems, which are used to oversee electric grids as well as plant and machinery in industrial installations, often rely on “security by obscurity”, reflecting the ingrained mindset that since no one knows or cares about their communications systems or their data, they don’t need to protect them. However, SCADA systems can now have widespread communication networks reaching directly or indirectly into thousands of facilities, with increasing threats (both deliberate and inadvertent) potentially causing serious harm to people and to equipment. Retrofitting appropriate and effective security measures onto these SCADA systems has therefore become quite difficult. In the world of IT, for example, intrusion detection and prevention systems (IDPSs) are on the frontline of defence against malware. IDPSs are usually software applications that eavesdrop on network traffic. Depending on how they are configured, IDPSs can do everything from reporting intrusions to taking actions aimed at preventing or mitigating the impact of breaches. The challenge with SCADA systems is how to distinguish between normal data and potentially intrusive data that could cause harm. “If the intruder uses well-formed protocol messages, the IDPS may not recognize it as an intrusion,” explains smart grid cybersecurity expert Frances Cleveland, who is the convenor of IEC Technical Committee 57 Working Group 15, which develops the IEC 62351 standards for power system operations. “The best solution is for SCADA systems to use security with their communication protocols,” she says. “Security does not necessarily mean encrypting messages, but at least adding authentication and authorization as well as data integrity checking, while still allowing packet inspection of the messages themselves, which can help IDPSs determine if invalid data is being passed.” International standards and conformity assessment International standards provide solutions to many of these challenges based on global best practices. For example, IEC 62443 is designed to keep OT systems running. It can be applied to any industrial environment, including critical infrastructure facilities, such as power utilities or nuclear plants, as well as in the health and transport sectors. The industrial cybersecurity programme of the IECEE — the IEC System for Conformity Assessment Schemes for Electrotechnical Equipment and Components — tests and certifies cybersecurity in the industrial automation sector. The IECEE Conformity Assessment Scheme includes a programme that provides certification to standards within the IEC 62443 series. In an ideal world, power stations and other critical infrastructure would be secure by design. In addition to security standards for key communication protocols, IEC 62351 provides guidance on designing security into systems and operations before building them, rather than applying security measures after the systems have been implemented. The thinking is that trying to patch on security after the fact can at best be only a quick fix, and at worst comes too late to prevent the damage being done.
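To make that quoted advice more concrete, here is a minimal illustrative sketch in JavaScript (Node.js). It is not taken from IEC 62351 or any SCADA product, and the key and message strings are made up: it simply shows how a message can carry an authentication tag so the receiver can check its origin and integrity without encrypting the payload, leaving the message itself readable for packet inspection.

// Illustrative only: add an HMAC tag to a protocol message so its origin and
// integrity can be verified without encrypting the readable payload.
const crypto = require('crypto');

// In practice this key would be provisioned securely; here it is a placeholder.
const sharedKey = 'replace-with-a-securely-provisioned-key';

function signMessage(payload) {
  const tag = crypto.createHmac('sha256', sharedKey).update(payload).digest('hex');
  return { payload, tag };
}

function verifyMessage({ payload, tag }) {
  const expected = crypto.createHmac('sha256', sharedKey).update(payload).digest('hex');
  // timingSafeEqual avoids leaking information through comparison timing.
  return crypto.timingSafeEqual(Buffer.from(tag, 'hex'), Buffer.from(expected, 'hex'));
}

const message = signMessage('OPEN BREAKER 12');
console.log(verifyMessage(message)); // true
console.log(verifyMessage({ payload: 'CLOSE BREAKER 12', tag: message.tag })); // false

A real deployment would follow the key management, authorization, and protocol-specific profiles defined in standards such as IEC 62351 rather than a hand-rolled scheme like this one.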
A holistic approach A recently published IEC report on cybersecurity recommends prioritizing resilience over other more traditional cyber-defence approaches. The report says that achieving resilience is largely about understanding and mitigating risks, as well as being able to detect and cope with security events when they happen. There is no way to prevent them completely. Even secure-by-design systems, although safer, require continuous and pervasive monitoring. IEC Standards for cybersecurity emphasize the importance of applying the right protection at the appropriate points in the system, while paying attention to safety, security and the reliability of processes. It is vital that this process is closely aligned with organizational goals because decisions about what steps to take to mitigate the impact of an attack can have operational implications. “Resilience is not just a technical issue,” warns the IEC report, “but must involve an overall business approach that combines cybersecurity techniques with system engineering and operations to prepare for and adapt to changing conditions, and to withstand and recover rapidly from disruptions”.
https://medium.com/e-tech/why-critical-infrastructure-is-vulnerable-to-cyber-attacks-d39f5449c799
['Mike Mullane']
2020-06-24 09:17:58.861000+00:00
['Critical Infrastructure', 'Risk Management', 'Scada', 'Technology', 'Cybersecurity']
2,336
The Rise of Commercial IoT in 2020
IoT vendors have primarily focused on industrial and fleet management use-cases since operators are willing to pay a high premium to insure their assets i.e. both expensive equipment and workforce. In these heavily regulated industries, the technology investment cost is trivial compared to the cost of non-compliance. On the other end of this spectrum is consumer IoT that has a relatively low barrier to entry and is a huge market, look at Alexa and Ring. Between these two bookends is the massive commercial market that includes retail, real estate, restaurants, entertainment, hospitality, and adjacent industries. Why has Commercial IoT been a laggard? The biggest problem is that ROI on pure IoT driven business use-cases such as asset tracking, IT automation, and incident management is not compelling. ROI remains the primary metric influencing purchasing decisions in the commercial sector. As a result, commercial businesses preferred to hack together IoT prototypes for smaller tactical wins. Bad connectivity is another issue and hence they cannot afford a completely cloud-hosted model to run operations. Lastly, there is no business justification to invest and grow a data science team to analyze the volume of data generated by the IoT infrastructure. Why is Commercial IoT ripe for growth in 2020? E-commerce continues to rise and in the last decade has more than doubled its share of retail sales. As a result, brick and mortar businesses have to level up their game but they lack visibility into in-store customer journeys as compared to their digital-native counterparts. However, with innovations in both hardware and software, these capabilities are now both accessible and economically viable for commercial applications. Over the past decade, industrial-strength sensors, that can work through barriers such as walls, in-combination with mesh-networking will address the client-side connectivity problems. Also, the ease of installation and lower energy consumption requirements further reduce the total cost of ownership. Three software innovations: Artificial Intelligence (AI), Blockchain, and Augmented Reality (AR) have enabled new business use-cases at lower costs that strengthen ROI. Machine Learning can be used by product vendors effectively to derive and predict patterns and insights and takes off the burden of hiring a data science team on the operator side. These same AI models can be executed on the edge, reducing the dependence on a high bandwidth internet connection. Computer Vision enables checkout-free in-store experiences and improves security and safety conditions. Blockchain will not only help with supply chain optimization but will enable new value-added services through smart contracts such as M2M payments. Finally, AR can enable new in-store experiences for customers safely and engagingly. What are some leading indicators for this rise in 2020? Since 2018 there has been a steady resurgence in digital native businesses moving into physical spaces for a truly omnichannel experience. Over the last decade, traditional businesses have been expanding their digital footprint to also own the omnichannel experience. The businesses that will finally win the omnichannel game will be the ones who can seamlessly transition customer experiences from online to offline and back online. In this regard, commercial physical spaces are evolving into community spaces which also implies less pressure of sales and more focus on community. 
Lululemon and other activewear brands have introduced free activity classes at their stores, shared office spaces of the likes of WeWork offer community meetups and commercial malls are evolving into experiential and entertainment environments. However, the business would still need to track engagement and conversion and tie it into their omnichannel strategy, which can only be enabled through IoT. Operational costs are rising, causing the retail apocalypse. IoT enabled services can bring in automation and greater operational visibility to reduce these inefficiencies. At Qopper, we are excited about the possibilities for Commercial IoT adoption in 2020.
https://medium.com/qopper/the-rise-of-commercial-iot-in-2020-cf7c2f5dd73b
['Jeevak Kasarkod']
2020-01-12 07:43:24.896000+00:00
['Retail Technology', '2020', 'Smart Buildings', 'Internet of Things', 'Community Engagement']
2,337
3.14 Components of the Hut34 ‘Data Economy Stack’
Happy π day to one and all! To celebrate, we wanted to update you on the 3-and-a-bit components that compose what we’re calling a ‘Data Economy Stack’, currently being battle tested for release in the Hut34 Labs. Read on! Enterprise workers beavering away generating data; value is captured by HutX and traded internally and externally. (Photo by Alex Kotliarskyi on Unsplash) The Hut34 ‘Data Economy’ Stack All three (and a bit) of the case studies we’re about to walk through utilise the underlying Hut34 stack, the toolkit we spent a good 18 months building, which is now approaching its v2 release. Here is an overview of the components which make the magic happen: The Hut34 Wallet: Onboard users to the ETH ecosystem in one click by abstracting private keys to OAuth. 500 users and counting are experiencing what we believe to be the best onboarding process available: super quick, easy, and with what we think is the lowest friction in the world. HutX Data Protocol: HutX is how we bind data to value. It’s a data object which contains a 0x protocol object that settles the value transaction on-chain, plus additional parameters: the data model, the data location, the data hash, and the data itself in either encrypted or unencrypted form. Expect to hear far more on this as we deliver solutions to our first enterprise clients and tweak some features in preparation for release. Colossus & StationY: Colossus is our micro-service and smart contract suite which allows users to derive additional encryption keys from their ETH private key, so RSA public/private keypairs can be generated, bound to the user’s ETH address, and subsequently read from the blockchain in order to encrypt data uniquely at the point of settlement of a HutX trade. StationY is our signals collection app, which we’re providing to enterprise clients to turn their stakeholders and communities into data armies. These two are a killer pairing. 0.141. Node.js-driven ‘serverless’ architecture: The ‘little bit extra’ which we’ll sign off on π day with puts us squarely in the middle lane of current enterprise best practice. Reams have been written about the benefits of serverless architectures, and we’re experiencing first hand the remarkable benefits of being able to scale down to zero to control costs, to scale up almost limitlessly to provide the compute necessary to deal with large data sets, and to harness tools like BigQuery to return meaningful results from queries against the entire blockchain in seconds. The above is really a quick overview of our current efforts; expect to read more detail about both the tools we have built and the exciting opportunities we’re delivering on for clients we can’t wait to tell you all about! Onwards and Hutwards! The Hut34 Team.
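The post does not spell out the exact HutX schema, so the following Python sketch is a hypothetical illustration of the fields it lists (a 0x order for on-chain settlement plus the data model, location, hash and payload); the field and function names are made up.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class HutXObject:
    # Field names are hypothetical; they mirror the components listed in the post.
    zrx_order: dict      # 0x protocol order that settles the value transfer on-chain
    data_model: str      # schema the payload conforms to
    data_location: str   # where the full data set lives (URL, IPFS address, etc.)
    data_hash: str       # integrity hash of the payload
    payload: bytes       # the data itself, encrypted or in the clear
    encrypted: bool

def make_hutx(zrx_order: dict, data_model: str, location: str,
              payload: bytes, encrypted: bool = False) -> HutXObject:
    """Bundle a payload with its settlement order and an integrity hash."""
    return HutXObject(
        zrx_order=zrx_order,
        data_model=data_model,
        data_location=location,
        data_hash=hashlib.sha256(payload).hexdigest(),
        payload=payload,
        encrypted=encrypted,
    )

obj = make_hutx({"maker": "0xabc...", "taker": "0xdef..."},
                "sensor-readings/v1", "ipfs://Qm...", b'{"temp": 21.5}')
print(obj.data_hash)
```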
https://medium.com/the-hut34-project/3-14-components-of-the-hut34-data-economy-stack-cd03d98fcb3b
['Peter Godbolt']
2019-03-15 02:37:34.827000+00:00
['Data', 'Data Science', 'Encryption', 'Blockchain', 'Enterprise Technology']
2,338
COVID-19 Impact on Global Open-Ended Equity Mutual Funds Market,Top key players : ICICI Prudential Technology Fund, TATA Digital India Fund
COVID-19 Impact on Global Open-Ended Equity Mutual Funds Market Research Report 2020–2026 Open-Ended Equity Mutual Funds Market research report is the new statistical data source added by Lexis Business Insights. In 2019, the global Open-Ended Equity Mutual Funds Market size was xx million US$ and it is expected to reach xx million US$ by the end of 2025, with a CAGR of xx% during 2020–2025. The Report scope furnishes with vital statistics about the current market status and manufacturers. It analyzes the in-depth business by considering different aspects, direction for companies, and strategy in the industry. After analyzing the report and all the aspects of the new investment projects, it is assessed the overall research and closure offered. The analysis of each segment in-detailed with various point views; that include the availability of data, facts, and figures, past performance, trends, and way of approaching in the market. The Open-Ended Equity Mutual Funds Market report also covers the in-depth analysis of the market dynamics, price, and forecast parameters which also include the demand, profit margin, supply and cost for the industry. The report additionally provides a pest analysis of all five along with the SWOT analysis for all companies profiled in the report. The report also consists of various company profiles and their key players; it also includes the competitive scenario, opportunities, and market of geographic regions. The regional outlook on the Open-Ended Equity Mutual Funds market covers areas such as Europe, Asia, China, India, North America, and the rest of the globe. Note — In order to provide more accurate market forecast, all our reports will be updated before delivery by considering the impact of COVID-19. Get sample copy of this report @ https://www.lexisbusinessinsights.com/request-sample-178173 Top key players : ICICI Prudential Technology Fund, TATA Digital India Fund., Aditya Birla Sun Life Digital India Fund, Franklin SBI, Nippon, and and DSP The main goal for the dissemination of this information is to give a descriptive analysis of how the trends could potentially affect the upcoming future of Open-Ended Equity Mutual Funds market during the forecast period. This markets competitive manufactures and the upcoming manufactures are studied with their detailed research. Revenue, production, price, market share of these players is mentioned with precise information. Global Open-Ended Equity Mutual Funds Market: Regional Segment Analysis This report provides pinpoint analysis for changing competitive dynamics. It offers a forward-looking perspective on different factors driving or limiting market growth. It provides a five-year forecast assessed on the basis of how they Open-Ended Equity Mutual Funds Market is predicted to grow. It helps in understanding the key product segments and their future and helps in making informed business decisions by having complete insights of market and by making in-depth analysis of market segments. Key questions answered in the report include: What will the market size and the growth rate be in 2026? What are the key factors driving the Global Open-Ended Equity Mutual Funds Market? What are the key market trends impacting the growth of the Global Open-Ended Equity Mutual Funds Market? What are the challenges to market growth? Who are the key vendors in the Global Open-Ended Equity Mutual Funds Market? What are the market opportunities and threats faced by the vendors in the Global Open-Ended Equity Mutual Funds Market? 
Trending factors influencing the market shares of the Americas, APAC, Europe, and MEA. The report includes six parts, dealing with: 1.) Basic information; 2.) The Asia Open-Ended Equity Mutual Funds Market; 3.) The North American Open-Ended Equity Mutual Funds Market; 4.) The European Open-Ended Equity Mutual Funds Market; 5.) Market entry and investment feasibility; 6.) The report conclusion. All the research report is made by using two techniques that are Primary and secondary research. There are various dynamic features of the business, like client need and feedback from the customers. Before (company name) curate any report, it has studied in-depth from all dynamic aspects such as industrial structure, application, classification, and definition. The report focuses on some very essential points and gives a piece of full information about Revenue, production, price, and market share. Open-Ended Equity Mutual Funds Market report will enlist all sections and research for each and every point without showing any indeterminate of the company. Reasons for Buying this Report This report provides pin-point analysis for changing competitive dynamics It provides a forward looking perspective on different factors driving or restraining market growth It provides a six-year forecast assessed on the basis of how the market is predicted to grow It helps in understanding the key product segments and their future It provides pin point analysis of changing competition dynamics and keeps you ahead of competitors It helps in making informed business decisions by having complete insights of market and by making in-depth analysis of market segments TABLE OF CONTENT: 1 Report Overview 2 Global Growth Trends 3 Market Share by Key Players 4 Breakdown Data by Type and Application 5 United States 6 Europe 7 China 8 Japan 9 Southeast Asia 10 India 11 Central & South America 12 International Players Profiles 13 Market Forecast 2019–2025 14 Analyst’s Viewpoints/Conclusions 15 Appendix Get Up to 20% Discount on this Premium Report @ https://www.lexisbusinessinsights.com/request-sample-178173 If you have any special requirements, please let us know and we will offer you the report as you want. About Us: Statistical surveying reports are a solitary goal for all the business, organization and nation reports. We highlight huge archive of most recent industry reports, driving and specialty organization profiles, and market measurements discharged by rumored private distributors and open associations. Statistical surveying Store is the far reaching gathering of market knowledge items and administrations accessible on air. We have statistical surveying reports from number of driving distributors and update our gathering day by day to furnish our customers with the moment online access to our database. With access to this database, our customers will have the option to profit by master bits of knowledge on worldwide businesses, items, and market patterns Contact Us: Lexis Business Insights Aaryan (Director- Business Development) US: +1 210 907 4145 UK: +44 7880 533158 6851 N Loop 1604 W San Antonio, TX 78249 [email protected] www.lexisbusinessinsights.com
https://medium.com/@ekonzane2402/covid-19-impact-on-global-open-ended-equity-mutual-funds-market-top-key-players-icici-prudential-23a62fc4223a
[]
2020-12-11 06:30:13.635000+00:00
['Mutual Funds', 'Banks', 'Equity', 'Fundraising', 'Banking Technology']
2,339
Reducing generation gaps with MINGLE…
THE PROBLEM: “Boomer: a term used humorously to refer to an old person (born before the 1970s) who has closed-minded opinions or is out of touch with the latest ideologies, cultures, and technologies…” All of us have seen, in TV shows, movies and even in real life, how old people are left behind in every society because their understanding of the world is outdated, which means they are not able to mingle with the younger generation. They feel lonely and depressed and think there is nothing they can do about it; this is what old age is, this is what their fate is: sad and lonely. Being a “boomer”, they slowly run out of topics they can have conversations about with younger people, and forcing one just becomes boring. This irritates the elderly, as there is no road that shows them the way past this problem even if they want to find it. There are no books, no articles, no newspapers that target this problem. So, what to do? THE SOLUTION: “Age is an issue of mind over matter. If you do not mind, it doesn’t matter…” The remedy is simple. Just like water, the elderly will have to mould themselves to their surroundings, and this is where MINGLE comes into the picture. MINGLE is an education app that coaches the user’s mind to work with current trends, cultures and ways of life, so that they can get along with Gen Z without any confusion. It will train their minds so that their understanding of the world catches up from the 1970s to the present. This will require a good amount of teaching covering all the topics: starting with the basics of using electronic items like computers and smartphones, up to more advanced topics like understanding memes and rap songs. Not being able to cope with the fast-paced growth of the 21st century is not anyone’s fault. We understand that there are a ton of other responsibilities that are much more important than following the latest trends. So no matter when they decide to join in, it is never too late. Our goal is simple: to heal the damage that time does, and to make people of all ages comfortable with their present surroundings. THE WORKING: “Never forget how wildly capable you are…” First, the app will give a test to identify how much work will be required on the user’s present knowledge so that it matches that of an 18-year-old in the 2020s. This milestone is chosen because, in any day and age, it is the teenagers who have the most energetic and skilled brains. To achieve this goal, the course will consist of a wide variety of topics, covered at a pace matched to the user’s ability to grasp them. All the potentially helpful topics will be covered, so there is no need to doubt how our app works. Topics to be covered initially: · Changes in the education system, communication, dress, language, grooming · Art (dance/songs/movies) · Relations (family, marriage, dating) · Nature (farming techniques, pollution, wildlife, land discovered) · Space (aliens, planets, astronomy vs astrology) · Lifestyle (inflation, food, homes, vehicles, vices) More topics will be introduced to the user according to their progress. MARKET DEMAND: “You don’t have to be pushed.
The vision pulls you…” Demand for this product is expected from four types of customers: · The elderly: to get along with the latest generation · Creators making entertainment pieces like movies: to live in any era of their choice · Researchers: to gain in-depth knowledge of our world’s history · The curious: people who just want to quench their thirst for knowledge about this world TARGET CUSTOMERS: “Satisfaction is a rating. Loyalty is a brand…” Initially, only the elderly will be our target audience, as they will benefit the most from our app. We will target the above-mentioned categories as well, right after we achieve success in our first goal. MARKETING PLAN: “Good marketing makes the company look smart. Great marketing makes the customer feel smart…” We plan to start by building a website for the company and using social media platforms like YouTube, Instagram, Facebook, Twitter, LinkedIn, Snapchat and Telegram for no-cost marketing. After gaining some genuine feedback, we will start collaborating with content creators on those platforms to spread our message. PRODUCT PRICING: “Price is what you pay. Value is what you get…” The app will be completely free of cost for every customer. We do plan on adding a premium version for those who are willing to go the extra mile, which will include features like live doubt-solving sessions, interaction with industry experts, annual meets, and extra material for better learning. This version will be based on a monthly subscription of Rs. 500 or an annual subscription of Rs. 5,000. REQUIRED KNOWLEDGE/EXPERTISE: “To win in the marketplace you must first win in the workplace…” This is the most important part of our concept and is also the most exciting. We plan on hiring people from two main age brackets: people between 18 and 25 years of age, who will provide the latest content for teaching purposes, and people between 50 and 60 years of age, who will be experienced enough to know how to relate the knowledge of older generations to the current generation and make it easier to understand. ESTIMATED BUDGET: “Budgeting is the first step towards financial freedom…” Approximately Rs. 1 lakh will be required for our company, divided into two main parts: salaries of employees, and the development and quality assurance of the app. MILESTONES AND TIMELINE: “There is no GIANT step that does it. It’s a lot of little steps…” Here’s a rough idea of the timeline for our project: Planning (concept and working of the app): 2 months; Implementing (making the app and website): 9 months; Testing (gathering feedback, finding problems and solving them): 2 months; Launching (listing and marketing of the app): 2 months. RISK ANALYSIS: “The biggest risk is not taking any risk…” We expect there is a small chance that things might go wrong in the testing phase, as some people may be a bit resistant to this app since it is a new concept, and gaining trust in a new concept requires some adaptation. Still, we will be ready if this happens and we’ll handle it in the wisest possible way. WHY MINGLE? “When you catch a glimpse of your potential, that’s when passion is born…” We don’t want to boast, but our concept is pretty unique in the market, and thus we have only one competitor for our app: us. We strive to be the best version of ourselves and to make this product a revolution for those who don’t want to give up, for those who seek knowledge, and for those who make their own future…
https://medium.com/@kizashi-kiz/reducing-generation-gaps-with-mingle-6b130c59c743
['Vaibhav Nagar']
2020-12-20 18:32:16.552000+00:00
['Technology', 'Knowledge', 'Old Age', 'Latest News', 'Teaching And Learning']
2,340
A Primer on Genetic Circuits
The Central Dogma of Genetics Adapted from BioNinja DNA Almost every one of your cells contains DNA, which is just a long string of molecules called nucleotides that look like this: Adapted from Wikipedia There are 4 types of nucleotides that make up DNA, each one characterized by its nitrogenous base component: adenine, thymine, cytosine, and guanine, aka A, T, C, and G. These bases come in binding (or complementary) pairs: A will only bind with T, and C with G. A specific sequence of these nucleotides is called a gene. Genes are kind of like commands for a computer; they tell the cell what protein to make. RNA RNA is very similar to DNA, except it’s single-stranded and it contains uracil nucleotides instead of thymine. RNA can also form more complicated machinery like ribosomes, which we’ll bring up later. An enzyme (basically a molecular machine) called RNA polymerase will ‘read’ a gene nucleotide by nucleotide. While it does this, it collects free complementary nucleotides to string together a strand of messenger RNA, or mRNA, sort of like making an inverse copy. Proteins Proteins are big molecules that perform some kind of function. They’re made up of smaller molecules called amino acids, which look like this: From Sielc | “R” isn’t an element, but is a placeholder for various other types of molecular structures. Amino acid sequences are very specific, and they’re determined by genes. RNA machines called ribosomes read the nucleotide sequence of mRNA and use it to string amino acids together into a strand (aka a polypeptide). How? Every amino acid corresponds to one or more three-nucleotide sequences, each called a codon. Amino acids interact with one another, all while being influenced by their chemical environment. This leads to the polypeptide folding into a 3-D structure: a protein. Proteins can fight viruses, induce movement, and turn other proteins on or off. If you can control proteins, you can control the cell.
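To make the DNA-to-mRNA-to-protein flow concrete, here is a small illustrative Python sketch (not from the article): it transcribes a template strand into its mRNA complement and then translates it codon by codon against a deliberately tiny codon table; a real table covers all 64 codons.

```python
# Illustrative only: a toy model of transcription and translation.
COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}  # DNA base -> complementary RNA base

# A deliberately tiny codon table; the real one maps all 64 codons to amino acids.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(template_strand: str) -> str:
    """RNA polymerase style: pair each template base with its RNA complement."""
    return "".join(COMPLEMENT[base] for base in template_strand)

def translate(mrna: str) -> list[str]:
    """Ribosome style: read three bases (a codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACCGATT")   # -> "AUGUUUGGCUAA"
print(mrna, translate(mrna))        # ['Met', 'Phe', 'Gly']
```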
https://medium.com/the-work-in-progress-blog/a-primer-on-genetic-circuits-88958bc0fff9
['Murto Hilali']
2020-12-24 19:46:12.445000+00:00
['Biology', 'Science', 'Technology', 'Podcast', 'Genetics']
2,341
We need strong governance to make sure the internet is available, secure, and safe for all
By Michael Kende. Published 12 September 2020 on GS News. International Geneva has long been a hub for internet governance. Back in 2003, Geneva hosted the first international conference that addressed the topic, the World Summit on the Information Society (WSIS). Since then, the internet has grown exponentially in size and importance, as has been shown vividly during the COVID-19 crisis. Internet governance is increasingly important, to bridge the digital divide, address cybersecurity, and establish online rights. It requires cooperation and coordination between a variety of stakeholders and thus is well suited to the strengths of International Geneva. The Fondation pour Genève has released a report this week that I authored as part of a new series focusing on centres of expertise in International Geneva. Recent events have proved the series to be somewhat prophetic. The first report, released right before the Covid-19 pandemic, covered the role of Geneva as a centre for global health. This second report focuses on the role of Geneva as a centre for governance of the internet and is being released as the internet has proven central to societies’ responses to the crisis. International Geneva is built around the three founding pillars of the United Nations: Development, Peace and Security, and Human Rights. These pillars now all include important online issues and correspond well with many of the key internet governance questions. As a result, there are three corresponding clusters of internet governance in Geneva: Digital for Development. The digital divide drove the emergence of internet governance in Geneva, beginning with the WSIS. While it is important to bring people online, it is also important to recognise that Internet access is a means to achieving broader development goals. Geneva has been at the forefront of these efforts, including developing e-commerce, the rules for the resulting digital trade, and planning the future of work in a global digital world. Digital Trust. The vulnerability of online activities is increasingly apparent. Enhancing cybersecurity and promoting cyberpeace is a vital issue being addressed by a number of organisations across the Lake Geneva region. These initiatives are not just trying to prevent harms, but have begun to promote digital trust, to develop confidence in sensitive online services. Digital Rights. Human rights issues are firmly centred in Geneva, encompassing humanitarian issues as well. A significant amount of work has focused on how to apply human rights in the online sphere, including privacy and freedom of expression. In addition, work has been undertaken on how to leverage technology to deliver humanitarian aid, while also mitigating the downsides of technology. Geneva is the natural home for this work to take place. These clusters are driven by internet policy expertise within the UN agencies, Permanent Missions, and non-governmental organisations. The clusters are strengthened by academic and research institutions and venues for capacity building, all in turn supported by active Swiss policy. The report covers the range of activities in Geneva, including how the internet governance clusters have helped address online issues arising from the Covid-19 crisis. International Geneva continues to grow its role as a centre for internet governance, with new initiatives arriving that bolster the existing clusters, such as the Swiss Digital Initiative and the CyberPeace Institute. 
Geneva also plays host to a number of conferences and events — large and small — focused on a wide variety of digital issues. While International Geneva is a natural home for internet governance, it is not the designated home. The digital landscape is changing quickly — new technologies are raising new questions; governments are beginning to take a more pro-active role; and there is a risk that the internet is fragmenting across countries. International Geneva is an ideal location to address these issues. It is a neutral space for different blocs of countries to discuss issues of common concern, and for discussion of government approaches. Indeed, policy questions surrounding artificial intelligence are emerging and being addressed in a transversal way in Geneva. The pandemic has provided an important — if unwanted — reminder how central the internet has become to economies, societies, and our personal lives where it is available, secure, and safe. All stakeholders should build on the current clusters to ensure that the future of internet governance in Geneva is realised, to progress an inclusive digital for development, promote digital trust, and ensure digital rights for everyone, everywhere. Michael Kende is a senior fellow and visiting lecturer at the Graduate Institute, Geneva, a digital development expert at the World Bank Group, and senior adviser to Analysys Mason. He is the former chief economist at the Internet Society.
https://medium.com/geneva-solutions/we-need-strong-governance-to-make-sure-the-internet-is-available-secure-and-safe-for-all-cccce64a8b0f
['Gs News']
2020-11-24 14:57:36.674000+00:00
['International Relations', 'International Geneva', 'Opinion', 'Technology', 'Internet Governance']
2,342
QuarkChain Participated in The 2020 International Blockchain Academic Conference Hosted by BUPT
QuarkChain Participated in The 2020 International Blockchain Academic Conference Hosted by BUPT The 2020 International Blockchain Academic Conference, hosted by BUPT (Beijing University of Posts and Telecommunications), was held from October 10 to October 11, 2020. The conference aims to provide in-depth communication and discussion on blockchain technology fields such as blockchain applications and services, blockchain scalability, blockchain security and cryptographic algorithms. Dr. Qi Zhou, the founder and CEO of QuarkChain, was invited to participate in the conference remotely and gave a live broadcast on the topic of “High-performance Blockchain Infrastructure Database Optimization”. In his presentation, Dr. Zhou gave an in-depth and detailed introduction to blockchain infrastructure database optimization: one hot topic in the blockchain area is improving single-chain performance in terms of transactions per second (TPS). Most ongoing projects focus on improving consensus so that the network can reach consensus or finalize a block within a couple of seconds. Some projects claim that, under their test environments, their networks can reach thousands of TPS. However, most of these tests use a small state, i.e., a small number of accounts in the blockchain database, while the most popular blockchains, such as Ethereum, have more than 100M accounts nowadays. The more accounts there are in the database, the lower the network’s performance can be. Dr. Zhou used production data from Ethereum and performed a test to show that, even without the cost of consensus, with 100M accounts it is almost impossible to achieve 1,000 TPS stably using a traditional ledger model such as a Patricia Merkle Tree. To address this database performance bottleneck, Dr. Zhou led the QuarkChain tech team in proposing the FastDB ledger model. Using Ethereum production data, Dr. Zhou demonstrated that, without the cost of consensus, FastDB is able to achieve more than 10,000 TPS without sacrificing security. The QuarkChain mainnet is the platform for verifying and deploying these technologies. The 2020 International Blockchain Academic Conference is one of a series of academic conferences celebrating the 65th anniversary of Beijing University of Posts and Telecommunications. The conference attracted Chinese and foreign researchers and participants in the blockchain field, including JD, Hyperledger, Huobi, Baidu, and other corporations and academics. QuarkChain had the opportunity to present its infrastructure design to well-known digital information scientists and blockchain experts, and gained more recognition from researchers and experts.
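The talk itself is not reproduced in the article, but the intuition behind “more accounts means lower TPS” can be sketched: in a Patricia-Merkle-style trie with hexary branching, every account update rehashes a root-to-leaf path whose length grows roughly with log base 16 of the account count, and in practice each of those nodes may also cost a random disk read. The toy estimate below is a back-of-envelope illustration, not QuarkChain’s FastDB.

```python
import math

def trie_path_length(num_accounts: int, branching: int = 16) -> int:
    """Rough depth of a balanced hexary trie over num_accounts leaves."""
    return max(1, math.ceil(math.log(num_accounts, branching)))

def hashes_per_update(num_accounts: int) -> int:
    """Each account update rehashes every node on its root-to-leaf path."""
    return trie_path_length(num_accounts)

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} accounts -> roughly {hashes_per_update(n)} node hashes per update")
```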
https://medium.com/quarkchain-official/quarkchain-participated-in-the-2020-international-blockchain-academic-conference-hosted-by-bupt-43d5208a9d13
[]
2020-10-20 23:46:02.577000+00:00
['Blockchain', 'Quarkchain', 'Blockchain Technology', 'Blockchain Development', 'Conference']
2,343
Beginners Guide to Firebase
Photo by Myriam Jessier on Unsplash Firebase is something I have been working with for some time, and the platform has become my quickest go-to for creating web applications; the notion of making your backend a service has quickly become a household idea. Removing the need to figure out how to store and configure data has opened up many new ideas about how your product can work. Firebase is best described as a platform with services and products for every need you may have in the web development ecosystem. Firebase Firebase Database Real-Time Database Databases in general are one of the most important parts of any application, since they hold our data, and Firebase has two great answers for this need. One is the Realtime Database; its real benefit is the ability to make changes on the fly, and all platforms will be notified and update accordingly, requiring no refresh. The standard JSON format is used to store data. Firestore Database The Firestore service is similar to the Realtime Database, with the distinction that Cloud Firestore has more features and more advanced query functionality. This makes it great for scaling as your application grows. Firebase Authentication Authentication is handled through many integrations with other social platforms. These social logins make it very easy to utilize the security and authentication of their respective platforms. Firebase Authentication supports iOS, Android, Web, C++, and Unity. Firebase Hosting Fairly self-explanatory: this service provides hosting for your application. It allows you to deploy your application in a couple of straightforward steps, guided by the visual interface on the website. This removes the need to create a server and install security certificates. Firebase Cloud Functions The last and perhaps most important piece, and the one we as developers will interact with the most, is the backend SDK (Software Development Kit). This SDK enables you to create a CRUD-enabled backend fairly quickly. The benefit is that you can write your own backend logic and deploy it as cloud functions that you call from your code, similar to making HTTPS calls to an API. Firebase functions provide unique triggers that run when data is inserted, changed, or deleted in the database. A simple example would be using the onWrite trigger to send an email when someone makes a purchase, much like getting an order confirmation from Amazon. The same system works both ways: the trigger that notifies you of the purchase can also add the order to the database of whatever company you purchased from, setting up the order process that gets your purchase delivered to you. onWrite was the example used above, but Firebase provides many triggers that can run a host of processes depending on their inputs, for example authentication triggers for new user creation, or storage triggers when someone adds a picture to their profile or sets it as their profile picture. Conclusion This week's blog was a quick dive into the concepts of Firebase. Next week will probably be a demo of using AWS or Firebase to get anyone started working in the respective environment.
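The post stays conceptual, so as a small hedged example of the Firestore side, here is a basic write, read and query using the Firebase Admin SDK for Python; the collection, document and field names are made up, and the onWrite-style triggers mentioned above would normally live in a separate Cloud Functions deployment rather than in this script.

```python
import firebase_admin
from firebase_admin import credentials, firestore

# Assumes a service-account key downloaded from your Firebase project settings.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

# Create or overwrite a document in a hypothetical "orders" collection.
order_ref = db.collection("orders").document("order-1001")
order_ref.set({"item": "coffee beans", "quantity": 2, "status": "paid"})

# Read the document back.
print(order_ref.get().to_dict())

# Simple query: every order that has been paid.
for doc in db.collection("orders").where("status", "==", "paid").stream():
    print(doc.id, doc.to_dict())
```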
https://medium.com/@stevenks17/beginners-guide-to-firebase-b5a3309ff182
['Steve K']
2020-12-07 02:40:40.332000+00:00
['Development', 'Cloud Computing', 'Firebase', 'Technology', 'Programming']
2,344
Car Blower Motor Works Intermittently - Maintenance Guide
The world we live in today is known by many names. Well, the most famous one is "The World Of Technology". Almost all of our daily tasks are performed by machines. Like humans, sometimes the machines don't perform well. These issues require an immediate solution, as otherwise our world will stop. The famous and most common five steps to solve any problem are: • Define the problem • Gather information • Generate possible solutions • Evaluate the ideas, then choose one • Evaluate the result While working with machines we do the same: detect the issue and evaluate the solution. Here we will deal with a car blower that is working intermittently. Car Blower Motor A car blower motor is an electric motor in the car's heating and air conditioning system. It drives the fan that runs the heating and air conditioning in a car. It is a smooth-running fan, responsible for cooling or heating your vehicle. Issues Of Car Blower Motor If you are having issues with your car blower, let me share some knowledge with you. A few days ago, the heating system in my car wasn't working properly. One moment warm air blows in, and the very next moment a cool breeze comes from nowhere, making me shiver. In winters like this, it's quite difficult surviving without the heating system. So, I figured that the issue was with my car blower. I also noticed that my car engine was making weird noises. Hence I decided to get to the bottom of this problem. But first, we have to learn about the proper working of a car blower. There are two main types of blower motors: single-speed motors and high-efficiency electronically commutated motors (ECM), also known as variable-speed motors. A single-speed blower is standard in many older furnaces and only runs at two settings: ON at 100% or OFF at 0%. Just like a regular light switch, it only has on or off, with no settings in between. A variable-speed blower, on the other hand, runs continuously at a lower output and lower electrical usage. It can adjust the speed and air volume to perfectly match the desired heating and cooling needs of the facility. It is similar to the gas pedal of a car because it can adjust speed based on demand. These blowers constantly monitor data coming from the system and can make adjustments for dirty filters or blocked vents by increasing airspeed. How To Solve The Issues Of Car Blower Motor If you don't understand cars or find this difficult, just visit a mechanic. The most common problem is that the blower motor will not work or will work only at one speed. To diagnose this problem, a common circuit tester is necessary. Check the fuse and the relay first. They are located in the fuse block in the driver's side compartment. Check the fuse for power and make sure it is good. If the fuse is good, check the relay by turning the ignition key and pulling the relay out of its socket. Check to make sure there is power at two of the terminals in the relay receptacle. If not, and there is only one with power, turn the key to "Off" and check again. If there is still only one with power, the fan control switch is bad. If there was no power with the switch off, there is a problem with the power wire from the fuse to the relay. If the fan works OK but some or all positions of the level control do not work and the air cannot be controlled, one or more doors are not functioning properly. If one of the doors does not open, there will be no heat. Remove the glove box to gain access to the heater and AC housing. Start the engine, turn on the heat and set the fan on high.
Rotate the knob to regulate the position of the air exchange while watching the motors on the housing. If the motors are operating properly, they can be seen moving the arms on the doors. Replace the one that does not move. If none of the motors moves, pull the connector off the closest one. Check for power at the connector while repositioning the knob on the control panel. If there is no power at the connector, replace the control head. Maintenance Of Car Blower Motor A poorly working blower not only affects the interior of your car but may also affect the operation of the vehicle, and you should get to the mechanic immediately. The three basic signs of the issue are weak airflow, noises, and smoke or smell. The car blower motor is supposed to last as long as the vehicle does, but due to the harsh conditions the blower motor has to operate in, it will usually end up needing repairs. Maintenance programs for blowers can be grouped into three categories: routine, quarterly and annual maintenance. Routine maintenance is the process of setting a schedule to inspect components that are considered leading indicators of potential failure. • Routine monitoring covers the following: 1. Bearing and lubricant condition 2. Shaft seal condition 3. Replacing filters 4. Checking airflow • For quarterly maintenance: 1. Check fan blades for cracks, missing balance weights and vibrations 2. Inspect for obvious signs of dirt and debris buildup on the fan and/or fan blades 3. Check and record/chart motor amp draw readings; this proves motor performance and proper belt tension, and charting the results can alert you to a problem that may not be visually evident 4. Clean dust off the fan blades and motor • For annual maintenance, the following components are checked to learn the condition of the motor: 1. Bearings 2. Rotor/stator 3. Belt 4. Brush/commutator 5. Motor mount 6. Motor temperature control 7. Bearing housing 8. Bearing frame and foot 9. Bearing frame 10. Shaft If you do not maintain your car blower motor, or if it is not in good condition, it will have to be replaced. For professional blower motor replacement, most owners pay $350-$700. Written by: Arooj Fatima Picture source: Google Images
https://medium.com/@arooj9507/car-blower-motor-works-intermittently-maintenance-guide-895199329f0
[]
2020-12-26 11:59:15.222000+00:00
['World', 'Technical Analysis', 'Problem Solving', 'Mechanical Engineering', 'Technology']
2,345
Skin for Robots and Why It’s Crucial
Skin for Robots and Why It’s Crucial Photo by Lyman Gerona on Unsplash Good old C-3PO is, according to Star Wars, a stunning example of what humanoid robots could look like in the future. But this poor guy has a huge handicap: his skin! Skin is the human body’s largest organ, so why do we give it so little attention when designing robots? Sorry, Star Wars, this is not what humanoid droids will look like in the future! The relevance of Skin for Humans Skin is crucial for every being, but only a few scientists seem to realize its importance when it comes to a robot’s design. Besides giving living things the ability to sense and interact with their environment, temperature regulation is one of the most important things our skin is made for [1]. Sweating is a brilliant mechanism nature designed for creatures, and localized sweating is a refinement humans can count on nowadays. Another great feature is the ability to provide a barrier against the external environment. The skin prevents water loss and protects us from foreign chemicals and micro-organisms [2]. This is achieved by various skin layers with different functions. Of course, our body gets strength and stiffness, too, thanks to the largest organ of the human body. Benefits for Robots Human skin is composed of several layers. Each layer has a unique structure and function, but when it comes to designing robots, skin often serves only a cosmetic purpose. To enable robots to sense their environment, scientists at TU Munich developed artificial skin cells for robotic use [3]. In addition, a self-organizing network builds a kinematic model of the robot to which the skin is attached. With multi-modal sensor signals, scientists try to design safer robots that are capable of identifying contact as well as pre-contact interactions. The image was taken by Astrid Eckert, TUM The H-1 robot shown in the image above is equipped with 1260 cells and more than 13000 sensors. The cells themselves can measure: vibrations (acceleration sensor), pressure (capacitive force sensors), pre-touch (proximity sensor) and temperature (temperature sensor). The microcontroller on each cell then samples and filters the sensor data and prepares the information for distribution. Implementing these hexagonal skin cells does not only make humanoid robots huggable. Even robots’ feet are equipped with sensors, making them more responsive to uneven floors and thus improving their ability to walk enormously, as shown in the TED Talk. Scientists working on e-skin identified similar needs for artificial skin [4]. Various types of sensors need to be distributed over different areas. Appropriate placement of the sensors is important, similar to the different areas of sensitivity on a human body. Sensors need to be embedded within a soft and stretchable material, while the components must be fast-response and low-power. At the same time, strong processing capabilities are necessary to handle the large data sets generated. The skin itself should benefit from a set of different sensors, such as touch, temperature, gas, and electrochemical sensors. Challenges to Overcome One of the biggest challenges Gordon Cheng’s team has to deal with is handling the amount of data the hexagonal skin cells produce. Multi-modal tactile signals from over 1200 cells need a large communication bandwidth and a lot of processing power. Different implementation frameworks allow a reduction of the information load, but a redundant cell network is still crucial for operation.
In essence, applying more cells is not necessarily the better choice. Scientists and engineers still need to find a trade-off: important robotic parts get the benefits that skin cells enable, while areas of lower importance have to compromise. Conclusion Human skin is made of cells, so why shouldn’t we try to apply skin cells to robots too? A metal skin like C-3PO’s is obviously not made for sensing. To overcome this problem, scientists are working on cell-based approaches. Cheng’s team created a self-calibrating artificial skin made of hexagonal cells, which can work autonomously. The biggest advantage of this skin is the improved safety that the different sensors provide. Robots could finally sense their environment and avoid harming humans (or even themselves) thanks to pre-contact detection. Even though some features of human skin are not yet covered by current research, these first concepts open up plenty of new opportunities and can make human-robot interaction safer.
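The bandwidth problem described above is commonly attacked by reporting events rather than streaming every sample. The sketch below is a generic, illustrative threshold filter (not the actual H-1 firmware): a cell only forwards a channel when its value has changed by more than a per-channel threshold.

```python
from dataclasses import dataclass, field

@dataclass
class SkinCell:
    """Toy model of one hexagonal cell that only reports meaningful changes."""
    cell_id: int
    thresholds: dict = field(default_factory=lambda: {
        "pressure": 0.05, "temperature": 0.5, "proximity": 0.1, "vibration": 0.02})
    last_sent: dict = field(default_factory=dict)

    def sample(self, readings: dict) -> dict:
        """Return only the channels whose change exceeds the per-channel threshold."""
        events = {}
        for channel, value in readings.items():
            previous = self.last_sent.get(channel)
            if previous is None or abs(value - previous) >= self.thresholds[channel]:
                events[channel] = value
                self.last_sent[channel] = value
        return events

cell = SkinCell(cell_id=42)
print(cell.sample({"pressure": 0.20, "temperature": 22.0}))  # first contact: both channels reported
print(cell.sample({"pressure": 0.21, "temperature": 22.1}))  # tiny change: nothing sent
print(cell.sample({"pressure": 0.40, "temperature": 22.1}))  # pressure jump: one event
```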
https://pub.towardsai.net/skin-for-robots-and-why-its-crucial-d9ffd542ace4
['Florian Geiser']
2020-12-16 19:02:27.444000+00:00
['Sensors', 'Robotics', 'Technology', 'Data Science', 'Science']
2,346
How I Overwhelmed the New MacBook Air With the Apple M1 Chip.
How I Overwhelmed the New MacBook Air With the Apple M1 Chip. Thomas Underwood Follow Dec 7 · 6 min read Stressing out the new Macbook Air I drove 3 hours round trip to get the new MacBook Air the day after I ordered it. It was either that or wait 3–4 weeks for delivery. I don’t know about you, but I was too excited to start playing with this new laptop and putting it through its paces. I have had several MacBooks over the years. Being a contractor (programmer) I have worked on fully loaded versions of 2017, 2018, and most recently the 2019 MacBook Pro with an i9 and 32GB of ram. Like many people, I have been intrigued by the performance claims and glowing reviews for the new Apple M1 chip along with the outlandish battery life. That got elevated to new levels of hype watching different YouTube videos of the MacBook Air being able to drive five 1080p and 4K monitors on the new Apple integrated graphics built into the M1 SoC (system on a chip) which includes the processor, ram, gpu, and other Apple hardware features all in one package. The hype is real When I strolled up to the window at the Apple store the salesperson was excited to give me the new MacBook Air (16GB with 1TB SSD) and tell me about how incredible the performance was. He said that he and the other employees had been throwing 4K video editing content and other stress tests at the MacBook for hours and hadn’t been able to crash it or cause the computer to become unresponsive. I took that as a challenge to see if I could max out the new M1 SoC with my day-to-day work-flow. I wasn’t surprised when I was able to do exactly that. Recently, in another one of my posts, I decided to try and build a high-performance image hosting service. The basic premise is that I need to be able to load thousands of photos (thumbnails) almost instantly. I don’t want to wait for buffering, loading screens, or downloads. See my article below if you are interested in reading more about that project. One of the tasks that I mention in my previous article is that I have to process images and reduce their size by scaling down each one into a thumbnail of 250x250 instead of loading the full resolution image. This reduces the image size roughly by 1/10th of the original size and allows me to load thousands of image files seemingly instantly without loading screens. I am able to do this by writing this lazy Node.js code. Lazy Create Thumbnails Code Any astute Node.js developer will probably say “that isn’t a very efficient script” — and you would be right. That is the point in testing out the performance of the new 8-core M1 SoC and 16 GB of ram. I wrote something that I would generally do in a hurry to batch process large amounts of data within a reasonable amount of time. If the code isn’t performant enough for what I am doing then I would begin to optimize it for my needs. It reminded me of one of the old adages one of my professors would always say “Who cares if you write a poorly performing program. In 2 years it will run twice as fast when the next generation processors come out.” He is definitely right when it comes to server-side code, but in the front-end space, we have to be as efficient as possible to prevent terrible user experiences like unresponsive pages. For this stress test, I was processing about 7,000 images at once asynchronously with Node.js and converting them to 250x250, and writing them to the SSD. A batch of images to process into thumbnails You will see below that the new MacBook did fairly well. 
It made it to about 5,000 images before the performance began to drop exponentially.
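The author's "lazy" Node.js script is embedded as an image in the original post and is not reproduced here. As a rough stand-in for the same workload, the Python sketch below batch-resizes a folder of images into 250x250 thumbnails; the folder paths are placeholders, and a process pool bounds the parallelism where the Node.js version fired off all of the conversions asynchronously at once.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from PIL import Image  # pip install Pillow

SRC = Path("photos")       # placeholder source folder with the full-resolution images
DST = Path("thumbnails")   # placeholder output folder
DST.mkdir(exist_ok=True)

def make_thumbnail(src_path: Path) -> str:
    """Scale one image down so it fits in a 250x250 box and save it."""
    with Image.open(src_path) as img:
        img.thumbnail((250, 250))
        out_path = DST / src_path.name
        img.save(out_path)
    return out_path.name

if __name__ == "__main__":
    images = sorted(SRC.glob("*.jpg"))
    # The Node.js version described above kicked off every conversion at once;
    # here the pool's worker count keeps the parallelism bounded.
    with ProcessPoolExecutor() as pool:
        for name in pool.map(make_thumbnail, images):
            print("wrote", name)
```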
https://medium.com/swlh/how-i-overwhelmed-the-new-macbook-air-with-the-apple-m1-chip-d8cf011729a0
['Thomas Underwood']
2020-12-23 12:06:08.865000+00:00
['Front End Development', 'Technology', 'Nodejs', 'Programming', 'Apple']
2,347
InterValue’s HashNet Consensus: A Major Step Forward in the Development of Blockchain Technology
For any public blockchain to be safe and efficient, the consensus mechanism is the key point to consider. A good consensus mechanism ensures the security of the network and the distributed data. It also helps avoid malicious attacks. In addition, a good consensus mechanism is crucial in order to have the community participating effectively in network activities, especially by incentivizing the community to act autonomously and to enter a virtuous circle. Blockchain technology is still in its infancy and researchers are still exploring its potentialities. The current level of progress does not yet allow any actual large-scale commercial application. The InterValue project uses a DAG-based Gossip consensus mechanism with self-designed shards. The consensus mechanism features a variety of technologies including DAG, BA-VRF, layering, and sharding. The InterValue project aims to create the infrastructure of a “Global Value Internet” and solve the functional problems of current blockchain infrastructures. These problems include network congestion, high transaction fees, potential quantum attacks, absence of communication anonymity, low cross-chain interoperability and excessive storage requirements. The optimization and improvement of protocols and mechanisms at various levels will enable the achievement of this “Global Value Internet”. As an infrastructure of the blockchain 4.0 era, InterValue serves as foundation for a wide range of value transfer applications. At the same time, it provides a multi-level platform for the development of all kinds of Dapps and a viable technology frame for building a Global Value Internet. W Finance had the opportunity to interview the InterValue project team. In order to help the public understand InterValue’s technical strengths, W Finance has conducted in-depth discussions on how the InterValue consensus mechanism stands out among its peers. The following is the interview transcript: How does InterValue’s two-layer consensus mechanism solve the problems of the current blockchain space? InterValue noticed that most of the existing blockchain technologies are unable to adapt to the requirements of large-scale commercial applications. The main reason is that their consensus mechanism does not provide a good trade-off between decentralization and scalability, as Bitcoin and Ethereum show us. Ethereum has a good degree of decentralization, but the transaction speed is lower; EOS has a higher transaction speed, but the degree of centralization is higher. The main innovation of the two-layer consensus mechanism is the separation of the shards during the process of establishing consensus. Specifically, full nodes (on the top layer) are responsible for periodically dynamic shards composed of local full-nodes (lower layer) who work on reaching consensus. Local full-nodes achieve consensus on the transactions within one single shard, and the global ledger is synchronized between the nodes by running the “gossip” protocol. The main advantage of this design is that DAG and BA-VRF protocols are used between full-nodes to ensure the fairness and decentralization of the shards. The bottom part of full-nodes uses the DAG consensus to achieve high transaction throughput, and the number of shards is unlimited. At present, the InterValue team has implemented the HashNet consensus mechanism in InterValue’s TestNet 2.0, and asked the Ministry of Industry and Information Technology’s Tyre Lab to test the V2.0. 
The performance test gave the following results: pure performance test: over 2.4 million TPS across 10 shards; actual transaction performance: 200,000 TPS across 10 shards; wide geographic distribution test: 420,000 TPS; post-signature verification performance test: 100,000 TPS across 10 shards; single transaction confirmation time: less than 3 seconds. Malicious data had no impact on transaction performance, and the system’s fault tolerance algorithm met expectations. According to the CAP theorem, it is impossible to have a public chain that is at once safe, decentralized and highly scalable. As the world’s first practical blockchain 4.0 project that supports large-scale applications, how does InterValue balance decentralization, security, and scalability? How to achieve a good balance between decentralization, security, and high scalability is an issue that was discussed at the very beginning of this project. InterValue’s HashNet consensus sharding is a key factor in achieving this balance. HashNet adopts a double-layer gossip topology framework to form a distributed accounting system through the method of “intra-shard autonomy and inter-shard collaboration”. Full nodes are responsible for maintaining the node topology and the shards. The local full nodes are responsible for transaction verification, transaction consensus, transaction storage, data consistency, and maintenance. A good balance between decentralization and scalability is achieved by decoupling sharding from consensus. To improve system security, InterValue uses quantum-resistant hash and signature algorithms, transaction anonymity protection and other means to protect transaction security and user privacy. InterValue transactions support high concurrency and fast confirmation, which makes it possible to quickly build an ecosystem for different application scenarios. What do you think is the difference between the consensus mechanism of blockchain 4.0 and that of blockchain 3.0? The main advantages of InterValue’s HashNet consensus mechanism include: (1) high transaction throughput, which is expected to support the deployment of large-scale commercial applications and has achieved a peak transaction speed of over 1,000,000 TPS in various tests; (2) a better balance between decentralization and scalability through a two-layer consensus mechanism. We have noticed that DAG’s data structure can result in a high-TPS main network. How does InterValue guarantee the eventual consistency of node data within the main network? In short, when InterValue’s last transaction is fully confirmed, how many confirmations are needed? How many nodes does it need to go through? How is the problem of malicious nodes solved? The consistency of InterValue node data is essentially ensured by an asynchronous PBFT algorithm. Nodes synchronize the generated transactions through the gossip protocol and verify transaction consistency by checking the path along which the transaction is witnessed. For InterValue’s last transaction to be confirmed, it has to be approved by at least 2/3 of the local full nodes within the shard. As long as the number of malicious nodes is less than 1/3, the consensus will not be affected. In order to prevent collusion among malicious local full nodes within a shard, InterValue periodically replaces the local full nodes in the existing consensus with other nodes called “candidates”.
In order to prevent double-spending transactions, each light node can only send a transaction request to a local full node according to the result of the ID suffix matching. The data is sorted according to its consensus timestamp, thereby ensuring the authenticity of the data. In order to prevent nodes from maliciously synchronizing large-scale data and causing service unavailability, each connection automatically limits the number of events it sends. As far as the security of the public chain is concerned, there is still no public blockchain that everyone is satisfied with. InterValue may well be the next unicorn chain. Does InterValue’s unique self-evolving ecosystem reflect its consensus mechanism advantage? Yes. The top-level goal of the InterValue ecosystem is to be able to build applications in fields such as digital currencies, distributed social platforms, distributed storage, divergent contracts, and other non-financial areas. In addition, it can promote the development of cross-chain communication and multi-chain convergence. From the perspective of financial and non-financial applications, InterValue’s advantage lies in its strong scalability and high TPS, which are enabled by the HashNet consensus mechanism. From the perspective of cross-chain communication and multi-chain convergence, the two-layer architecture of the consensus mechanism enables interactions with other public blockchains. This connection is made through full nodes. The entire main network of InterValue is based on the DAG data transmission mode. Full nodes are selected through a DPOS mechanism and the local full nodes through a combination of POS+POW+POB+POO. In addition, it features Byzantine fault tolerance. InterValue has many technical advantages when it comes to building commercial public chains. How can the team achieve the most advanced technology and make the most of these technological advantages? When the team started to work on the InterValue project, its original goal was not to chase hype but to truly innovate. Blockchain is essentially a distributed ledger system. The challenge of the public blockchain is to achieve the best balance between decentralization, scalability, and security. Because of those decentralization, scalability, and security constraints, it is necessary to comprehensively consider the specific design difficulties at each level, and not just focus on a single point. In the process of developing InterValue, the team has encountered various specific problems, and the current technical solutions have been reached through continuous development. In the follow-up development process, InterValue will continue to keep up with the technology frontier and improve the security, scalability, and decentralization of the system in many ways. The consensus mechanism is crucial; it is the core of blockchain technology. It largely determines the degree of mutual trust between the nodes of the system, and it also determines the trust users place in the data on the blockchain. For a given blockchain system, whether the design of the consensus mechanism is good or bad directly determines the system’s work efficiency, operating costs, security, and stability, and even directly determines the value of the system. For a given public blockchain project, the most important part is the research. The InterValue team undoubtedly did that research and implemented a unique consensus mechanism.
Every in-depth technical discussion highlights the team’s enthusiasm and determination to build blockchain 4.0. W Finance believes that InterValue will leverage the HashNet consensus to impress people and attract users. We hope that InterValue becomes the leader of the public blockchain space in the future.
https://medium.com/intervalue/intervalue-introduces-the-blockchain-4-0-era-c24716ce8102
[]
2018-08-24 08:35:49.993000+00:00
['Blockchain', 'Inve', 'Bitcoin', 'Technology', 'Intervalue']
2,348
Privacy … NOW!. Your pragmatic field manual to privacy…
Privacy … NOW! The Problem I had a conversation recently with a very intelligent person whom I respect greatly and who has deep knowledge of encryption and privacy-enhancing technologies (PETs). We were discussing the idea of digital privacy in the modern age during a dinner meeting and she stated sardonically, “Privacy’s dead.” I tended to agree. Over the past few weeks as I have socialized in my industry, continued to study and research the various topics I am interested in, and walked around conferences throughout the country, I gave deeper thought to this topic. Perhaps it was inspired by conversations with industry friends about the notion, or perhaps it was inspired by my listening to Bitcoin Privacy & OpSec, or perhaps it was inspired by the small army of large-font-imprinted-on-plain-black-hat clad evangelists wandering around at crypto conferences, or perhaps a combination of the above. Regardless, I came to the firm realization that Privacy is NOT dead … It’s just unreasonably difficult to ensure internet privacy for the average person. And now I realize that my very intelligent and knowledgeable colleague is more ‘complacent’ than ‘right’ when it comes to this assertion. Though I don’t blame her. Maintaining any substantial degree of privacy, whether digital or not, is a difficult, inconvenient, time-consuming and often expensive task. “Privacy is the power to selectively reveal oneself to the world” — A Cypherpunk’s Manifesto The Good News: The notions of data self-sovereignty and digital privacy have become mainstream discourse in many circles due to the increasing awareness of encryption-based technologies over the past several decades. Most notably, this ethos was brought to mainstream awareness over time by the Cypherpunk community and memorialized by Eric Hughes’ A Cypherpunk’s Manifesto. So much so that there are now broad communities, mature projects, dedicated teams and noble companies committed to helping individuals (re)claim some degree of privacy (this is a good thing!).
https://medium.com/the-capital/privacy-now-1f27bf7b3afe
['Alex Rabke']
2020-02-23 00:44:19.466000+00:00
['Privacy', 'Encryption', 'Security', 'Cryptocurrency', 'Technology']
2,349
Re-imagining the wealth management stack
It is an exciting time to be working in the wealth management industry. There is a seismic change underway with new technologies that make it easier and more affordable for firms to both offer services to clients who historically would not have been able to access their advice, and to look after their clients’ interests better. We think this presents a huge opportunity for forward thinking companies and individuals who are willing to embrace the change and challenge the way things have ‘always been done’ by validating and adopting best of breed emerging solutions in their firms. This post goes into the background of how we have got to where we are today, and why change is happening now. We will outline our thinking in more detail in subsequent articles on specific pieces of the puzzle, but our working thesis is that: the future stacks powering wealth business will not rely on one-to-one integrations, but be flexible open components connected by an underlying data infrastructure, some firms will choose which components to prioritise (planning, risk, construction etc) so any point solutions must be both simple to plug & play and leverage channel partnerships, most firms will have 2–3 core systems and will rely on these providers to have selected the best in class components and include them as part of their suite, the direction of travel is towards truly bespoke portfolio construction and advice so this must be supported by tech providers, adviser relationships will remain key, so the role of technology should be to support and not displace them. Why do people use wealth managers and advisers? Before we dive into the detail of some of the new innovations shaping modern wealth management, it’s useful to first take it back to basics and ask the question — why do people use advisers and wealth managers in the first place? The simple answer would be to achieve their financial goals…. But it’s a much more complex question than it seems. Every person or family has different goals and requirements, as well as different levels of financial literacy. This means that while one family might need help with basic planning and saving goals, another might need complex cross-jurisdictional tax advice for multi-currency earnings and hedging. Layering on top of this the fact that everyone has different risk profiles (how comfortable you are with the trade-off between possibly losing money to earn potentially higher returns), knowledge and experience, country specific regulations and fiduciary duties a manager has to understand where their clients sit on this spectrum — it gets complex very quickly. This is also just the first step. A manager then needs to review these goals and construct an investment and savings plan by reviewing the many millions of stocks, bonds, ETFs, mutual funds and myriad of other investment products available. They then must work out which are the best, check for tax implications, put them into a portfolio with the optimal mix (based on both goals and risk appetite), and keep track of this whenever there is a change in markets or of the clients circumstances or preferences. Phew! No wonder most advisers won’t help a client unless they have over $100,000 available to invest. On an ‘hours worked’ basis, they would be out of pocket helping anyone with less. How do advisers currently cope? This is the natural next question and, in our view, goes a long way to explaining why there are such large and publicly scrutinised advice gaps and industry conflicts. 
In fact, according to research by McKinsey, the annual revenues generated by intermediation in the wealth segment are more than double all market infrastructure (trading, clearing, settlement etc) combined. A simple Google news search shows you how much of a focus this is for the press, regulators and governments alike — links here and here. To solve this, wealth managers and advisers need to be able to do more with less. This is where technology can help. At present, most advisers and wealth mangers mainly rely on 3–4 core systems (usually a CRM, execution platform, portfolio management / reporting engine) but have multiple other point solutions which feed into these for specific tasks. These might include client onboarding, annual suitability, cash flow forecasting, tax planning etc. Another key piece is a tool for general financial planning which can feed into either the onboarding process, or ongoing management depending on the tool chosen. © Illuminate Financial — non-exhaustive illustration of the wealth stack Precisely how this fits together will depend on the firm and providers chosen but, generally speaking, while there are often specific integrations between tools (for example a risk score provider that integrates with a portfolio management platform; or a chatbot that can feed into a CRM client portal) these are largely one-to-one, meaning data is trapped in silos. Why are simple questions so hard to answer? Due to the current wealth stack architecture, advisers often re-key information manually from one system to another to get a complete view of their client’s wealth. This is clearly not ideal. Not only does this risk errors, it soaks up time that could otherwise be spent better managing relationships and makes seemingly simple questions very hard to answer. Am I on track to reach my goal? What had been the biggest drag /contributor to my performance? What fees have I paid and where? What income have I generated? Can you show me the split of my holdings? If stock markets fall 20% what will happen? What if I only want to invest in companies that are socially responsible? Being able to respond to these questions quickly and in real time intuitively feels like something that an adviser should be able to do. The reality is that with current systems the way they are — it takes days of work in excel. Change will not happen overnight but given the increasing scrutiny from regulators, fee pressure, advances in technology, and new entrants in the direct to consumer landscape; the scene is set. What has changed in the direct to consumer market? Whether or not you believe the B2C fintechs making headlines will survive the test of time, they have undeniably tapped into something and show the time is now. Their combination of lower fees, easy online access, promise of ethical investments and reach into previously un-served segments of the market have attracted a new breed of customer in droves and piqued the interest of VC investors and incumbents alike. The latest market map from CB Insights gives you an idea of how much is going on. Although we do not believe that these direct offerings will ever displace human wealth managers, we do think the competition has re-set the entry bar and consumer expectations. Micro-savings, robo portfolios, personalisation of goals, online portals, ESG screens can become new tools advisers and managers can add to their arsenal as they help their clients. How can a bank or a large established business adopt? 
There is no easy answer, but we believe there is a huge opportunity for B2B companies who can help the industry re-architect itself with a new breed of data architecture that connects information and allows the simple questions to be answered. With the right infrastructure and tools in place, advisers and managers will be able to cut down on the admin, grow their businesses and get back to doing what they do best — managing relationships and helping clients understand their finances.
https://medium.com/illuminate-financial/re-imagining-the-wealth-management-stack-466d16c904a5
['Katherine Wilson']
2019-06-12 14:38:43.022000+00:00
['Enterprise Technology', 'Startup', 'Venture Capital', 'Wealth Management', 'Fintech']
2,350
Please Don’t Judge White People by Xbox Live
I could deal with the name-calling and being called every slur in the book. But I had a harder time with it when there were Black people in the game. It packs a different punch when you drop a slur on someone who is actually a minority. The moment players sensed the voice of a Black person, they immediately went in on the racial slurs, the n-word, calling people monkeys — nothing was off the table. Anyone reading this who has been on Xbox Live (or the PlayStation Network) is nodding their head right now. I’m not someone who demands perfect political correctness — but man — if there is a line, gaming lobbies have long since passed it. “Locker Room Talk” has nothing on a Call of Duty lobby. It wasn’t confined to teen boys being naughty. There’d be straight up rednecks saying it, which takes it to an entirely different level. My friend Mike “Solo Dolo”, who is Black and played with me, was a really good sport about it. He’d just let it roll off of him. He was also really skilled at the game, which was part of the reason he’d catch slurs. It’s frustrating though. You can be having a fun game with a lobby full of people from all backgrounds. Everyone is joking around having a good time. Then two adolescent little monsters join the lobby and start shouting into the mic, ruining the whole vibe. It just makes things awkward, like it’s a clan meeting all of a sudden. Could I have called it out more? Sure. I’ve certainly tried on occasion. But an unsupervised 13-year-old with a mic isn’t the most negotiable human being you’ll deal with. It usually ended with the boy reminding me that he banged my mom the night before. For which I was eternally thankful. I’m not some perfect angel, sent here to virtue signal. I’ve made some ill-advised, selfish decisions in my life. Though I haven’t done any of the race stuff, I’ve certainly been an asshole and have my share of regrets. Heck, I used to internet troll as a young teen. Perhaps that is what bothers me so much. I see a bit of myself in these kids. When I was 13-years-old, around 1996, the internet was just starting out. I used to get this thrill from going on internet forums and posting a comment that said, “A new study was released. It demonstrates that men who need glasses are inferior and less desirable to women. My condolences to those affected.” Then, I’d sit there in glee as angry comments streamed in. Source: pic by imgflip. I had this feeling of adultlike power. I could get a reaction out of grown men. There’s definitely a bit of that going on with Xbox Live. Meanwhile, my trolling came full circle as, only a year or two later, I had to get glasses. I can’t sit here and say these gamer kids/adults aren’t racist. It’s hard to make that case when they are shouting the n-word into a mic every day. I also can’t stop them from doing it. I’ve even tried the, “You’re giving us a bad name, dude.” conversation, before I was christened, yet again, with the honorary homo title. Most of the Black guys I’ve played with, to their credit, were super cool about it and just muted the people. It was, however, tough to see when it was younger Black kid. They’d get upset at times. It sucks that they’re going into adulthood with this impression of faceless white kids shouting the n-word at them. It would be nice if these platforms started cracking down more. I get that they have a high volume of games and it might be hard to manage — but it’s enough already. It’s been going on for over ten years. There are ways. 
For example, there was a game, Day of Infamy, where the entire lobby could vote to mute a player, and it would suspend their talking privileges for a time. It turned it into a public shaming. Also, online platforms are starting to get reputation systems, where you get a rating. Source: Xbox Live. But the system is flawed. It just puts people with bad ratings into a lobby together. Instead of banning them, they’re given their own cesspool to interpolinate. One can’t help but wonder if it’s even a good idea to let kids play on these online servers at all. I do hope these gaming systems step it up. And again, please don’t judge white people by Xbox Live. That place is a stain. It’s to the point that it’s just embarrassing.
https://medium.com/super-jump/please-dont-judge-white-people-by-xbox-live-ba45bc1fbc48
['Sean Kernan']
2020-07-29 13:32:25.465000+00:00
['Technology', 'Self Improvement', 'Gaming', 'Games', 'Humor']
2,351
Voice Tech Trends in 2021
2020 has been a real “annus horribilis” for the entire global economy and for each one of us. The COVID-19 pandemic has affected every aspect of our lives, crashing businesses and having a significant impact on Conversational AI. AI-powered chatbots and virtual assistants were at the forefront of the fight against Covid: they helped screen and triage patients, conducted surveys, provided vital Covid-related information, and more. And quite naturally, we saw more conversational agents in telemedicine — from FAQ-chatbots and virtual consultants to chatbot-therapists — that made health services more accessible to people who can’t leave their homes (and there were billions of them in the spring of 2020). Needless to say, conversational agents combating the virus are becoming a long-term trend, and next year we will see more solutions complying with the ‘less touching, more talking’ rule. Despite the difficulties, consumer data show that hearables ownership by US adults rose by about 23% between 2018 and 2020, while voice assistant use through hearables grew by 103% over the same period, from 21.5 million users to 43.7 million. The data show that hearables and voice assistant adoption are complementary technology trends. The voice and speech recognition market is expected to grow at a 17.2% compound annualized rate to reach $26.8 billion by 2025. From this perspective, we can conclude that voice UX has become a real pragmatic innovation. In the coming year, we expect to see more technically advanced smart devices and more support for users. Assistants are expected to become proactive and to distinguish between users in order to bring personalized content. This especially applies to kids’ content, where certain restrictions are necessary. In 2021 we will definitely see: Voice assistant in a mobile app Voice in mobile apps is the hottest trend right now, and it will stay so because voice is a natural interface. Natural interfaces are about to displace swiping and typing, and may eventually make them obsolete. Voice-powered apps increase functionality, saving us from complicated navigation, form-filling, overlaid menus, support, etc. They make it far easier for an end-user to submit their request — even if they don’t know the exact name of the item they’re looking for or where to find it in the app’s menu. Pretty soon, users won’t just appreciate the greater functionality and friendliness of a voice-powered mobile app, they’ll expect it. Outbound calls and smart IVR powered by NLU It’s not about hollow-hearted cold calls. These are smart solutions that will replace agents in call centers pretty soon because they are effective, performant, and easy to customize. There are more and more companies offering such services, and it seems like a reasonable guess that this is where phone-based sales are moving. Voice Cloning Also known as voice replication technology. Advances in machine learning and GPU power are commoditizing custom voice creation and making synthetic speech more emotional, to the point where a computer-generated voice is hard to distinguish from a real one. You simply provide recorded speech, and a voice conversion technology transforms one voice into another. Voice cloning is becoming an indispensable tool for advertisers, filmmakers, game developers, and other content creators. Voice assistants in smart TVs A smart TV is an obvious placement for a voice assistant — you don’t really want to look for that clicker and spend more time clicking when you can use your voice to navigate.
All you need to do is press and hold the microphone button and speak normally — there’s no active listening, and no need to shout your commands from across the room. With a smart assistant on your TV, you can easily browse the channels, search for the content, launch apps, change the sound mode, look for the information, and many more, depending on the TV model. Smart displays Last year, we said that smart displays were on the march because they, pretty much, expanded voice tech’s functionality. Now, the demand for these devices remains high because smart displays showed a huge improvement over the last year as more customers preferred them over regular smart speakers. In the third quarter of 2020, the sales of smart displays hit 9.5 million units. In other words, it grew by 21% year-on-year As a result, the market share of this product category rose to 26% from 22% last year. Therefore, we expect there will be more customized, more technologically advanced devices in 2021. Smart displays, like the Russian Sber portal or a Chinese smart screen Xiaodu, are already equipped with a suite of upgraded AI-powered functions, including far-field voice interaction, facial recognition, hand gesture control, and eye gesture detection. What’s next? Voice for business In 2021 we will definitely see more solutions to improve business processes — voice in meetings and voice for business intelligence. Voice assistants will be highly customized to business challenges, integrated with internal systems like CRM, ERP, and business processes. Furthermore, more SMBs see enterprises making profits and in 2021 there will definitely be more companies looking for voice-first solutions. Games and content More games, learning, and entertaining content are expected since tech companies like Amazon, Google, and other voice-first tools’ developers push their builders to the market. Advertising via smart speakers and displays is a great chance to promote a product. So, communications and entertainment market majors like Disney Plus or Netflix partner up with new tech platforms to become a first-mover. 2021 will bring us more partnerships like these, more games, and education skills from third-party developers. Voice in the gaming industry When talking about Conversational AI and gaming, one cannot fail to mention text-to-speech (TTS), synthetic voices, and generative neural networks that help developers create spoken and dynamic dialogue. Now it takes a lot of time and effort to record a voice for spoken dialogues within the game for each of the characters. In the upcoming year, developers will be able to use sophisticated neural networks to mimic human voices. In fact, looking a little bit ahead, neural networks will be able to even create appropriate NPC responses. Some game design studios and developers are working hard to create and embed this dialogue block into their tools, so pretty soon we will see first games built with dynamic dialogues. Multimodal approach More and more developers come to the conclusion that a device ecosystem and multimodal approach are much needed. A voice assistant may simultaneously live on your mobile phone, smartwatches, smart home, and smart TV. Obviously, ‘1 assistant, few devices’ is the right approach here. Interoperability On the contrary, another idea could go pretty strong in 2021 — bundling multiple virtual assistants in a single device, if that’s practical for a user. 
A year ago, Amazon launched a project that could allow Alexa to be bundled together with other virtual assistants in a single device. It made sense, for customers could choose which voice service would best support a particular interaction. And although this could be a great marketing moment for Amazon, which could gather smaller marketeers; some of the largest players in the space like Google, Apple, and Samsung abandoned the idea for obvious reasons. Still, we’ll see where this idea goes pretty soon.
https://medium.com/voiceui/voice-tech-trends-in-2021-34dfbf726a6d
['Anna Prist']
2020-12-22 14:23:21.489000+00:00
['Trends', 'Tech', 'AI', 'Technology', 'Artificial Intelligence']
2,352
Best Libraries and Platforms for Data Visualization
In one of our previous posts we discussed data visualization and the techniques used both in regular projects and in Big Data analysis. However, knowing the plot types does not take you beyond a theoretical understanding of which tool to apply to which data. With the abundance of techniques, the data visualization world can overwhelm the newcomer. Here we have collected some of the best data visualization libraries and platforms. Data visualization libraries Though all of the most popular languages in Data Science have built-in functions to create standard plots, building a custom plot usually requires more effort and libraries that can handle versatile formats and types of data. Some of the most effective libraries for popular Data Science languages include the following: R The R language provides numerous opportunities for data visualization — and around 12,500 packages in the CRAN repository of R packages. This means there are packages for practically any data visualization task, regardless of the discipline. However, if we had to choose a few that suit most tasks, we’d select the following: ggplot2 ggplot2 is based on The Grammar of Graphics, a system for understanding graphics as composed of various layers that together create a complete plot. Its powerful model of graphics simplifies building complex multi-layered graphics. Besides, the flexibility it offers allows you, for example, to start building your plot with axes, then add points, then a line, a confidence interval, and so on. Although ggplot2 is slower than base R and rather difficult to master, it pays huge dividends for any data scientist working in R. Lattice Lattice is a system of plotting inspired by Trellis graphics. It helps visualize multi-variate data, creating tiled panels of plots to compare different values or subgroups of a given variable. Lattice is built using the grid package for its underlying implementation and it inherits many of grid’s features. Therefore, the logic of Lattice should feel familiar to many R users, making it easier to work with. RGL The rgl package is used to create interactive 3D plots. Like Lattice, it’s inspired by the grid package, though it’s not compatible with it. RGL features a variety of 3D shapes to choose from, lighting effects, various “materials” for the objects, as well as the ability to create animations.
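To make the layered grammar-of-graphics idea concrete, here is a minimal sketch in plotnine, a Python port of ggplot2's grammar (an illustration only; it assumes the plotnine and pandas packages are installed, and the toy data set is invented):

import pandas as pd
from plotnine import ggplot, aes, geom_point, geom_smooth, labs

# A small toy data set: engine size vs. fuel efficiency.
df = pd.DataFrame({
    "displacement": [1.6, 2.0, 2.4, 3.0, 3.6, 4.2, 5.0],
    "mpg":          [38,  34,  30,  26,  23,  20,  17],
})

# Build the plot layer by layer: data and axes, then points,
# then a fitted line with its confidence band, then labels.
plot = (
    ggplot(df, aes(x="displacement", y="mpg"))
    + geom_point()
    + geom_smooth(method="lm")
    + labs(x="Engine displacement (L)", y="Miles per gallon")
)
plot.save("mpg_vs_displacement.png")  # or just display `plot` in a notebook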
https://medium.com/sciforce/best-libraries-and-platforms-for-data-visualization-b986a43aee3f
[]
2019-06-25 16:28:13.750000+00:00
['Big Data', 'Technology', 'Data Visualization', 'Data Science', 'Programming']
2,353
Best travel Apps to roam the earth
In this new post I will talk about a few apps/websites that will make your life much easier while you are traveling. Apps can make travel so much easier Some apps are (more than) widely known, some others aren’t. There are a few basic needs a smartphone can fulfill on the road, most notably: Sleep: Agoda => Part of booking.com but it is more straightforward and most of the time even cheaper HostelWorld => Best resource for backpacker places around the world Airbnb => The reference, very convenient, make sure you check the reviews and have good reviews yourself! Beware, you may become addicted to this! Couchsurfing => THE best travel website, I met so many great people there and lived very cool experiences as well! Give it a try, don’t be picky and stay at people’s homes for free! Communicate Whatsapp => Perfect for friends and family, you can do video chat as well Skype => Perfect to call a regular landline over the Internet, it saved my life when I had to sort out visa issues in Australia from Indonesia Travel long and short distances SkyScanner => I always check there first before going to a specific airline website Dated presentation for a very handy website Seat61 => The best knowledge center for train travel, you will find everything train related! Meet new people Couchsurfing => on the « event » part of the website you can always join a group going to a bar or something else, my wife and I did it a couple of times and it’s a great way to meet locals and other travelers Find your way If I had to only keep one app! Maps.me => The secret GPS app you always wanted! The entire world offline for free! Use it wandering the streets, hiking and driving, you just need to download the area or the country ahead of time with wifi Photos/Videos Google Photos => very important! Back up all your photos and videos as soon as you are hooked up on wifi, free and unlimited storage LiPix => Can do great collages for social media posts Read iBooks => Dim the light and read along! Most of the books I read in the past 6 years were read using this app, save space in your pack as well as the forest! Stay informed Conseils aux voyageurs => Specific info provided by the French government about every country in the world; you can find the equivalent from your government TravelFish.org => Great website with specific information about Southeast Asia Travelindependant.info => very interesting tips and tricks for independent travelers Entertainment Spend hours waiting on the bus or in a plane playing Trilogic, Monument Valley or Leo’s Fortune Bonuses: You will find deals you won’t believe! Rent a car with Skyscanner => The best way to rent a car! Yes, Skyscanner has a rent-a-car section and you will find the best deals ever on their website, highly recommended To be understood Much more efficient than sign language! Icoon => If you don’t speak the language and the locals don’t speak yours, this app is for you! Hundreds of icons to fulfill your basic needs: find a hotel, a hospital, a bank, ask for an extra blanket etc… Hitchhiking => hitchwiki.org Best resource for hitchhiking, the best spots in every major city, tips and tricks, highly practical
https://medium.com/@fabienmathieu_62894/best-travel-apps-to-roam-the-earth-12c1d81ea52a
['Fabien Mathieu']
2020-09-01 08:46:26.165000+00:00
['Technology', 'Travel', 'Efficiency', 'Apps', 'Smartphones']
2,354
Blockchainable Industries
Yes, I am aware that ‘blockchainable’ isn’t a real word…but what fun is blogging if you can’t make up your own words…? This content was originally part of Blockchain Questions you were too shy to ask, but edited due to brevity concerns. It attempts to provide an overview of the disruptive nature of Blockchain technologies and the industries that will be most affected by it. (In addition to blockchain, if you are specifically interested in cryptocurrencies, check out ‘What is Bitcoin’s true valuation’?) The key question to ask Not all distributed data problems require a blockchain solution. Relational databases, Enterprise Messaging Systems and several existing technologies already address a lot of problems that are currently being proposed as ‘blockchainable’. A hallmark of whether blockchain technology can be ‘applied to a problem domain’ or not, revolves around a simple question: Does your use case require that all parties see the same set of data at the same time ? (This can be called the ‘data consensus (trustful data consensus)’ requirement). As a simple example, if you are one of many suppliers of mangoes to HEB (Kroger for those that don’t live in the great state of Texas), and HEB claims that one batch of mangoes is ‘infected’, your first defensive response would be ‘show me the mango trail’. You (and every other mango provider) would need to see the ‘trail’ for that particular batch to ensure that it did (or did not) belong to your shipment. In order for this audit trail to be presented to all parties, you would need an assurance that: a) Everyone was seeing the exact same set of data ( ‘all sets of eyes on the data at the same time’) b) No part of the data, anywhere along it’s entire history, had ever been tampered with. A blockchain provides the two guarantees listed above by using publicly visible data in conjunction with a variety of consensus mechanisms (Proofs of Stake, delegated PoS, Proofs of Work). However, while a) is still possible without the use of blockchain technologies, it turns out that b) is next to impossible without blockchains. To solve a) (all users see the same set of data at the same time), consider a simple web UI built around a centralized database that has stored every shipment from every supplier. The web UI is accessible to everyone at the same time; either through a browser or heck, you can even get everyone in the same room and show them the data displayed in the web app. However, there is no guarantee, except the one that you provide, that the data wasn’t tampered with, either while storing it or while transmitting it (to your browser, for example). There is also no easy visibility into the entire audit trail of that data (maybe for DBAs, but potentially not for public consumption). Blockchain technology is unique in that each mango transaction that ever occurred shares the following attributes: An accurate and verifiable transaction trail that is publicly visible to all parties. The trail extends all the way back to the origin of the shipment. This trail is based on cryptographic principles and is immune to counterfeiting or spamming attempts. Enforcement of legal and financial contracts without any intermediaries (people). For example, HEB releases payment automatically to the supplier on verifying (through the blockchain), receipt of the shipment. There was no human ‘checking off a list’, there was no warehouse supervisor or any other intermediary involved in the release of the funds. 
Just as there are no humans in the middle, there are no centralized servers. That is, the entire database is stored in decentralized nodes, akin to bitcoin network nodes. This network is maintained by the ‘bookkeepers’, who are compensated in the blockchain’s native currency. This leads to a ‘self supporting’ economic system — no need to buy expensive server farms and hire a team of DBAs! Equity Markets Back in 2015, Nasdaq realized the need to do away with paper stock issued certificates. For Pre-IPOs, rather than provide paper certifications, Nasdaq launched a project called LINQ. LINQ is an example of a digital representation of a physical asset; in this case a piece of paper. LINQ issues pre-IPO stocks digitally and stores them on a blockchain ledger, thereby avoiding any risk of tampering. Note that this isn’t just a digital version of a piece of paper. This IS the piece of paper. There is no further need for the piece of paper since you have digitized the underlying asset. This has a lot of implications; simply put, if you tore your piece of paper in half, it would not represent half a stock, correct? However, once digitized, that is EXACTLY what you can do, you can split your digital asset in half — and it represents EXACTLY half of the stock! Real Estate — Digitization of Assets Think about your land deed. It is a paper document. If you wanted to gift half your land to your son and the other half to your daughter, you would have to go through a complicated legal process, because the asset representing your land (the deed) was not digitized. If it were truly digitized, the entire land would be represented in binary (just like bitcoin), and you could simply send half a ‘landcoin’ to your son and half to your daughter. No other paperwork required! Such is the power of digitizing assets and storing them on the blockchain. Startups such as Bankex and Polymath are working on Proof of Asset type of protocols. Utilities — Power Generation and Distribution Siemens micro grid is an effort to turn rooftop solar panels into producers and distributors of electric power. Blockchain based ledgers will track each such power transaction. The microgrid example is but one of many IoT device based data captures that will be stored on a decentralized ledger. Several startups including Walton Chain and VeChain as well as larger industry leaders, including IBM and Hyundai, are investing heavily into IoT on the blockchain technologies. Healthcare and Medical Insurance Patients are required to fill out multiple paper forms, which are scanned, then shared with doctors across hospitals. In all the transformation and transmission, data often gets lost, or is incorrectly associated. Claims processing takes forever and costs insurance companies more than 20% of their total operating costs. In a blockchain-based insurance market, this data could reside with each user (self sovereign data) Imagine for a moment that all this patient data was stored with the patient herself. The patient’s ‘wallet’ would consist of public health information (age, gender…) and private health information (conditions, medications etc.). The patient could provide the doctor with a key to their wallet disclosing the information pertinent to the doctor. The doctor would then recommend a procedure by signing a document, using their own wallet key. The insurance company could easily validate that a REAL doctor (doctor’s cryptographic key) saw YOUR medical data (your wallet key) and recommended a specific procedure! 
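As a rough illustration of that sign-and-verify flow, here is a minimal Python sketch using the third-party cryptography package; the payload and key handling are invented for the example, and a real system would anchor the signed record on a chain rather than keep it in memory:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The doctor holds a private key; its public half is known to the insurer.
doctor_key = Ed25519PrivateKey.generate()
doctor_public = doctor_key.public_key()

# The doctor signs a recommendation tied to the patient's wallet address.
recommendation = b"patient_wallet=0xABC123 procedure=MRI-spine"
signature = doctor_key.sign(recommendation)

# The insurer checks that a real doctor signed exactly this record.
try:
    doctor_public.verify(signature, recommendation)
    print("Recommendation verified, release payment")
except InvalidSignature:
    print("Tampered or forged recommendation, reject claim")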
Insurance Markets Everything that applied to the medical insurance scenario above can be applied to auto insurance and other forms of insurance as easily. Dating, Social Media and The Identity Problem The biggest issue in all social media is that of identity. How do you tell whether a profile is real, doctored or simply a bot? Blockchain based solutions to this are straightforward. A variety of Oracles could be used to verify identity. The simplest is a live photograph and facial recognition — which could serve as the primary Oracle. Secondary oracles could be linked facebook accounts. If all your links are real people, with verified identities, chances are that you are not a bot. Once verified, your verified identity can be stored on a public ledger, available in lieu of a Driver’s License or other paper ID. Several crypto projects are tackling the identity problem. My favorites are lifeId, uPort and of course, my own solution to the identity problem, Chained Identity (details forthcoming). Finance and Currency Trading Obviously, crypto currencies, as we know them, are the primary success story in the blockchain world. From time to time, I meet folks who still say: I am into blockchain, but not really into crypto….. That’s like saying, I’m into V8 engines , but not really into cars…. Cryptocurrencies have led to some of the biggest advances in blockchain technology. The list includes sidechains, lightning network and payment channels, Segregated Witness, Tangle, Hashgraphs….To say that you follow blockchain but not crypto is saying that you are not abreast of some of the most noteworthy advances in blockchain technology. And while crypto enthusiasts have already labeled bitcoin as ‘proof that it works’, there are still those on the fence about whether or not bitcoin constitutes a real currency. Microsoft, Expedia, Whole Foods and several large online e-tailers already accept bitcoin as payment. Yes, you can buy your next Enterprise SQL Server or your next Quinoa cereal, all using bitcoin! In my native Austin, some of the largest real estate deals in 2017 took place in bitcoin. Several countries are already launching their own cryptocurrencies as alternatives, maybe even replacements, for their traditional paper currency. A soccer player recently preferred to be purchased in BTC instead of Euros. So, for a lot of real world users, it already is a currency. It may also replace Gold as a store of value, once the price of bitcoin stabilizes somewhat. Entire Countries on Blockchain? Smaller countries such as Estonia have adopted blockchain and crypto-currencies en-masses. Medical records, government contracts, real estate deeds — just about everything in Estonia happens on the blockchain. Estonia is now known as the blockchain nation and several smaller European nations are headed in the same direction. Some countries (Sweden noticeably), are also considering replacing their paper currency with a crypto version. How Blockchain is Tackling Corruption By eliminating potential middlemen, who are at the root of most corruption scandals, here are some ways in which Blockchain is reducing corruption. The transit of goods and services, would be transparent and captured on the chain, so as to avoid any misuse. The flow of charity donations, development funds and other monies to third world countries can be traced all the way to the local, point of disbursal. There would not be a way for funds to be siphoned off, without being reflected in the underlying chain. 
Campaign contributions for politicians could be traced to their original sources just by following the blockchain ledger. There would be no way to move ‘dark money’ or make under-the-table transactions. The illicit sale of weaponry from one nation to another could be tracked. The passage of a weapon, from a manufacturing facility in the U.S. all the way to rebel forces in a rogue nation, could be traced on an open blockchain ledger. Novel Blockchain Uses, Digital Passports and more… While IoT, Currencies, Equity Markets and Insurance are the most talked about blockchainable industries, there are several novel use cases that are being worked on as we speak: — A Blockchain for Gun License Control — A Blockchain for research and copyright protection — And my favorite, a digital passport (stored on the blockchain), which could, potentially, replace paper passports! While all of these novel use cases will require initial manual intervention and validation, in the long run they can be disruptive (imagine the entire passport industry ecosystem going out of business due to digital passports). Where blockchain is not a good fit Centralized databases have earned their spot in the I.T. industry. This is due to their ACID transactional abilities: being able to take a set of distinct operations and group them together as one atomic unit. For example, as a user confirms their online order, the ‘inventory’ of the item should automatically decrement by 1. These two operations need to go hand in hand; conversely, if the user cancels their order, the inventory should automatically increment by 1. If there were any ‘gap’ between these two operations, another order for the same item could be placed, leading to an inconsistent inventory. Relational databases are experts at making sure that such ‘grouped operations’ either pass as a group or fail as a group. If the order placement fails, then the inventory change also fails — simultaneously! Blockchain ledgers, with their built-in delays, are not suited for such ‘grouped operations’. In fact, any single operation can take up to an hour to reach consensus (confirmation) on the bitcoin network. There are speedier workarounds such as the lightning network and payment channels, but there is still no possibility of matching the ACID capabilities of relational databases. Summary, and ushering in ‘User Ownership’ of Data A key criterion is the Consensus Question: ‘Does the same set of data need to be visible to the same set of eyes? And is there a mechanism to trustfully provide this visibility to everyone, without any tampering?’ In addition to solving the ‘all eyes on the same data at the same time’ problem, a blockchain excels at data ownership by users (e.g. mango suppliers). Users can take ownership of their own data through the use of digital signatures. If users can be ‘nudged’ to own and maintain their own data (securely, using encryption), this ushers in a whole new economy around the data (more specifically, around the transfer of value that involves that data). So, to those still wondering if blockchain will revolutionize the world as we know it: it seems to be already doing that, perhaps on a larger scale than anyone imagined. Next Steps? Is your company starting an internal blockchain initiative?
If so, you will not want to miss this specialized blockchain presentation, with content created specifically for decision makers (CIOs, CEOs and startup ecosystems) Free Newsletter — Sign up for our Emerging Tech Strategies Newsletter(free) — The latest on Containerization Strategy, Public Cloud Strategy and Blockchain Strategy Also Read Blockchain Questions you were too shy to ask Public Key Infrastructure (read this post to understand why PKI was as key an enabler for bitcoin, as was blockchain technology)
https://medium.com/blockchain-novel-use-cases-and-adoption/blockchainable-industries-790a223248e6
['Anuj Varma', 'Physicist', 'Technology Architect']
2019-08-26 00:53:15.253000+00:00
['Cryptocurrency', 'Identity', 'Blockchain Technology', 'Blockchain', 'Insurance']
2,355
The AWS Shell
Anyone developing applications and infrastructure in AWS will at some point make use of the AWS Command Line Interface (CLI), either interactively at a shell command line, or by integrating the CLI into shell scripts. While it is a powerful avenue to access, create and manage AWS resources, it can get cumbersome to remember all of the possible commands and arguments for each of the services needed. We have relatively simple commands like $ aws ec2 describe-subnets which doesn’t need any arguments to retrieve a list of subnets for the default VPC in your account. On the other hand, there are a large number of CLI commands which require one or more arguments to get a response and the data you are interested in. The AWS Shell is a GitHub project which creates another interactive interface, which is capable of guiding what you need to do next for the command. aws-shell is a python based interface, easily installed using pip. pip install aws-shell After installing aws-shell, the first execution takes a little longer as the autocomplete index is built for all of the AWS commands. The autocomplete index is important as we shall see in a moment. In addition to the autocomplete index, a complete documentation set is also indexed and displayed at various times. Before you can use the shell to access AWS resources, you must configure your AWS access and secret key in the same manner as you would for the AWS CLI. Start aws-shell, and enter configure as the command. aws> configure AWS Access Key ID [****************NEWP]: AWS Secret Access Key [****************w7NK]: Default region name [us-east-1]: Default output format [json]: aws> All of the commands you would normally use in the CLI are available in aws-shell, except no more typing that aws command, and you get autocomplete to help you execute the command successfully. Profile Support One thing I find tedious about the CLI is using profiles to change which access and secret key I am using. This is much simpler to take advantage of in aws-shell. First, we need to set up a profile if we don’t already have one. Let’s look at what we have configured already (output has been formatted to fit the view). aws> configure list Name Value Type Location ---- ----- ---- -------- profile <not set> None None access_key ****NEWP shared-credentials-file secret_key ****w7NK shared-credentials-file region us-east-1 config-file ~/.aws/config Now, let’s add a profile called test, and then list the credentials we have configured: aws> configure --profile test AWS Access Key ID [None]: ........ AWS Secret Access Key [None]: ........ Default region name [None]: us-east-2 Default output format [None]: json aws> configure list Name Value Type Location ---- ----- ---- -------- profile <not set> None None access_key *****NEWP shared-credentials-file secret_key *****w7NK shared-credentials-file region us-east-1 config-file ~/.aws/config What? Where is our new profile? To see the new credentials, we have to specify the profile argument. aws> configure list --profile test Name Value Type Location ---- ----- ---- -------- profile <not set> test manual --profile access_key *****YAMY shared-credentials-file secret_key *****C7iv shared-credentials-file region us-east-2 config-file ~/.aws/config aws> Just as we can use this profile with the AWS CLI like: $ aws ec2 describe-instances --profile test which gets tedious very quickly, we can set the profile we want to use in the aws-shell. 
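As an aside, the same named profile can also be picked up from Python scripts; here is a minimal boto3 sketch (assuming boto3 is installed and the test profile exists in your AWS credentials file), equivalent to aws ec2 describe-subnets --profile test:

import boto3

# Use the named profile instead of the default credentials.
session = boto3.Session(profile_name="test")
ec2 = session.client("ec2")

# List subnet IDs and CIDR blocks for the default VPC region.
response = ec2.describe_subnets()
for subnet in response["Subnets"]:
    print(subnet["SubnetId"], subnet["CidrBlock"])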
Using Profiles With your profile created, you can either start aws-shell with a profile definition using the command $ aws-shell --profile test Alternatively, you can also set and change your profile from within the aws-shell. aws> .profile Current shell profile: no profile configured You can change profiles using: .profile profile-name aws> .profile test Current shell profile changed to: test aws> .profile Current shell profile: test aws> As you know from working with the AWS CLI, changing profiles changes the access and secret key used to execute the commands in the associated account. Using the aws-shell The aws-shell takes over the terminal window, and displays some key sequences at the bottom, which can be toggled to suit your preference. The options which can be toggled and the key sequences are: F2 — Turn “fuzzy” on or off F3 — Change the key layout between emacs and vi F4 — Multi-column — provide command and sub-command hints in one or two columns F5 — Turn Help on or off F9 — set the focus F10 — exit aws-shell Fuzzy Search “Fuzzy” refers to fuzzy searching for the commands you type. This means you can get to the command you want without typing the full name. For example, we can type EC2 drio, and aws-shell shows ec2 describe-reserved-instances-offerings as the first option as drio are the first letter of each of the words in the command. Similarly, typing r53 shows the list of Route53 commands. Pressing the ‘F2’ key to turn off fuzzy searching, means you must type the command, sub-command, and options exactly. This feature is best left enabled. Keys This alters the key bindings used by aws-shell. The choices are vi and Emacs Multi-Column When aws-shell shows the list of commands, you can control if it is a single or multi-column list. It is a personal preference, but using multi-column with commands which have a long list of sub-commands can make it easier to find what you are looking for. Help By default, aws-shell displays help on the command and sub-command as you type them. If you find this distracting, you can disable the help display by pressing F5. Working in the aws-shell Aside from several features specific to the aws-shell, executing commands is like working in the AWS CLI, with the benefit of not having to remember precisely the name of the commands, sub-commands, options, etc. This can be a time saver. There are several other useful commands offered by aws-shell, called dot commands as they are prefixed by a . before the command. The .profile command allows changing the profile, meaning the access and secret keys used to execute the CLI commands. We saw this command earlier in this article. It is possible to change the working directory using the .cd command. aws> .cd invalid syntax, must be: .cd dirname aws> .cd ~ aws> !pwd /Users/roberthare aws> .cd /tmp aws> !pwd /private/tmp aws> It isn’t possible to see your current directory using a dot command, but this leads to the next feature, executing shell commands directly within aws-shell by prefixing the command with a !. Not only can we execute arbitrary shell commands within aws-shell, but we can use pipes (|), to send the output of the aws-shell command to a shell. aws> ec2 describe-subnets --output table | grep CidrBlock || CidrBlock | 172.31.48.0/20 || || CidrBlock | 172.31.80.0/20 || aws-shell keeps a history of all commands executed in the file ~/.aws/shell/history, so you can see what commands you have executed. 
There is no history command per se in the aws-shell, but you can take advantage of another feature to see and interact with the list. The .edit command retrieves your shell history in an editor, allowing you to both view your command history, and create a shell script from the aws-shell commands. The last dot commands are .exit and .quit, which have the same effect as pressing the F10 key, that of ending your aws-shell session. Conclusion If you spend any amount of time interacting with the AWS CLI, then you know how tedious it is always having to type those three extra letters. It doesn’t sound like a big deal, but if you are like me, occasionally you forget the “aws” part of that long command line. The aws-shell makes it simpler to interact with the AWS CLI, especially with the dynamic display of sub-commands options. References The AWS Shell The AWS CLI The AWS CLI Command Interface About the Author Chris is a highly-skilled Information Technology AWS Cloud, Training and Security Professional bringing cloud, security, training and process engineering leadership to simplify and deliver high-quality products. He is the co-author of more than seven books and author of more than 70 articles and book chapters in technical, management and information security publications. His extensive technology, information security, and training experience makes him a key resource who can help companies through technical challenges. Copyright This article is Copyright © 2020, Chris Hare.
https://labrlearning.medium.com/the-aws-shell-1792361a0c89
['Chris Hare']
2020-01-14 05:16:49.005000+00:00
['Aws Cli', 'Technology', 'Python', 'System Administration', 'AWS']
2,356
Games on the blockchain move into a new era
Gods Unchained, by Immutable — (October 2019) For two years gaming on the blockchain was nothing more than creating new tokens and trading them. Two years ago Cryptokitties was the most popular decentralized game in crypto. Ever since that time blockchain games have been stuck at breeding and trading collectible items. Two years later blockchain games have finally evolved. Games provide actual gameplay, network can handle serious amounts of player action and these new games just look so much better. Gods Unchained is one of the biggest blockchain games of the moment, and its leading blockchain games into a new era. It all started with Cryptokitties When developer Axiom Zen launched Cryptokitties in October 2017, it was just a fun experiment with early blockchain technology. The crypto industry was at the beginning of an enormous bull run and every day the value jumped up a few percent. Cryptokitties users breed and trade unique digital kittens. The game became way too popular for something that was meant as a fun experiment. The game became a hype, and some kittens were sold for prices into six figures. A genesis kitten was sold for 117 thousand dollars, while many other cats were also sold at car values. This is the news that hit mainstream media, but at the same time the increased hype came with even more traffic. Ethereum couldn’t handle the kitties By December there were so many transactions, that the game was clogging the entire Ethereum network. On December 4th Cryptokitties was responsible for almost 12 percent of all transactions on the Ethereum network. The amount of pending transactions on the network had multiplied almost ten times within just a few days. It caused a fear that the entire Ethereum blockchain would fail, even before it could fully live up to its promise. The Cryptokitties team worked with Metamask and other Ethereum developers to come up with quick solutions for scaling the game. Miners had already increased their fees, and as a result the birthing fee for a kitten doubled to 0.002 ETH. At the same time the developers tried to communicate more clearly about waiting times, network congestion and transaction fees. It became very clear that scale was the major challenge for Ethereum developers, if they ever wanted to reach true mainstream adoption. Sending a few ETH, BTC or LTC is fun and can provide traders some good revenue. But trading is not the same as mainstream adoption. When millions of people need to pay their groceries, phone bills and cinema tickets using cryptocurrencies, the network needs to be able to deal with millions of transactions per minute. Decentralized platforms need to deliver the speed and scale from the most popular internet applications. Scaling solutions are a driving force towards mainstream adoption of blockchain technology and cryptocurrency payments. Cryptokitties VS Gods Unchained Cryptokitties clogged the Ethereum network with its 4.7 million transactions. Game studio Immutable transferred six million cards in just three days, but the Ethereum network wasn’t impressed. Nobody noticed that the blockchain has seen more digital collectibles transferred in history. Fees didn’t go up. Waiting times didn’t increase. The blockchain games revolution set in motion by Gods Unchained started unnoticed. Only those who dive into blockchain analytics would’ve witnessed these record breaking transactions. These were possible because of Immutable putting multiple token transfers into one batch. 
As a result they minted six million cards and transferred them to their owners without putting pressure on the blockchain. The only moment Gods Unchained caused slight trouble was when it launched its open marketplace on November 22nd. According to Coin Metrics some blocks required $30 of fees. After a few hours things went back to regular sub-$1 levels. Gamers can trade Gods Unchained cards on the in-game marketplace and on OpenSea. Ever since the marketplace opened, over 52 thousand cards have been sold. The most expensive cards go for over $1000, while most cards are just a few dollars. In less than a week the trading card game had over 2100 ETH in trading volume. As a result it holds the number one spot on the OpenSea marketplace. The biggest sale on the marketplace is a mythic card worth 31 thousand dollars! Even though the Ethereum blockchain still has lots of work to do when it comes to scaling, it's moving in the right direction. Gods Unchained is already a bigger product than Cryptokitties ever was, even though its user base is much smaller. Currently approximately ten thousand addresses own Gods Unchained cards, while almost ninety thousand addresses hold kitties. Gods Unchained has only just launched. Immutable wants to attract millions of gamers, while currently attracting just a few thousand. The true blockchain revolution for Gods Unchained hasn't even started yet. However, to deal with millions of users, something will need to change.
https://medium.com/swlh/games-on-the-blockchain-move-into-a-new-era-297cb95d5cdd
['Robert Hoogendoorn']
2019-12-08 06:07:23.529000+00:00
['Tokenization', 'Blockchain Technology', 'Blockchain', 'Ethereum', 'Blockchain Gaming']
2,357
The Age of Automation Disruption-Benefits of Using AI Technology
The world has been moving fast in the direction of Digital Transformation; understanding the requirements of the market in advance and developing a robust system that is predictive and scalable on demand is the need of the hour. Continuous testing is inevitable and accelerated delivery is the key. Artificial Intelligence in Quality Assurance can help achieve that. Benefits of Using AI Technology Better Test Planning For QA experts, a lot of time goes into planning test case scenarios that let them launch the product with confidence, and the same process is followed every time a new version is released. AI test automation tools save time by crawling every screen and generating and executing the test scenarios. Researched Build Release Artificial Intelligence in QA makes it possible to examine similar apps, their contribution to the market and what made them successful. Understanding the market requirements helps build test cases that ensure the app doesn't break when specific goals are introduced and executed. Timelines Incorporating AI-driven disruption into the testing process helps developers speed up the development of the product. AI is capable of sorting through log files, scanning code and detecting errors within seconds; manual scanning takes significantly more time, and AI does not burn out, so it yields better results. AI can adapt to new functions and can immediately identify bugs arising out of a code change. The role of AI in Quality Assurance and testing will also be seen in the evolution of testing tools: tests will be enhanced with AI-powered visual verification, which can produce a range of different outcomes. Testers can use AI in Quality Assurance to accomplish a variety of tasks.
https://medium.com/@techfireflys/the-age-of-automation-disruption-benefits-of-using-ai-technology-e0b118771daa
[]
2020-12-23 05:31:26.074000+00:00
['Digital Transformation', 'Quality Assurance', 'Testing Tools', 'Ai Technology', 'Automation Disruption']
2,358
How the Blockchain Can Solve Amazon’s 400 Million Dollar Problem
Counterfeits in the Online Marketplace Photo by Wei Pan on Unsplash Counterfeiting, defined as [making an] imitation of something else with [the] intent to deceive, has become prevalent in today’s online marketplace. This is because there are so many fake products added every minute that large e-commerce retailers, such as Amazon, are unable to keep up and verify their authenticity. From 2018 to 2019, there has been a 40% increase in counterfeits online. For small companies, this is a large problem. When S’well, one of the most recognized brands in the world for their reusable water bottles, was just a start-up running out of an apartment, they too faced counterfeit issues. Here is an excerpt from a Medium article written by Stephanie Clifford that discusses how Sarah Kauss, the founder of S’well, found her first copycat: Kauss and her [partner] were heading to S’well’s factory in China when they stopped for a couple days’ vacation in Hong Kong. Kauss saw there was a trade show and insisted on stopping by. When she arrived, it appeared that S’well had a significant presence at the show, with bottles displayed in a case and a ribbon flaunting an award it had apparently won. “A man came over to me and gave me his business card, very properly, and said he was from S’well,” she says. His card had S’well’s logo on it, with the little TM for ”trademark.” The problem: Kauss at that point was running S’well from her apartment. It had no presence in Asia. Nor did it have a sales rep there. And it had no employees besides Kauss. She had barely gotten the company off the ground, and her bottles were being knocked off. In a globalized marketplace, it proves difficult to enforce intellectual property rights when someone can sell imitations overseas with a low risk of being punished. Kauss’s experience is a clear example of how widespread the knockoff epidemic has become. Why This Is a Problem for Amazon Photo by Jp Valery on Unsplash “When a fellow says it ain’t the money but the principle of the thing, it’s the money.” — Artemus Ward Amazon desperately wants to combat its counterfeiting crisis for one major reason: vendors are leaving them and Amazon is losing money as a result. In the past month, Nike, one of Amazon’s most popular vendors, decided to leave the platform because fake listings were superseding their content on search results. On e-commerce platforms, SEO is key for vendors who want to grab eyeballs and drive sales. With fake listings on Amazon, the platform breeds a sense of distrust and an aversion to building brand loyalty. In the words of Jeffries analyst Randy Konik, “Amazon is just a traffic aggregator that reduces friction in consumption… it doesn’t build communities.” On top of all this, Amazon itself has been accused of knocking off popular products. Last year, home and kitchen brand Williams-Sonoma sued Amazon by claiming the tech giant’s private-label furniture division copied one of their chair designs.
https://medium.com/swlh/how-the-blockchain-can-solve-amazons-400-million-dollar-problem-7a266c2c211e
['Pronoy Chaudhuri']
2019-12-06 21:01:03.266000+00:00
['Money', 'Business', 'Ecommerce', 'Blockchain', 'Technology']
2,359
Everything you always wanted to know about quantum-inspired algorithms
By Juan Miguel Arrazola Here’s an idea for the next Hollywood blockbuster. The setting: A global arms race is underway. World governments and corporations have recruited an elite group of scientists, supported by millions of dollars in funding, to develop an advanced technology that will change the world as we know it. The stakes are high: the winners will gain immense wealth and power. Who will emerge victorious? The plot: Somewhere in Texas, a university professor is suspicious: could world dominance be obtained using cleverness instead of these sophisticated machines? He meets an 18-year-old student interested in mathematics, and tasks her with this quest. After working alone for years, she sends global shockwaves by showing that what hundreds of scientists were aiming to achieve, she could do all along using only her laptop and her brain. Companies go bankrupt, governments topple, and researchers are left looking for new jobs. Truth is stranger than fiction. The storyline above is inspired by a true story that captured the attention of the quantum computing community: the “dequantization” of quantum algorithms for linear algebra, ignited by Ewin Tang’s algorithm for recommendation systems. See Scott Aaronson’s blog post for a portrayal of the true events. It truly made a good headline. But in the aftermath of the events, there was a state of relative confusion. How do these new quantum-inspired algorithms work? Are they any good in practice? Are quantum algorithms for linear algebra dead? The aim of this post, which accompanies our technical paper, is to answer these questions. If you’re short on time, here’s the executive summary: In practice, the performance of quantum-inspired algorithms is better than their complexity bounds would suggest. They can perform reasonably well compared to existing classical techniques, but only provided that stringent conditions are met: low rank, low condition number, and extremely large dimension of the input matrix. These conditions almost never occur simultaneously in practice. By contrast, there exist many practical datasets for linear algebra problems that are sparse and high-rank, precisely the type that can be handled by quantum algorithms. Whether quantum algorithms for linear algebra will ever be impactful in practice remains to be seen, but if they fail to do so, it almost certainly won’t be because they are outperformed by these quantum-inspired algorithms. If you’re curious to understand quantum-inspired algorithms better, don’t go anywhere! The rest of this blog post is for you. We’ll give a short and simple explanation of the algorithms; they turn out to be quite easy to understand. We’ll also comment on their performance: how do they improve on previous classical methods and when are these improvements relevant. In the end, we’ll give you a quick lesson on how to use our publicly-available code to implement the algorithms yourself. Let’s start! How do quantum-inspired algorithms work? We’ll focus on the algorithm for solving linear systems of equations. A very similar description applies to the algorithm for recommendation systems. We are given an input matrix A and vector b, and the goal is to solve the linear system of equations Ax=b. Quantum-inspired algorithms tackle this problem in three main steps: Compute an approximate singular value decomposition of A using the Frieze-Kannan-Vempala algorithm. The outputs of the algorithm are approximations to the singular values and singular vectors of A. 
Use sampling techniques to estimate the coefficients of the solution vector x in the basis of singular vectors of A. Sample entries of the solution vector in such a way that large entries appear with high probability. That's really all there is to it. For recommendation systems and other linear algebra problems, the only differences are what we mean by solution vector and how its coefficients are defined. Let's take a look at each step in more detail. The FKV algorithm This is the core, the essence, the magic, of quantum-inspired algorithms. The idea is simple: performing a singular value decomposition of a large matrix is very hard work. Could we somehow get away with doing it for a much smaller matrix? The result of FKV is to show that such laziness can indeed be rewarded if the input matrix has low rank. All we have to do is randomly select a small number of rows and columns of a large matrix and rescale them by multiplying by an appropriate constant to build a smaller matrix. If we perform a singular value decomposition of the smaller matrix, it will directly give us an approximation of the singular values and a method to compute approximate singular vectors of the larger input matrix. The trick is that rows and columns should be sampled in such a way that the ones with the largest entries are selected with high probability. If the rank of the input matrix is low, the matrix we build can be much, much smaller, so performing operations on it will take a lot less time. The price to pay is that we obtain approximations, not exact values. That's often a worthwhile tradeoff. Crucially, this technique only works if the original matrix has low rank; otherwise too much information would be lost in shrinking to the smaller matrix. Below is an illustration of the method. In this example, we select four rows at random, rescale them, and stack them together to form a new, fat matrix (more columns than rows). From this we select four columns at random and similarly rescale and stack them to form the final smaller matrix. Schematic illustration of the FKV algorithm for approximate SVDs. We randomly select rows of an input matrix A, rescale them, and stack them together to form a new matrix R. We then sample columns of R, rescale and join them together to form a smaller matrix C, whose singular values approximate those of A. The novelty of quantum-inspired algorithms lies in the two remaining steps. In the original FKV algorithm, the approximate singular values and vectors can be used to compute the coefficients and the entire solution vector in linear time, i.e., very quickly. However, for gigantic, truly enormous matrices, linear time is not fast enough. Quantum-inspired algorithms come to the rescue by providing a method to estimate coefficients and sample from the solution vector in sublinear time. This rescue only pays off for matrices of extremely large dimension (where linear time is too slow) that also have low rank.
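The subsampling step is easy to prototype. Below is a minimal NumPy sketch of the FKV-style row/column sampling just described; it is an illustration written for this summary, not the authors' implementation, and the specific choices here (sampling probabilities proportional to squared norms, the function name, and the matrix sizes) are assumptions on my part.

```python
import numpy as np

def fkv_subsample(A, r, c, rng):
    """Toy FKV-style subsampling: build a small matrix C whose singular
    values approximate those of a (low-rank) input matrix A.
    r, c are the numbers of sampled rows and columns (illustrative choices)."""
    # Sample rows with probability proportional to their squared norms, then rescale.
    row_p = np.sum(A**2, axis=1) / np.sum(A**2)
    rows = rng.choice(A.shape[0], size=r, p=row_p)
    R = A[rows, :] / np.sqrt(r * row_p[rows, None])

    # Sample columns of R the same way, again with rescaling.
    col_p = np.sum(R**2, axis=0) / np.sum(R**2)
    cols = rng.choice(R.shape[1], size=c, p=col_p)
    return R[:, cols] / np.sqrt(c * col_p[None, cols])

# Quick check on a synthetic rank-3 matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 1000))
C = fkv_subsample(A, r=150, c=150, rng=rng)
print(np.linalg.svd(A, compute_uv=False)[:3])  # leading singular values of A
print(np.linalg.svd(C, compute_uv=False)[:3])  # should be close to the line above
```

The singular vectors of the small matrix can then be mapped back to approximate singular vectors of the original matrix, which is what the two remaining steps consume.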
Coefficient estimation There are two ways to compute the average of many values. A first method is to add all of them together and divide the result by the total number of values. An alternative approach is to select a few values at random, then compute their average using the first method. Of course, in the second case we get only an approximation of the true average, but in many cases that is good enough and can be done in significantly less time. This is the strategy of quantum-inspired algorithms: the coefficients of the solution vector are inner products between vectors, which can be expressed as the average of a collection of numbers that depend on the entries of the vectors. Instead of directly computing this average (which takes linear time) we select a few of the values at random and compute their average instead. The question is, how many values do we need to sample to get a good approximation? It turns out that the number of samples grows with the precision of the approximation, the rank, and the condition number of the input matrix. In practice, this estimation method is only faster than a direct calculation of the coefficients if (i) precision, rank, and condition number are small, and (ii) the input matrix has a colossal dimension. Sampling from the solution vector Here's a challenge. You are given a large list of numbers. The task is to sample some of them at random, with one condition: the larger the number, the more likely it is to be selected. How would you do it? Well, here's a very simple strategy. First, choose some of the numbers according to whatever rule you like, for example selecting each with equal probability. Second, from the ones you have selected, keep the large ones and discard the small ones. That's it! In practice, it pays off to be slightly more sophisticated: we keep each number with likelihood proportional to its value, such that large numbers are kept with high probability and small numbers with low probability. This is the basis of a technique called rejection sampling, which is employed in quantum-inspired algorithms to sample from the solution vectors. Once the approximate singular value decomposition has been obtained and the coefficients have been estimated, it is possible to query any entry of the solution vector in sublinear time. We achieve the final goal of the algorithm — to sample from the solution vector — by selecting these entries at random and using the rejection sampling technique described above to sample large vector entries with high probability. This step of quantum-inspired algorithms scales better than coefficient estimation, so the conditions are less stringent for it to be worthwhile. However, the price is high: we typically prefer to have a full description of the entire vector instead of a sample of its large entries. As before, using sampling instead of a direct calculation only becomes worthwhile if the input matrix has an extremely large dimension.
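Here is an equally small sketch of that rejection-sampling idea. It is a generic illustration rather than the code accompanying the paper, and weighting entries by their squared value (a natural choice when entries can be negative) is my assumption:

```python
import numpy as np

def sample_large_entries(x, n_samples, rng):
    """Toy rejection sampler: draw indices of x with probability proportional
    to x[i]**2, using only uniform proposals over the indices."""
    bound = np.max(x**2)  # acceptance bound; assumed known or easy to estimate
    out = []
    while len(out) < n_samples:
        i = rng.integers(len(x))             # propose an index uniformly at random
        if rng.random() < x[i]**2 / bound:   # keep large entries with high probability
            out.append(i)
    return np.array(out)

rng = np.random.default_rng(0)
x = np.array([0.1, -0.2, 3.0, 0.05, -2.5])
idx = sample_large_entries(x, 1000, rng)
print(np.bincount(idx, minlength=len(x)) / 1000)  # mass concentrates on entries 2 and 4
```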
Practical performance In our technical paper, we implemented and tested quantum-inspired algorithms on a variety of problems. We found that they can give good answers for matrices of dimension 2^50 and rank 3; remarkable! We also observed the decay in performance as the rank and condition number are increased, and obtained reasonably low errors when applying them to portfolio optimization and movie recommendation problems. Overall, these tests support the theoretical analysis: quantum-inspired algorithms can perform well, but only provided that stringent conditions are met: low rank, low condition number, and extremely large dimension of the input matrix. These conditions seldom occur simultaneously in practice. It is still insightful (and fun) to implement and test the algorithms. This is why we decided to make our code publicly available for anyone to use. You can find it here, on Xanadu's GitHub page. We designed the code with simplicity in mind. All you have to do is to specify the inputs and algorithm parameters. The code will then return the requested number of samples from the solution vector. Go ahead and give it a try! Thanks for reading until the end!
https://medium.com/xanaduai/everything-you-always-wanted-to-know-about-quantum-inspired-algorithms-38ee1a0e30ef
[]
2019-06-19 21:22:34.488000+00:00
['Technology', 'Quantum Computing', 'Machine Learning']
2,360
How to prepare for the MPSC exam?
How to prepare for the MPSC exam? All the information is available in Hindi: https://bit.ly/37MTuXp
https://medium.com/@barveroshan28/mpsc-%E0%A4%AA%E0%A4%B0%E0%A5%80%E0%A4%95%E0%A5%8D%E0%A4%B7%E0%A4%BE-%E0%A4%95%E0%A5%80-%E0%A4%A4%E0%A5%88%E0%A4%AF%E0%A4%BE%E0%A4%B0%E0%A5%80-%E0%A4%95%E0%A5%88%E0%A4%B8%E0%A5%87-%E0%A4%95%E0%A4%B0%E0%A5%87-101ff52af4d0
['Barve Tips']
2020-12-26 05:05:54.611000+00:00
['Education Reform', 'Education', 'Educational Technology', 'Mpsc Coaching Classes']
2,361
There is no Internet Savior: To fix the Internet’s problems, we’ll need to work together
@talks: Solutions to the Social Dilemma — A virtual discussion series How many extremely knowledgeable and relentlessly creative Internet experts does it take to unscrew a lightbulb? If that lightbulb is the Internet’s problems — quite a lot. On December 3rd, we invited three notable figures in the data privacy landscape to join us for a fascinating, even inspiring, discussion about the future of the Internet. Among these were: Enoch Liang of Andrew Yang’s Data Dividend Project (DDP). A California-based lawyer and entrepreneur, Enoch started the Data Dividend Project with Andrew Yang to help consumers collectively exercise their data rights and bargain with tech companies. Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society, Stanford Law School. A recognized expert in information privacy, Dr. King examines the public’s understanding and expectations of online privacy and the policy implications of emerging technologies. Our very own Kevin Nickels, CPO of The @ Company. For Kevin, The @ Company represents the embodiment of a passion that he has been pondering for more than a decade — technology that empowers individuals to control their digital selves. He views this as a vitally important solution needed to address some of society’s most vexing problems. In this hour-long conversation, we touched upon recently passed policy (California Prop 24), data privacy as a collective action problem, and potential technology solutions. We’ve included some highlights from our conversation below. There are no 5 easy steps to protect your online privacy According to Dr. King, articles advertising 5 easy steps to protect your online privacy are missing the point. They perpetuate the false belief that data privacy is a personal issue: If you make a few small tweaks to your lifestyle, then you can reclaim control over your data. Dr. King maintains that this couldn’t be farther from the truth. In our current Internet, individuals can’t control how they interact online. Unless they want to become “electronic hermits,” the average person must “take it or leave it,” having no choice but to relinquish their right to digital privacy. Nickels adds, “Security at the enterprise-level today is an illusion of security.” Behind every data security system, there is a human administrator that holds the keys to the database. Who’s to say that this person might not one day decide to exploit everyone’s data for nefarious purposes? Everyone has to work together “This problem is broader than all of us individually can tackle,” says Dr. King. “We need to think of other solutions that allow us to collectively wage power and gain control over data.” Enoch agrees. In his eyes, the solution to the Social Dilemma is multifaceted, much like the four legs of a chair. These four legs include: 1. Consumer awareness “Step one [to rebuilding the Internet] is realizing that you can’t control something that you don’t know about,” says Dr. King. Many people are unaware of the ways their data is mishandled or at risk. Research shows that most people are discontented when they discover the repercussions of practices like data-driven advertising or social media algorithms. Documentaries like Netflix’s The Social Dilemma are part of the movement to educate consumers about the impacts of technology. (The documentary has since then garnered criticism about its focus on the “prodigal tech bro,” but the point made on how people’s data is being used to manipulate them is well-taken.) 
The Data Dividend Project (DDP), spearheaded by Andrew Yang, also falls within this category. “Its goal,” says Liang, “is to educate consumers as to what’s happening to their data, how much money technology companies are making off of it, what rights they do have. Those individual rights are much more effective if they’re bundled into collective rights and collective action is taken.” 2. Technological solutions: Building a network of trust In an Internet that conditions people to behave in certain ways without them even realizing it, the question Nickels asked himself was, “‘How do we encourage online behavior that engenders a network of trust, where interactions are based on trust?’” The answer that he and fellow co-founders Colin Constable and Barbara Tallent landed on was the @protocol. By encouraging everyone to ask for permission before using someone else’s data, the @protocol makes it possible for people to build trust with others online. (Learn more about how this network of trust is built.) 3. Regulation and policy “Tech gets free reign for 10 or 15 years until some scandals happen and people in Congress start waking up,” says Liang, “but it’s too little, too late. Pandora’s box has already been open for 10 or 15 years.” 4. Enforcement “Law and regulations are no good on the books if there isn’t any enforcement,” says Liang. One way he sees this being upheld is through DDP, which encourages people to lay claim to their data. “If you had a Yahoo account, the settlement there was $125 million plus. If you were a resident of Illinois and had a Facebook account, the settlement there was $650 million,” Liang says. “How many of us actually laid claim to that? The average claims rate for a class actions settlement is somewhere in the range of 3–5%. DDP has been trying to help consumers navigate that claims process and help you realize that there is money out there, and you should be laying claims to it.” The talk gave us a lot to think about, especially regarding the future of the Internet. Sure, the Internet isn’t perfect, but as Internet Optimists we’re daring to believe in a better Internet. If you weren’t able to attend, no worries! Watch the full Zoom session here. — At The @ Company we are technologists, creators, and builders with one thing in common: We love the Internet. You could go so far as to call us Internet optimists. Though we acknowledge that the Internet has deep flaws, we believe that we can extract all its goodness without sacrificing our privacy, time, and control over our digital identities. We’ve committed ourselves to the creation of a more human Internet where privacy is a fundamental right and everyone owns their own data. Let’s say goodbye to the fear and paranoia caused by data breaches and unsolicited online surveillance. With the power of the @protocol, we’re resolving these long-standing issues with a spirit of exploration and fun. Learn more about us here.
https://medium.com/@atsigncompany/there-is-no-internet-savior-to-fix-the-internets-problems-we-ll-need-to-work-together-196c28023ade
['The']
2020-12-24 01:22:05.409000+00:00
['Andrew Yang', 'Internet of Things', 'Data Dividend', 'Technology', 'Data Privacy']
2,362
Connected Health is Essential to Accelerate Progress Against Cancer
by Barbara K. Rimer, DrPH, Chair, President’s Cancer Panel and Dean and Alumni Distinguished Professor, UNC Gillings School of Global Public Health In 2011, when President Obama appointed me, Owen Witte, MD, and Hill Harper, JD, to the President’s Cancer Panel, we were determined to select and frame topics for exploration that would be actionable and make a difference for the cancer community in the United States and beyond. As we viewed challenges in cancer prevention and care, we saw the great but unrealized potential of connected health, a term that captures the ways technology is changing how we manage health. We have witnessed incredible advances in technology and connectivity over the past 20 years, particularly during President Obama’s time in office. They have transformed most aspects of everyday life, perhaps most powerfully in how we manage our health. Increasingly, many of us look to the Internet when we need health or medical information. New technologies help millions and millions of people around the world monitor physical activity and diet, manage medications, and support one another in navigating cancer and other diagnoses and working toward health goals. The access we now have to unprecedented amounts of health data has changed how we make health-related decisions. All of these ways we use technology to facilitate the efficient and effective collection, flow, and use of health information, or connected health, are essential to support health management and the delivery of quality, patient-centered care, particularly for cancer. Today, the Panel released our latest report to President Obama, Improving Cancer-Related Outcomes with Connected Health, which focuses on the development and use of technologies to promote cancer prevention, enhance the experience of cancer care for patients and care teams, and accelerate progress in cancer research. The Panel is charged with monitoring the National Cancer Program — which includes all public and private activities focused on preventing, detecting, and treating cancers and on cancer survivorship — and reporting to the President of the United States on barriers to progress. It’s a huge responsibility. In workshops we held to gather input for this report, we heard all-too-common stories from patients who report being unable to access or share their own health information or get potentially harmful errors corrected in their health records. They told us about having to contact hospitals to get hard copies of records as required for cancer consultations and how difficult this process can be, especially when most of the information is contained within electronic health records as required by law. Health informatics experts described health IT systems that do not “talk” to each other, leaving health information and data critical to patient care trapped in silos. Clinicians shared their frustrations with using poorly designed electronic health record systems. From researchers, we heard about barriers to collaboration and the need for a central location to access, share, and extract knowledge from the mountains of clinical and genomic data we now have the capacity to collect. Alongside these challenges, we also heard about many exciting examples of technologies that illustrate the potential of connected health, many of which we highlight in the report. But the availability of technologies themselves is only one part of the picture. They should be user-centered — designed with care to meet the needs of people who use them. 
There is still a lot of unfinished business ahead to break down persisting technological and cultural barriers to data sharing so that all partners involved in a patient’s care have timely and equitable access to the information they need to make decisions. We also must increase substantially the number of people who have access to the Internet, particularly broadband access, which is still lacking for millions in the U.S. Collaboration among researchers, clinicians, and other stakeholders must become the norm to expedite scientific discovery and development of new methods to prevent and treat the disease. Vice President Biden has spoken articulately and forcefully about these critical needs. Better systems are within our grasp if we reach for them. Optimism for the future, about which President Obama has written so eloquently, reminds us that we need to put today’s advances to work harder tomorrow to cure disease and address the health inequities that persist in the United States. The Cancer Moonshot amplifies that optimism. It celebrates decades of public health and medical advances we’ve made against cancer and recognizes that we are entering a new, exciting era. It builds a launch pad for this next phase by bringing together the brightest minds and most committed spirits to collaborate to find new ways to prevent and treat cancer. As we further our understanding of the many diseases that comprise cancer and harness the power of emerging technologies and scientific discoveries, we can push the boundaries of what we never imagined possible. I’m grateful that the Cancer Moonshot has illuminated the important work being done by people — patients, caregivers, clinicians, researchers, advocates, health IT developers, and countless others across the globe — to eliminate cancer as we know it. Ensuring that all have the information and tools needed to accelerate the pace of progress against cancer is what connected health is all about.
https://medium.com/cancer-moonshot/connected-health-is-essential-to-accelerate-progress-against-cancer-3b833c2029f6
['President S Cancer Panel']
2016-11-15 19:18:32.344000+00:00
['Health', 'Health Technology', 'White House', 'Cancer', 'Cancer Moonshot']
2,363
Waves Enterprise platform updated
The update features a licensing scheme, a new data crawler, anchoring functionality and other critical improvements. The most recent network update from Waves Enterprise features several major enhancements. Licensing Firstly, we are introducing licensing for Waves Enterprise proprietary technologies. Any newly-created private network will be able to operate in test mode without a license until reaching block height 30,000, which roughly corresponds to a two-week period. After that, a license must be purchased to continue operation. The following types of license are available: Trial license for testing the platform and technology Commercial license Non-commercial license Mainnet license, valid only for use of technology on the Waves Enterprise mainnet. This is free but requires the node to hold a certain amount of WEST tokens in its balance. Licenses can be purchased for a one- or two-year period, as well as for indefinite use. In future, the option of renting a license for the period of use will also be added. Upon license expiration, the node will no longer be able to create transactions, but will still be able to read data. Dramatic increase in containerized smart contract speeds Version 1.2 also sees an updated protocol for communication between a node and a container holding a smart contract. The REST communication protocol has been replaced with binary gRPC, enabling an increase in contract execution speeds of up to 70 times. As a result, business processes involving an industrial load of billions of transactions per year can now be served. Data crawler Another element of the update is a new data crawler — a tool responsible for transporting data from the blockchain to another service. Thanks to a revamped engine, the new crawler works faster and is more stable and user-friendly. Detailed description of errors facilitates deployment and adjustment of the data crawler. Private data exchange in the client Waves Enterprise corporate client now fully supports working with private data within the interface. Previously, this functionality was available only through node API. Private data exchange with the blockchain can now be conducted via a convenient interface, which also facilitates the creation and editing of groups for sending/receiving private data. The private data exchange option is available only to node owners. Anchoring The update also features anchoring functionality, which enables interconnection of two or more networks, so that data written to one blockchain can be copied to others. As a result, the level of decentralization increases, thereby increasing data security. Our solution offers anchoring between Waves-like networks, such as Waves Platform and Waves Enterprise, and private networks based on Waves technology, facilitating higher transaction speeds than Bitcoin-powered anchoring solutions. Anchoring functionality can be turned on in the configuration file of any node, as well as in the Waves Enterprise client app. You can find out more about the anchoring interface here. Other improvements Version 1.2 also sees a revamped admin interface for the authorization service. User addition and account creation are now available in a single interface. Integration of user management systems has also become more convenient. In addition, the node nucleus has been optimized, including mechanisms for block creation and network layer operation. The level of resources required for maintaining the same throughput has consequently been cut in half. 
In turn, this will lead to reduced operational costs and hardware requirements. Finally, we have updated our documentation, making the versioning option available.
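For readers unfamiliar with the anchoring concept described above, the toy Python sketch below shows the general idea of periodically publishing one chain's latest block hash onto another chain so that its history can later be verified. It is a generic illustration under simplified assumptions (no consensus, signatures, or networking), not Waves Enterprise's actual implementation or API.

```python
import hashlib, json, time

def make_block(prev_hash, payload):
    """Toy block: a real chain would also carry signatures, consensus data, etc."""
    block = {"prev": prev_hash, "payload": payload, "ts": time.time()}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# A private network produces blocks with its own data...
private_chain = [make_block("0" * 64, {"tx": "private data"})]

# ...and an anchoring service periodically publishes the latest private block hash
# as an ordinary transaction on a second ("target") chain.
anchor_tx = {"type": "anchor",
             "anchored_hash": private_chain[-1]["hash"],
             "height": len(private_chain)}
target_chain = [make_block("0" * 64, anchor_tx)]

# Later, anyone can check that the private chain's history matches what was anchored.
assert target_chain[-1]["payload"]["anchored_hash"] == private_chain[-1]["hash"]
```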
https://medium.com/waves-enterprise/waves-enterprise-platform-updated-93cc09d283db
['Waves Enterprise']
2020-03-17 09:04:28.522000+00:00
['Smart Contracts', 'Blockchain Network', 'Blockchain Technology']
2,364
Build yourself a simple CRM from scratch in PHP and MySQL
Customer Relationship Management (CRM) is a system that manages customer interactions and data throughout the customer life-cycle between the customer and the company across different channels. In this tutorial, we are going to build a custom CRM in PHP, which a sales team can use to track customers through the entire sales cycle. We'll be creating a simple CRM system for salespeople to: Access their tasks View their leads Create new tasks for each lead Create new opportunities Close a sale Sales managers will be able to: Manage all customers Manage the sales team View current sales activities Download Demo Files Building Blocks of a CRM Here is a list of the essential components of the CRM: Leads: initial contacts. Accounts: information about the companies you do business with. Contact: information about the people you know and work with. Usually, one account has many contacts. Opportunities: qualified leads. Activities: tasks, meetings, phone calls, emails and any other activities that allow you to interact with customers. Sales: your sales team. Dashboard: CRM dashboards are much more than just eye candy. They should deliver key information at a glance and provide links to drill down for more details. Login: salespeople and managers have different roles in the system. Managers have access to reports and sales pipeline information. System Requirements PHP 5.3+, MySQL or MariaDB phpGrid Create CRM Database We will start by creating our custom CRM database. The main tables we will be using are: contact — contains basic customer data notes — holds the information collected from contacts by salespeople users — information about salespeople The Contact table contains basic customer information including names, company addresses, project information, and so forth. The Notes table stores all sales activity information such as meetings and phone calls. The Users table holds login information about users of the system such as usernames and passwords. Users can also have roles, such as Sales or Manager. All other tables are lookup tables that join to the three main relational database tables. contact_status — contains the contact status, such as Lead and Opportunity; each indicates a different stage in a typical sales cycle task_status — the task status can be either Pending or Completed user_status — a salesperson can be Active or Inactive todo_type — the type of a task, either Task or Meeting todo_desc — the description of a task, such as Follow Up Email, Phone Call, Conference, etc. roles — a user can be either a Sales Rep or a Manager
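To make the relationships between the three main tables and the lookup keys concrete, here is a minimal, illustrative sketch using Python's built-in sqlite3 module. The column names are simplified assumptions made for this illustration; the tutorial's real MySQL schema is the one created by install.sql in the next step.

```python
import sqlite3

# Illustrative mini-version of the CRM's three core tables (assumed column names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,   -- a salesperson or manager
    user_name TEXT NOT NULL,
    role      INTEGER NOT NULL                     -- lookup: Sales Rep or Manager
);
CREATE TABLE contact (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    name      TEXT NOT NULL,
    company   TEXT,
    status    INTEGER NOT NULL,                    -- lookup: Lead / Opportunity / Won
    sales_rep INTEGER REFERENCES users(id)         -- which salesperson owns this contact
);
CREATE TABLE notes (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    contact_id INTEGER REFERENCES contact(id),     -- each note belongs to one contact
    todo_desc  TEXT,                               -- e.g. Follow Up Email, Phone Call
    status     INTEGER                             -- lookup: Pending or Completed
);
""")
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```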
Complete Database Schema Diagram A database schema is the structure that represents the logical view of the entire database, such as its tables, views, and primary and foreign keys. A database schema includes entities and the relationships among them. It is good practice to have one primary key for each table in a relational database. A primary key is a unique identifier for each record. It can be a social security number (SSN), a vehicle identification number (VIN), or an auto-increment number, i.e., a unique number that is generated when a new record is inserted into a table. Below is the database diagram of our simple CRM. The key symbol in each table represents the table's primary key. The magnifying glass indicates a foreign key linking to another table in the database. Sometimes we call that the "lookup" table. install.sql Once you have an understanding of the database table structure, find the "install.sql" script in the db folder and use a MySQL tool such as MySQL Workbench or Sequel Pro to run the SQL script. It should create a new relational database named custom_crm and its database tables. A Side Note on ZenBase The CRM application is also one of the many application templates readily available at ZenBase (built on top of phpGrid) for anyone — with or without coding skills — to use and customize for their own needs. Setup phpGrid Our CRM contains many datagrids. The datagrid is a spreadsheet-like data table that displays rows and columns representing records and fields from the database table. The datagrid gives the end user the ability to read and write to database tables on a webpage. We use a datagrid tool from phpGrid instead of building the grids from scratch, because developing a datagrid is usually tedious and error-prone. The datagrid library will handle all internal database CRUD (Create, Read, Update, and Delete) operations for us, with better and faster results and little code. To install phpGrid, follow these steps: Unzip the phpGrid download file. Upload the unzipped phpGrid folder to your web server. Complete the installation by configuring the conf.php file. Before we begin coding, we must specify the database information in conf.php, the phpGrid configuration file. Here is an example of the database connection settings: PHPGRID_DB_HOSTNAME — web server IP or host name PHPGRID_DB_USERNAME — database user name PHPGRID_DB_PASSWORD — database password PHPGRID_DB_NAME — database name of our CRM PHPGRID_DB_TYPE — type of database PHPGRID_DB_CHARSET — always 'utf8' in MySQL To learn more about configuring phpGrid, check out the phpGrid complete installation guide. Page Template Before we start building the first page of the CRM, it is good practice to make the reusable page items, such as the header and footer. The page will comprise a header, menu, body and footer. We will start by creating a reusable page template. head.php This is a basic HTML5 template header. It includes a link to a custom stylesheet that will be created in a later step. menu.php Notice the usage of $_GET['currentPage']. Each page will set a value that highlights the name of the current page on the top menu bar. Include the following code in style.css for menu styling (minified). It will transform the above unordered list into a menu. footer.php Simple closing body and html tags.
Complete Page Template This is the complete page template. The main content will go after Section Title. CRM Main Pages Are you still with me? Good! We can now finally develop the first page in our CRM. Our CRM for the sales team members has four pages: Tasks Leads Opportunities Customers/Won Each page indicates a different stage in a typical sales cycle. Sales People Page Design Mockup Here's our CRM design mockup for the salespeople. Tasks Page When a sales team member logs in, the first page they see is a list of current tasks. As you may recall, our Notes table holds all the sales activity information. We can create a datagrid and populate it from the Notes table using phpGrid. The Tasks page main content is a datagrid. The following two lines will give us a list of tasks for the current salesperson. The first line creates a phpGrid object by passing in the SELECT SQL statement, its primary key (ID), and the name of the database table, notes. The second and final line calls the display() function to render the datagrid on the screen. Check out the basic datagrid demo for more detail. Leads Page The Leads page contains a list of current leads that the salesperson is responsible for. Each Lead can have one or many Notes. We will use the phpGrid master-detail feature for that. We also need to use set_query_filter() to display only leads (Status = 1), and only for the current salesperson. Contact status table Opportunities Page A Lead becomes an Opportunity once it is qualified. The Opportunities page is similar to the Leads page. The only difference is that the status code filtered in set_query_filter is Status = 2. Customers/Won Page Customers/Won have Status = 3. Similar to Leads and Opportunities, Customers/Won can also have Notes. That's all there is to it for salespeople in our simple CRM. Manager Dashboard Photo by Eaters Collective on Unsplash The sales manager will have access to all records in the sales pipeline as well as the ability to manage the sales team and customer data. We will have a single web page with a tabbed menu, similar to the phpGrid tabbed grid demo. Manager Dashboard Design Mockup My Sales Reps Main content Each tab represents a table in the CRM database. $_GET['gn'] will store the table name, and the datagrid is generated dynamically based on the table name passed. It's very easy to integrate jQueryUI Tabs with phpGrid. Please refer to the phpGrid Tabbed Grid demo for more information. My Sales Rep Page Since a sales manager needs to quickly find out whom a salesperson is working with, we added a detail grid $sdg populated from the contact table and linked it with the master grid. sales_rep in the contact table is the connecting key to id in the users table. Remember, the users table stores all of our salespeople's information. Screenshots CRM — Sales Screen CRM — Manager Screen Live Demo CRM Sales Rep Screen | CRM Managers screen Need to Write Even Less Code? If you are new to programming and are not yet comfortable with coding, you may want to check out ZenBase, which is built on top of phpGrid. The CRM is but one of the many application templates readily available at ZenBase for anyone — with or without coding skills — to use and customize for their own needs. Complete Source Code on GitHub
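For readers who want to see the stage filtering outside of phpGrid, here is a small follow-on to the sqlite3 schema sketch shown earlier (it reuses that conn object and the same assumed column names). It only illustrates the kind of WHERE clause that set_query_filter() expresses for each page; it is not part of the tutorial's PHP code.

```python
def contacts_by_stage(conn, sales_rep_id, status):
    """Return a salesperson's contacts at a given stage:
    1 = Lead, 2 = Opportunity, 3 = Customer/Won."""
    sql = "SELECT id, name, company FROM contact WHERE status = ? AND sales_rep = ?"
    return conn.execute(sql, (status, sales_rep_id)).fetchall()

# e.g. the Leads page for sales rep #1 -- roughly what the tutorial's
# set_query_filter("Status = 1 AND sales_rep = 1") expresses
leads = contacts_by_stage(conn, sales_rep_id=1, status=1)
```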
https://medium.com/free-code-camp/building-a-simple-crm-from-scratch-in-php-58fef061b075
[]
2018-07-26 00:03:02.916000+00:00
['PHP', 'CRM', 'Technology', 'Programming', 'Web Development']
2,365
Eugene Podkletnov’s Impulse Gravity Generator
Eugene, let's start out with a layperson's summary of the "impulse gravity generator" that you published a paper on with Dr. Giovanni Modanese. Can you explain this experiment and its goals for us? Tim, this experiment is a device I'm calling the "impulse gravity generator", which utilizes a Marx generator discharge through a superconducting emitter in a high magnetic field to create a wave in time-space with properties very close to gravitational waves. The similarities are apparent enough that we're almost positive it actually is a form of gravity. Dr. Eugene Podkletnov, research scientist at Tampere University of Technology Our experimental apparatus is complicated, but the principle is simple. We have a high-voltage discharge, typically up to 2 million volts, and sometimes as high as 5 million volts. We have a superconducting emitter, which has two layers — the first layer is made of superconducting material, the other is a normal conductor. We discharge the voltage through the emitter in the presence of a high-intensity magnetic field, which leads to a very interesting phenomenon. I can only describe it as a gravitational impulse that propagates at high speeds over large distances without losing energy. These impulses can be projected in any direction in space, and they exert a large force on any object in the path of propagation. We haven't uncovered the mechanism yet to explain how this force is generated, but we understand the engineering principles used to generate & control it. I should point out that the 'impulse gravity generator' is very different from the rotating superconductor experiments you conducted in the 1990s. In this new experiment, you're using a stationary apparatus rather than the spinning disc in your previous research, right? Yes, that's absolutely correct, but if we compare the rotating disc and the impulse gravity generator, the principle is the same because we're creating a high-density electric field in both materials. The rotating disc experiment produces a diffuse, low-intensity effect over a long period of time, whereas the impulse gravity generator creates a tightly focused, high-intensity effect for a very short period of time — usually only 60 or 70 nanoseconds. Despite the short duration of the effect, the beam we're generating is able to knock down objects in the beam's path, and under certain conditions it's even possible to make holes in brick walls and deform metals. So it's a very powerful tool. Impulse Generator: Configuration for 2 experimental variants from Podkletnov's papers. Can you describe the force you're generating in more detail? I'd like to better understand what you're doing to generate the tremendous amount of force you're describing. The force of the impulse depends entirely on the structure of the superconducting emitter and the voltage that we apply to it. Given the materials & voltages we currently have available, we can obtain large impulses capable of punching holes in thick concrete walls, and we've also been able to demonstrate deforming metal plates with a thickness of a couple of inches. The impulse deforms metal in the way that a hydraulic press might do it, but the pulse duration is very brief, so we've been discussing a system utilizing several Marx generators to give a series of impulses that we believe will improve the overall effect.
We've experimented with using the impulse generator on a variety of materials, and it's led us to another important find: the beam can hit a target over very large distances with a minimum of divergence and what appears to be zero loss in energy, even after passing through other objects in the beam path. During these experiments, we have also been trying to measure the speed of propagation for these impulses. The results were extremely interesting, and in some ways hard to believe, but they're based on experimental observations, and we're going to continue refining our experimental measurements. Marx generator: An example of an industrial high-voltage generator (wikipedia) So these impulses are all generated by Marx generator discharges — is the high-intensity electrical discharge the key to generating these impulses? It is the combination of the discharge, the magnetic field, and the use of specially prepared superconducting emitters. These impulses are very short in time — we're talking about 1 millionth of a second and shorter. The output force depends on the overall voltage as well as the voltage rise-time. The faster the voltage rises, the larger the impulse, which is what gives the impulse the ability to make holes in concrete over long distances. Let me ask about the emitter: you used a Type-II YBCO superconducting emitter, if I remember correctly. Does changing its size or shape change the beam output — maybe making it stronger or refocusing it? Well, in terms of the size of the superconducting emitter, there are some limitations. We did many tests with various emitters, and overall we found that the diameter of the superconductor shouldn't be smaller than 4 inches. The size is very important — we didn't get good results with smaller superconductors. Now to answer your question about the shape of the emitter, the superconductor can in fact have different shapes, and the projected impulse will maintain the emitter's cross-sectional shape, so that is important. Have you observed any change in the molecular structure of the target samples you tested, or did you only see the simple mechanical compression & deformation you've described so far? Discharge Chamber: A complete schematic of the experimental chamber. We didn't see any change in the molecular structure — just a large-scale deformation of the target material from the beam's force. It seems to act like a punch — it happens very rapidly, so it's close to an explosive action. I should ask whether the beam loses energy as it penetrates materials. Does it naturally decrease or diverge with distance? That's an interesting question — and to our great surprise the beam does not appear to lose energy when it meets materials. It can pass through brick or concrete walls, very thick metal plates, and all types of plastics, and it doesn't seem to lose energy at all. This is the consistent, long-term evidence from test discharges that we've performed over the last 4 years. These results seem a bit strange, but we don't believe we're breaking any natural laws. We're simply not working in a closed system, and therefore the second law of thermodynamics isn't directly applicable. In terms of action at a distance — and the dependence of the beam energy on distance — we don't have much experimental data, but what we do have is a first measurement at a distance of 1.2 kilometers without any loss in energy. A more recent experiment was conducted over a distance of 5 kilometers, and the beam penetrated through several houses made of concrete.
We did not measure any loss of energy, but after closely evaluating the calculations that we’ve made, we should get some decrease in beam-energy at distances greater than 100 kilometers. Now in this recent 5 kilometer experiment, did you notice any change in the focus of the beam? Did it widen or perhaps get smaller as it travels? If the magnetic field solenoid that we wind around the chamber is well-constructed, then it will produce a very good discharge and effectively maintains the non-divergent cross-sectional pattern of the emitter it was projected from. However, at a distance of 5 kilometers, the beam begins to lose focus– it gets a bit wider, indicating minor deviations in the shape of the impulse as it propagates. Have you been able to do efficiency calculations for the impulses, or is it still too early for that level of analysis? Well, Dr. Giovanni Modanese made some preliminary measurements that gave us the force in joules, but we did not try to make experimental predictions — we wanted to simply see the results of how different objects reacted to the action of this impulse. In terms of practical experimentation, it appears that the energy that we put into the discharge is much less than the energy that the impulse seems to generate, but I’m definitely not implying that it is some kind of over-unity device. I would suggest that we’re creating a set of special space-time conditions through interactions of the electromagnetic pulse-discharge with the Cooper pairs forming a Bose-Einstein Condensate in the superconductor. Deflection & Amplitude: Early measurements from Podkletnov’s initial research paper. So far you’ve published several scientific papers on this experiment, and I’d like to find out more about what topics you plan on publishing about in the future, and what your schedule for publication might be? We’ll try to publish on a variety of experiments in a more detailed manner, but at present we’re extremely interested in attempting to measure the interaction of the impulse beam with visible light, and we published some preliminary results about this in the journal of low-temperature physics. The details are all in my paper co-authored with Dr. Modanese, and we’re continuing this research to refine our measurements of the propagation speed of the impulse. We’re very cautious about what we write, because we don’t want to frighten the scientific community, and also we want to be absolutely sure that the results are checked and rechecked many times — but it seems that based on what we have now, and we’ve already been working for a year and a half, the speed of the impulse is much higher than the speed of light. With the parameters that we use now — using our current emitter designs and a voltage of 3 to 5 million volts, we are measuring the propagation speed of the impulse at close to 64 times the speed of light. Of course, we would like to thoroughly confirm all our measurements using as many different tools, systems and methods as possible. At present we use two atomic clocks, and we think that our measurements are precise, but we would welcome the advice of the international community. It would help to have additional input on how to measure the speed of the impulse as accurately as we can. As soon as we get a good confirmation of these results, we will try to publish all of this information. 
You’ve published a paper on these experiments titled the “Study of Light Interaction with Gravity Impulses and Measurements of the Speed of Gravity Impulses” which you coauthored with Dr. Modanese, which describes the propagation speed of the force beam. Now, in that paper, you’d done a lot of measurements on the speed and it looks like you use two different types of measurement devices, right? Right. So our main goal was first of all, to determine the speed of the gravity impulses and to study the interaction of the gravity impulses with the laser beam. There were actually two parallel experiments, and the results were pretty amazing, because first of all, we believe we were the first team to try to determine the speed of the propagation of a gravitational impulse at a distance of more than one kilometer. Atomic Clocks: An pair of synchronized clocks at NIST (wikipedia) We used very precise equipment — two synchronized rubidium atomic clocks — and we were able to determine with high precision the propagation speed of the gravitational impulses. We repeated these experiments for nearly half a year, using different voltages, targets, and experimental conditions. Nonetheless, we always had precise, consistent results, giving us a figure of 64 C, which indicates that the gravitational impulse is propagating at a speed 64 times faster than the speed of light. A propagation speed of 64 times the speed of light seems very different than the accepted speed of gravity. Do you have any thoughts on how to reconcile your experimental data with the accepted model? Well, modern astronomy feels confident that the speed of gravity is the same as the speed of light — but at the same time that does not explain or negate our measurements. In any case, I am approaching this from an experimental perspective, whereas Giovanni Modanese approaches this as a theoretical physicist. He’s very accurate in giving appropriate terms everything we observe, and he calls it a gravity-like impulse. For my part, I simply refer to it as a gravitational impulse, because after years of advances in the generation & measurement of these effects we have the ability to make objects heavier or lighter. I have no better term for it than “artificial gravity”, at least not at the moment. Does the high propagation speed of the impulse increase the margin for error in your experimental measurements? No. The error of the measurements is very low. First of all, keep in mind that a rubidium atomic clock is a very precise device to begin with. Also, we repeated the experiments many, many times. We also used very sensitive piezo-electric sensors, and we were able to describe our measurements with great precision in our paper. Obviously there is always a margin for experimental error, but in this case it should be very small. From reading your paper, if I understand it correctly you’ve used piezo-electric sensors to respond to changes in pressure — so they’re responding to a mechanical change in force. In the other experiment you appeared to be using an interferometer for measurements. Does that sound accurate? Piezo-Electric Sensors: An example of a modern printed piezo-array. Yes, that is essentially correct. In the second experiment when we used a laser beam, which of course is situated to intersect the impulse beam path. The laser beam was positioned at a small angle to the propagation line of the impulse beam, and it intersected the projection area of the gravitational impulse about 60 meters away from the emitter. 
In the region where the beams intersected, we noticed that the intensity of the laser diminished by up to 9% depending on the voltage of the discharge. This effect was admittedly small, but our measurements were quite precise, and we repeated this experiment several times. It’s a very interesting phenomenon. It seems that you’re precisely measuring force on the target — what about reactive force on the emitter? When the impulse is emitted, are you measuring an equal & opposite force on the superconductive emitter? No, there wasn’t. There was no reaction force at all. It doesn’t act according to Newton’s third law — that every force has an equal and opposite reaction. It appears as though the impulse is warping space-time, so perhaps you might call this a gravitational wave. Again, it propagates through space & interacts with normal matter, but does not lose energy at high distances — it remains collimated as it propagates. It seems that one of the milestones for this experiment would be independent replication & validation of these experiments — but the 4-inch superconducting emitter may be an obstacle to that. Do you know if those are manufactured and sold anywhere? Building effective emitters for large impulse effects is frankly a part of my professional know-how, but if you’re only talking about emitters that allow you to generate small effects, then it’s not a problem. I believe that American Superconductor can help to easily make emitters of this kind, and also there is a nice firm called “Superconductive Components” in Columbus, Ohio — they’re more or less familiar with my technology, and I think that they are up to the task of building emitter components. Keep in mind that the diameter of the disc should still not be less than 4 inches, and I’m talking about the physical structure of the ceramic itself. The structure for high-output emitters is very difficult to build, and requires a lot of experience to build correctly, so even if I provided a detailed description, it would be difficult to construct without my help. However, emitters capable of pushing a thick book away from a table are possible to construct, and they aren’t quite so complicated. I understand that part of the phenomena included the observation of something you called a “flat glow discharge”. Can you describe that in more detail? Flat Glow Discharge: A visible discharge is seen during tests. We did notice a flat, glowing discharge that came from the entire surface of the emitter, and seemed to emit further radiation towards & beyond the anode in a collimated beam. We may attempt to film this with a high-speed camera, but we don’t presently have one, so we simply rely on our eyesight. We have also noticed that the discharge repeats the cross-sectional shape of the emitter. While special equipment would be useful, it is possible to see it with your own eyes — you don’t need a camera for that. You’ve also described a transient anomalous effect that was happening behind the impulse generator. Have you been able to explore this at all? There is a certain emission of radiation on the backside of the device, and this radiation has some harmonics which make it difficult to identify its exact frequency. It’s highly transient and difficult to measure. This radiation penetrates different materials and it can damage equipment & apparatus positioned too closely. 
This is not a pleasant thing, but we noticed that the intensity of this field decreases rapidly with distance, so we avoid standing directly behind the device during operation, and it does not appear to be dangerous at all. Frankly speaking, it is not very easy to measure these additional effects. You know, when you’re dealing with millions of volts, it’s better to keep a bit of a safe distance. We also use a Faraday cage and special rubber-metal coatings to shield the radiation, because otherwise the high magnetic field strength from the discharge will erase computer hard drives and damage nearby test-equipment. Given the remarkable nature of these claims and the anomalous effects you’re describing, it would be very helpful to see photographic or video documentation of these effects. Is that something you plan to publish? It is something we are looking into. Back when we began our experiments, in Tampere in the early ‘90s, it just wasn’t common practice to make videos or photos of the equipment or the experiment. I know that it’s typical in the United States, but here in Europe it’s different. Our most recent experiments are being conducted at the Moscow Chemical Research Center, which has a no-camera policy because the whole center is a very secure facility and some of the research laboratories are closed to the general public. We have signs on the walls of the laboratory prohibiting photography, as part of the established policy for this research center. This may change: I’ve discussed the possibility of photo & video documentation with the administration and they think it might be possible, but at present I don’t have any photos or video to share with you. Eugene, thank you again for your time and for sharing all of this information. Let me close by asking you what’s next — where do you see this research going? You could call what we’re doing experimental gravity research, and it has tremendous future potential, but it’s also very complicated — in some ways perhaps even more complex than nuclear physics, and certainly not as well understood. But when you think back to the beginning of the nuclear era, there was a period in the United States when the public, industry and military were interested, and progress was made rapidly. We also have some of the same issues today that nuclear research did back then. It really isn’t possible to make a small nuclear explosion in the lab, and in the same way there are thresholds for generating our effects that make it difficult to design small, inexpensive experiments. So in that context, experimental gravity research offers big opportunities but also requires a collaborative, organized approach to study it, combining the knowledge of physicists, chemists, materials scientists & theoretical physicists. I believe it is only by working together with other experts that we will make real breakthroughs in this field, and I’m eager to do that because it’s very, very serious research and opens the door to many new possibilities.
https://medium.com/predict/eugene-podkletnovs-impulse-gravity-generator-8749bbdc8378
['Tim Ventura']
2019-12-29 07:00:08.833000+00:00
['Futurism', 'Gravity', 'Science', 'Physics', 'Technology']
2,366
Is Data Science Evil?
The evils of data science and AI We’ve come a long way since the 70s, and computers today don’t rape or mutilate people wearing black face masks. (Although, with the virus, we’ve all become a little more Darth-Vader-like). Instead, they enable the transition to a cashless economy, from which the poorest in society will be excluded (one cannot drop a credit card into a beggar’s hat, and neither would anyone like to have their card swiped by a card reader on the street). AI systems assist judges in estimating the risk posed to society by particular offenders and decide on sentences or bail based on algorithms that are kept hidden from view. Photo by Bill Oxford on Unsplash AI is at the forefront of environmental destruction, not only through the use of vast amounts of energy and carbon emissions for training computationally intensive models, but also by supporting research that will open up the Arctic to commercial exploitation, by enabling fossil-fuel companies to find new sources of oil, and by being one of the industries with the shortest time-to-obsolescence of its products. AI algorithms allow us to create new, terrifying weapons, come up with new types of terrorism, manipulate democratic processes, and endanger jobs on a global scale. Via the cultural imperialism of the US and its ubiquitous language, “US English (international)” (as my keyboard settings call it), AI-supported toys for children, based on Disney characters, have replaced traditional cultural content in many societies. Google translate supports only 109 languages out of an estimated 6500 that are in existence. None of the big AI products in use today support cultures and value systems other than vaguely Christian-based, Western ones. There are no Islamic rules of censorship in Facebook, but there is the North-American-minded censorship of women’s breasts. There are no Hindu restrictions on talking about cow’s meat on Twitter, but there is censorship of anti-vaxxers and global climate emergency deniers. Not that I am in favour of either of these dangerous and criminal groups. The point here is not to defend human stupidity, but to point out how our AI-controlled and enforced rules of public discourse have a severe cultural bias. Think of censoring the images of cows being slaughtered, or a restaurant ad showing a juicy steak. Yet, for hundreds of millions of people, such an image is as revolting, or more, than what we censor in our Western media when it touches our own sensibilities. Photo by bill wegener on Unsplash And lastly, to not let the list get too long (although it probably already is), AI causes an immense concentration of power in the hands of very few big companies that have ownership of the vast amounts of data needed to train effective models: Google, Baidu, Facebook, Twitter, Amazon, Apple, Tesla. Small companies, startups, a possible Shia-Islamic search engine, a Kazakh self-driving car, a culturally-aware South Pacific Islands social network — these are and will remain utopias, because their respective population bases are not big enough to make training AI models for them worthwhile, or even possible.
https://medium.com/the-innovation/is-data-science-evil-cb7f361fb91
['Moral Robots']
2020-09-28 18:46:11.273000+00:00
['Social Responsibility', 'Technology', 'Data Science', 'Ethics', 'Artificial Intelligence']
2,367
This extremely energy-dense battery could nearly double the range of electric vehicles
Researchers have long seen lithium-metal batteries as an ideal technology for energy storage, leveraging the lightest metal on the periodic table to deliver cells packed with energy. But researchers and companies have tried and failed for decades to produce affordable, rechargeable versions that didn’t have an unpleasant habit of catching on fire. Then earlier this year Jagdeep Singh, the chief executive of QuantumScape, claimed in an interview with The Mobilist that the heavily funded, stealth Silicon Valley company had cracked the key technical challenges. He added that VW expects to have the batteries in its cars and trucks by 2025, promising to lower the cost and boost the range of its electric vehicles. After going public in November, QuantumScape is now valued at around $20 billion, despite having no product or revenue yet (and no expectation that it will until 2024). VW has invested more than $300 million in the company and has formed a joint venture with QuantumScape to make the batteries. The company has also raised hundreds of millions from other major investors. Still, until now Singh had revealed few details about the battery, prompting researchers, competitors, and reporters to scour patent filings, investor documents, and other sources for clues about what exactly the company had accomplished — and how. In a press announcement on Tuesday, December 8, QuantumScape finally provided technical results from laboratory tests. Its technology is a partially solid-state battery, meaning that it uses a solid electrolyte instead of the liquid that most batteries rely on to promote the movement of charged atoms through the device. Numerous researchers and companies are exploring solid-state technology for a variety of battery chemistries because this approach has the potential to improve safety and energy density, though developing a practical version has proved difficult. The company, based in San Jose, California, is still holding back certain details about its battery, including some of the key materials and processes it’s using to make it work. And some experts remain skeptical that QuantumScape has really addressed the complex technical challenges that would make a lithium-metal battery feasible in commercial vehicles in the next five years. Test results In an interview with MIT Technology Review, Singh says the company has shown that its batteries will effectively deliver on five key consumer requirements that have so far prevented electric vehicles from exceeding 2% of new US vehicle sales: lower costs, greater range, shorter charging times, a longer total lifetime on the road, and improved safety. “Any battery that can meet these requirements can really open up the 98% of the market in a way you can’t do today,” he says. Indeed, QuantumScape’s performance results are significant. The batteries can charge to 80% capacity in less than 15 minutes. (MotorTrend found that Tesla’s V3 Supercharger took a Model 3 from 5% to 90% in 37 minutes, in a recent test.) 
And they retain more than 80% of their capacity over 800 charging cycles, which is the rough equivalent of driving 240,000 miles. In fact, the battery shows little degradation even when subjected to aggressive charge and discharge cycles. Finally, the company says that the battery is designed to achieve driving ranges that could exceed those of electric vehicles with standard lithium-ion batteries by more than 80% — though this hasn’t been directly tested yet. “The data from QuantumScape is quite impressive,” says Paul Albertus, an assistant professor of chemical and biomolecular engineering at the University of Maryland and formerly the program director of ARPA-E’s solid-state-focused IONICS program, who has no affiliation or financial relationship with the company. The company has “gone much further than other things I’ve seen” in lithium-metal batteries, he adds: “They’ve run a marathon while everyone else has done a 5K.” How it works So how did they accomplish all this? In a standard lithium-ion battery in an electric vehicle today, one of the two electrodes (the anode) is mostly made from graphite, which readily stores the lithium ions that shuttle back and forth through the battery. In a lithium-metal battery, that anode is made from lithium itself. That means that nearly every electron can be put to work storing energy, which is what accounts for the higher potential energy density. But it creates a couple of big challenges. The first is that the metal is highly reactive, so if it comes into contact with a liquid, including the electrolyte that supports the movement of those ions in most batteries, it can trigger side reactions that degrade the battery or cause it to catch fire. The second is that the flow of lithium ions can form needle-like growths called dendrites, which can pierce the separator in the middle of the battery, short-circuiting the cell. Over the years, those issues have led researchers to try to develop solid-state electrolytes that aren’t reactive with lithium metal, using ceramics, polymers, and other materials. One of QuantumScape’s key breakthroughs was developing a solid-state ceramic electrolyte that also serves as the separator. Just a few tens of micrometers thick, it suppresses the formation of dendrites while still allowing lithium ions to pass easily back and forth. (The electrolyte on the other end of the battery, the cathode side, is a gel of some kind, so it’s not a fully solid-state battery.) Singh declines to specify the material they’re using, saying it’s one of their most closely guarded trade secrets. (Some battery experts suspect, on the basis of patent filings, that it’s an oxide called LLZO.) Finding it took five years; developing the right composition and manufacturing process to prevent defects and dendrites took another five. The company believes that the move to solid-state technology will make the batteries safer than the lithium-ion variety on the market today, which still occasionally catch fire under extreme circumstances. The other big breakthrough is that the battery is built without a distinct anode. (See QuantumScape’s video below to get a better sense of its “anode free” design.) 
As the battery charges, the lithium ions on the cathode side travel through the separator and form a perfectly flat layer between it and the electrical contact at the end of the battery. Nearly all of that lithium then returns to the cathode during the discharge cycle. This eliminates the need for any “host” anode material that isn’t directly contributing to the job of storing energy or carrying current, further reducing the necessary weight and volume. It should also cut manufacturing costs, the company says. Remaining risks There is a catch, however: QuantumScape’s results are from laboratory tests performed on single-layer cells. A real automotive battery would need to have dozens of layers all working together. Getting from the pilot line to commercial production is a major hurdle in energy storage, and the point at which many once-promising battery startups have failed. Albertus notes that there’s a rich history of early claims of battery breakthroughs, so any new ones are met with skepticism. He’d like to see QuantumScape submit the company’s cells to the kind of independent testing that national laboratories do, under standardized conditions. Other industry observers have expressed doubts that the company can achieve the scale-up and safety testing needed to put batteries into vehicles on the road by 2025, if the company has only rigorously tested single-layer cells so far. Sila Nanotechnologies, a competing battery startup developing a different kind of energy-dense anode material for lithium-ion batteries, released a white paper a day before the Mobilist story that highlights a list of technical challenges for solid-state lithium-metal batteries. It notes that many of the theoretical advantages of lithium-metal shrink as companies work toward commercial batteries, given all the extra steps needed to make them work. But the paper stresses that the hardest part will be meeting the market challenge: competing with the huge global infrastructure already in place to source, produce, ship, and install lithium-ion batteries. Massive bets Other observers, however, say that the recent advances in the field suggest both that lithium-metal batteries will significantly exceed the energy density of lithium-ion technology and that the problems holding up the field can be solved. “It used to be whether we’ll have lithium-metal batteries; now it’s a question of when we’ll have them,” says Venkat Viswanathan, an associate professor at Carnegie Mellon who has researched lithium-metal batteries (and has done consulting work for QuantumScape). Singh acknowledged that the company still faces challenges, but he insists they relate to engineering and scaling up production. He doesn’t believe any additional breakthroughs in chemistry are needed. He also noted the company now has more than $1 billion, giving it a substantial runway to reach commercial production. Asked why reporters should trust the company’s results without the benefit of independent findings, Singh stressed that he’s sharing as much of the data as he can to be transparent. But he adds that QuantumScape isn’t “in the business of academic research.” “No offense, but we don’t really care what you think,” he says. 
“The people we care about are our customers. They’ve seen the data, they’ve run the tests in their own lab, they’ve seen it works, and as a result, they’re putting in massive bets on this company. VW has gone all in.” In other words, the real test of whether QuantumScape has solved the problems as completely as it claims is whether the German auto giant puts vehicles equipped with the batteries on the road by 2025.
https://medium.com/@drakedigitalnews/this-extremely-power-thick-battery-might-almost-increase-the-variety-of-electrical-lorries-d22e609fe09d
['Victoria Hunt']
2020-12-09 16:46:55.493000+00:00
['Technology News', 'Technews', 'Technology']
2,368
My 5 Favorite Things About the iPhone 12 Mini
My 5 Favorite Things About the iPhone 12 Mini When the iPhone 12 mini was announced, I wrote a piece called The iPhone 12 Mini is the iPhone for Digital Minimalists. It was essentially a statement about how excited I was to buy an iPhone that would be useful when I used it and fit in my pocket when I didn’t use it. But as launch day approached, I felt increasingly stressed about the purchase. The truly minimalist thing to do is keep what you already own. My iPhone X wasn’t cracked or totally broken, but it was on the fritz, and it was beginning to hang during processes and present annoying usability errors. But it still worked, goddamnit, and I didn’t want to buy a new phone when my old one worked. So launch day came and went without me buying an iPhone 12 mini. But the universe soon corrected me on my error. During launch week, I was traveling to Denver to stay with a friend and catch up. While my friend was at work, I decided to borrow her truck and take a scenic drive through the mountains. And of course, in the middle of a state I’m not familiar with on roads I could easily get very lost on, my phone decided simply not to function anymore. The entire unit froze. The first thing I thought? “This wouldn’t be happening if I’d bought the damn 12 mini.” So after I got down the mountain, before even going home, I stopped by a Best Buy and placed an order for my new phone. And damn, am I glad I did — because this phone is fucking awesome. My favorite things about it, from least to most: 5. The sound quality is fantastic One of the first things I noticed about the phone was, of all things, the speakerphone quality. The speakers on this tiny unit are really, really good. For nearly the entire three years I owned my iPhone X, it seemed the speakers on one side of the phone were broken. I don’t recall ever dropping it in a way that would cause such a problem, but the speakers were always weak and tinny. It wasn’t a big deal to me because I always used Bluetooth headphones, but it was annoying when taking calls on speakerphone or playing music in a car with no internal speakers. The iPhone 12 mini doesn’t have that problem. Despite being a substantially smaller unit, the 12 mini has substantially more powerful speakers. When I picked up a call from my mom and played it for my dad and me on the highway, her voice rang loud and clear. When I play music in my white panel van that has no internal speakers, I can hear the music clearly. In my quiet apartment, one could even call it loud. I’m one of those people that likes to put their phone on speakerphone a lot. If someone calls me when I’m not alone, I like to share my conversations with those around me. I also like to be that person who talks into their speakerphone like they don’t know how to use a phone. Great speakerphone quality was an unexpected and welcome blessing. I didn’t buy the 12 mini because I thought it would have great sound quality. But it was certainly welcome. 4. The camera is surprisingly good As someone who doesn’t have Instagram, Snapchat, TikTok, or any other social media, the quality of a phone camera is very low on my list of priorities. I do use my phone camera occasionally to take quality photos of things I care about, but I mostly use it to take photos of things I want to remember, like book names or receipt numbers. But the iPhone 12 mini — paired with Apple’s Photos app — can produce a damn good photo. No doubt these photos are not as good as what the iPhone 12 Pro lineup can produce. 
But for someone who is not a professional photographer or social media influencer, the iPhone 12 mini camera is really quite stunning. Stunning enough, in fact, that this modern Luddite is considering actually using their camera for taking photos of friends and loved ones instead of just grocery store receipts. For the first time since I deleted social media, I’ll be using a phone camera for what it’s meant for. Imagine that! Any smartphone with a camera good enough to get me taking phone photos is definitely a good smartphone camera. 3. Great for encouraging deep work The larger the phone, the more distracting it is. Large phones encourage you to spend more time consuming content than small phones do. If you’re trying to get deep work done, being sucked into the content cycling machine that is your smartphone is anathema. As Angela Lashbrook pointed out in “The Big Disadvantage of Small Phones Like the iPhone Mini,” small phones are not great for people who are power users of their phone. Smaller screens inhibit your ability to retain information. But most digital minimalists are not trying to use their phone for that! We’re trying to get off our phones, either to work on the desktop or focus on the one precious life we have in front of us. The iPhone 12 mini is an excellent phone precisely because it doesn’t encourage content consumption. The small and portable form factor encourages you to put your phone down and move to your production environment for deep work, which is where you should be trying to perform deep work in the first place. 2. You don’t need a purse just to carry it around One of the things that has annoyed me most about iPhones over the past six or seven years has been the sheer physical size. There was a time when my decision about whether to purchase a particular pair of pants, jacket, or coat was ultimately decided by one question: Can my gigantic phone fit in these pockets? Invariably, the answer was no. Maybe clothes for six-foot-tall men would accommodate these phones, but my Women’s XS/Boys XL Husky clothes did not. This always left me asking the question “What purse should I buy to carry my phone?” That question always left me feeling incredibly dysphoric. I like to dress in masculine styles, and nothing undercuts my attempt to look masculine more than carrying a purse. As a result, I spent an inordinate amount of time shopping online for the most masculine purse-sized bag I could find. (Spoiler: Such a thing does not exist.) All this because Apple wouldn’t make a smaller damn phone. Most people can’t relate to feelings of dysphoria triggered by phone size, but most people can relate to feeling annoyed about having to carry a purse or bag just to accommodate a senselessly large phone. There are plenty of women (and men, and nonbinary people) who have taken to carrying purses or bags just to have somewhere to put their large smartphone. With the iPhone 12 mini, that limitation is gone. The iPhone 12 mini always fits in the pockets of my pants. It always fits in my coat. Wherever I want to go, no matter what I want to wear or do, the iPhone 12 mini can go with me. What’s more, I don’t worry about the phone sliding out of my pocket either. When I did try to stuff those too-large phones in my pocket, they would often do this thing where the top half would quickly slide out of my pocket. Then, as I sat down, my phone took advantage of my angled hips to escape into the chair, floor, or couch cushions. 
I’ve spent too long in my life searching for a phone that made an escape when I sat down somewhere. The iPhone 12 mini never tries to make the great escape. It never peeks its top half precipitously out of my pocket, threatening to fall and shatter on the ground. It never slides out of my pocket when I sit, leaving itself wedged between couch cushions. It stays in my pocket, where it’s f-cking supposed to be. If you are, like me, a tiny person who has always been frustrated by the disproportionately large size of smartphones, you will love the iPhone 12 mini. 1. The iPhone 12 mini fits in my hand One of the things strangers love to comment on about me is my hands. At 5’2”, I’m already a small person, and my hands are small even for someone my size. They like to point at my hands and say “ooh, your hands are so tiny!” Sometimes they grab my hands without my permission, or they put their hand palm-to-palm with mine and revel in how small my fingers are compared to theirs. My tiny hands are great when someone drops their credit card between the seats of their car because my tiny hands are always able to rescue it, but my tiny hands aren’t great for handling smartphones. While most other people can use a smartphone with one hand, I need both. If I try to pick up a smartphone and use it with one hand, it invariably counterbalances and tips out of my grasp. To counteract this, I’ve invested a lot of money in shock-proof cases over the last few years. For the first time since 2013, I own an iPhone I can actually use. Instead of having to pick it up with both hands and peck like an eighty-year-old, I can pick it up and use it with one hand. This seemingly small change has opened up a world of usability. I can pick up a phone call while walking down the street holding a cup of tea. I only have to take off one glove to use my phone in winter weather. I can use my phone while walking, instead of having to come to a standstill to type. These seemingly little obstacles routinely added friction to my life, and they are finally gone. No doubt this small change seems like it’s not that big a deal, but for a device I will use 1–3 hours a day for the next 2–5 years, for a total of 2,190 hours, it’s important I’m able to use it effectively.
https://medium.com/macoclock/my-5-favorite-things-about-the-iphone-12-mini-b6e870c679b6
['Megan Holstein']
2020-11-24 02:56:32.171000+00:00
['Gadgets', 'Minimalism', 'Digital Life', 'Apple', 'Technology']
2,369
Explaining AlexNet Convolutional Neural Network
Overfitting Prevention Having tackled normalization and pooling, AlexNet was faced with a huge overfitting challenge. Their 60-million-parameter model was bound to overfit. They needed to come up with an overfitting prevention strategy that could work at this scale. Whenever a system has a huge number of parameters, it becomes prone to overfitting. Overfitting — given a question you’ve already seen, you can answer it perfectly, but you’ll perform poorly on unseen questions. They employed two methods to battle overfitting: Data Augmentation Dropout Data Augmentation Data augmentation is increasing the size of your dataset by creating transforms of each image in your dataset. These transforms can be simple scaling, reflection, or rotation. See how no. 6 is rotated in various directions Source These schemes led to an error reduction of 1% in their top-1 error metric. By augmenting the data you not only increase the dataset size, but the model also tends to become rotation invariant, color invariant, etc., which prevents overfitting. Dropout The second technique that AlexNet used to avoid overfitting was dropout. It consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are “dropped out” in this way do not contribute to the forward pass and do not participate in back-propagation. So every time an input is presented, the neural network samples a different architecture. This new-architecture-every-time is akin to using multiple architectures without expending additional resources. The model is therefore forced to learn more robust features.
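To make these two techniques concrete, here is a minimal sketch in tf.keras (assuming TensorFlow 2.6 or later for the built-in preprocessing layers). This code is an illustration added for this write-up, not taken from the article or the original AlexNet paper, and it approximates the AlexNet layer stack by omitting local response normalization and the two-GPU split. It applies random augmentation to the inputs and dropout of 0.5 on the fully connected layers, as described above.

```python
# Illustrative sketch only: an AlexNet-style model with the two overfitting
# countermeasures discussed above (data augmentation and dropout).
import tensorflow as tf
from tensorflow.keras import layers, models

# 1. Data augmentation: label-preserving transforms (flips, small rotations,
#    zooms) applied on the fly, effectively enlarging the training set.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
], name="augmentation")

# 2. AlexNet-like convolutional stack with dropout (p = 0.5) on the hidden
#    fully connected layers, so each forward pass samples a thinned network.
model = models.Sequential([
    layers.Input((227, 227, 3)),
    augment,                                   # only active during training
    layers.Conv2D(96, 11, strides=4, activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),    # dropped neurons skip the forward and backward pass
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="softmax"),  # 1000 ImageNet classes
])

model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # roughly 60M parameters, in line with the figure quoted above
```

With this setup the augmentation layers are active only during training and are bypassed at inference; Dropout behaves the same way, and Keras handles both automatically.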
https://medium.com/x8-the-ai-community/explaining-alexnet-convolutional-neural-network-854df45613aa
['Rishi Sidhu']
2019-06-07 14:40:48.833000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Neural Networks', 'Artificial Intelligence']
2,370
5 Game-Changing Reasons to Implement PLM Solutions
PLM solutions form the basis of organizational change that leads to the success and prosperity of the business. With its benefits found across the entire organization, most enterprises are investing heavily in PLM software. Oracle Agile PLM solutions are implemented without sacrificing work quality for the frictionless business process, personalized experience, and high customer satisfaction. PLM is becoming the single source of managing all aspects of the business. Single-handedly providing solutions from the product development process’s commencement till the very end, PLM is known to be a game-changer. Following are some of the advantages a business can dig out of PLM solutions, which serve as a reason to implement PLM. Oracle Agile PLM Support & Maintenence Single Management Source: PLM solution applies to every process, workflow, and department. It acts as a central repository for all primary functions and processes. PLM is a holistic solution from dealing with product data to making an informed and sound decision regarding product development and marketing. It is a centralized body of solutions that caters to HR processes, strategy formation, conflict management, and manufacturing operations. A single factor, method, or solution that derives the entire organization is cost-effective, saves time of marketing, and does not unnecessarily sacrifice the organization. This is a convincing reason why many companies exercise PLM methods and tools in their daily processes. Derives Innovation: Since PLM is an all-rounder move to run the business, it triggers innovation too. Users, employees, and stakeholders from all departments share their opinion in the decision-making process. The data collected from all departments act as a pool of information. This pool of data is a trigger for innovation. Designers and manufacturing departments share their feedback with the brands’ team and the marketing people. The collaborative move in Agile PLM support & maintenance signifies and uplifts the innovation. Product quality is improved, need is generated, and marketing characteristics are studied to bring innovation in the entire process. When companies lag in innovation and out of box ideas, they require PLM services for differentiation. Improved Product Development: As the value chain and manufacturing process are streamlined, unprecedented speed and flexibility improve the product development process. The collaboration between engineering, quality, sales, and service departments makes way for betterment along the process. The refinement in product development, manufacturing, and marketing, uplifts the entire process. This is one of the principal reasons for implementing product life cycle management solutions. Eliminate Human Errors: One of the integral reasons for the high cost of delayed processes and quality deterioration is human error. Human error reduces the product life cycle and makes it difficult to manage. To avoid this cost to production because of human error, PLM software and solutions are executed. As the human errors are removed, the associated micro-risks that come with it, are also removed. A typo, for instance, in manual work, can potentially harm the production line. Repetitive process, mundane tasks could be generated as a byproduct of human error. Therefore, to avoid the situation in the first place, PLM is implemented. Automation Replaces Manual and Erroneous Work: Manual work or tasks can never compete with automated workflows in efficiency, productivity, and results. 
A minor automated process can outdo a manual job through flexibility, speed, and agility. Dozens of micro-steps could be avoided, which saves time, cost, and human effort. The following steps could easily be automated to generate better results and greater efficiency: Expedite projects Employee tracking, workload, and assignment management and tracking Managing and maintaining important files Aligning projects, and meeting and managing project timelines File organization and keeping files up to date Are you thinking about bringing a change to your workflow and manufacturing processes? Are you holding back because of lingering doubts? The reasons mentioned above are reason enough to implement and execute a PLM solution for your organization. At the right time and through the right means, the right Oracle Agile PLM support vendor could be your key to success in the business world.
https://medium.com/@adeel-javed/5-game-changing-reasons-to-implement-plm-solutions-8f2701db8a81
['Adeel Javed']
2020-11-02 05:55:50.088000+00:00
['Solutions', 'Maintenance', 'Technology', 'Plm', 'Support']
2,371
The Paradox of Scale
The Paradox of Scale Netflix couldn’t build Netflix There are a bunch of heroes in the digital world that are held up as aspirational examples in conversations about digital business, technology and ways of working. Netflix, Spotify, Uber, Amazon and a host of our favourite characters, but these myths can become damaging fictions. Admired and respected as towering giants of our digital world, our hero companies emanate an almost mythical quality. The scale, power and inspiration they command are the stuff of legend. Glib statements about “business” distort their stories into gaudy two-dimensional caricatures whilst organisations seeking Digital Transformation aspire to emulate what they see in this theatre. Paradoxically our heroes would be the first to point out they wouldn’t be able to build themselves as they stand today. Photo by YIFEI CHEN Gall’s law has been on my mind lately. So much so that my partner has quoted it back to me — and she doesn’t work in tech, she works in the woods. Hopefully it’s that I communicated it well, but more likely it’s that she’s a very smart woman. For me it’s a statement that goes to the heart of why so many institutional IT projects cost a fortune and deliver little, with alarming regularity: A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. — John Gall (1975, p.71) If you work with technology and your role includes the word “architect” (as mine sometimes does) your ears may well be burning. The paradox of scale I spend plenty of my time in conversations about microservices. API design, infrastructure, operations, architecture. If you’ve been there, you’ll have heard phrases like scheduling, circuit breakers, service discovery, distributed tracing, health checks, cenralised logging, mutual authentication, rolling deployments, traffic splitting, the list goes on. When you consider that each of those can easily balloon into a couple of months of work, especially when they come “for free out-of-the-box”, a spot of arithmetic will tell you you could be looking at two years of peripheral work before you even think about the thing you’re actually trying to build. Do you have a million users and a billion transactions? And do you have them today? Or are you just starting out with a new product? It’s easy to assume this stuff is critical to running production-quality systems and, you know what, it might be, but more likely it isn’t right now. The question is not “whether”, but “when” these things are useful Paradoxically, trying to build it all — to emulate our heroes on day one — is more likely to be disastrous. In Malcolm Gladwell’s re-examining of the story of David and Goliath, David refuses to put on the heavy battle armour of professional soldiers, preferring to fight with only a shepherd’s sling. David knew he couldn’t bear the weight of the armour and wouldn’t be at his best, potentially putting himself in even greater danger. As Malcolm Gladwell paraphrases it, "I've never worn armor before. You've got to be crazy." The rise of the titans Google is 20 years old, yet Google Cloud is only just starting to challenge AWS. Google boasts impressive infrastructure, tens of billions invested, including undersea cables and a global network estimated to carry 25% of the world’s Internet traffic. Only now are they offering this through Google Cloud. It’s been a long road. 
This is a far cry from the humble beginnings of 1998, when Google was an underdog search-engine beloved of 90s hipster-equivalents. One does not simply build Google 20 years is a long time. Long enough to accumulate extraordinary experience, infrastructure and legacy. A long road with plenty of twists and turns along the way. I don’t know specifics, but I do know what life is like. Success is a messy business, exploratory, trying, failing, scratching your head, learning something new, trying to think different. And so it’s been, I’ll bet, for all our heroes. They thought big, acted small, found a foothold and started journeying. For better for worse, for richer for poorer, in sunshine and in rain, wax on, wax off, they fixed the plumbing and built the roof, they put one foot in front of the other until today they stand towering in the world they helped to create. Where they stand now is (and continues to be) their journey, rather than their first port of call. On becoming a hero The thing no one ever tells us about standing on the shoulders of giants is that we first have to get up onto those shoulders. Looking at what our heroes do today and trying to copy it is a highly effective way to fail. First because it’s going to be expensive and take years, second because they are on their journeys and will keep moving in their own directions. By the time we ever get there, they’ll be long gone But if we shift our perspective and look, not at where they are today, but at how they got started and the way that they move, there’s a beautiful release. Now we don’t need to be dazzled by their scale, wondering how we could possibly be like that. Instead we can look at where we’re starting from, face the direction we want to go (which will be unique to each of us) and start putting one foot in front of the other.
https://medium.com/notbinary/the-paradox-of-scale-c0c6546c8c61
['David Carboni']
2019-06-28 16:58:16.340000+00:00
['Digital Transformation', 'Technology', 'Technology And Design', 'Scaleup']
2,372
Is Silicon Valley’s Time in the Sun Over?
Silicon Valley, the home of tech giants Facebook, Apple, Google, and more, located in San Francisco’s southern Bay Area, is undergoing a change from which it may never recover. In August, the Wall Street Journal did a deep dive on the mass exodus of employees from Silicon Valley. With the global pandemic forcing employees to work from home and San Francisco hit particularly hard with government-enforced shutdowns, companies gave their workforce permission to skip on the area’s crazy-high cost of living and work from other places in the country. Facebook and Google allowed employees to move away through the summer of 2021, allowing them to sign one-year leases elsewhere, before returning to their California HQs. This benefitted cities all over the country including Phoenix, Las Vegas, and Austin. The Silicon Valley exodus is just a small piece of the pie, though: California as a whole is losing net residents on an annual basis. Political mismanagement has given upper-middle-class citizens, which most if not all Silicon Valley workers would fall into, plenty of justification to leave. Per the prior-linked WSJ story, 40% of Facebook’s staff said it would be interested in permanent remote work in a May 2020 survey. Hired, a job search marketplace, conducted its own survey of 371 Silicon Valley workers. If asked by their employers to work remotely full-time, 42% would move to a less expensive city. These numbers speak volumes — a good chunk of tech workers are living in the Bay Area because they have to, not because they want to. This brings up an interesting question: if tech workers want to leave the state, why shouldn’t their employers come with them? Well, some of them are. Oracle recently announced it was moving from Redwood Shores, California to Austin, Texas. Hewlett Packard Enterprise, a spinoff of Hewlett-Packard, announced a move from San Jose, California to Spring, Texas — just outside of Houston. HP’s founders, Bill Hewlett and David Packard, are credited with beginning the creation of Silicon Valley while working on their company in Palo Alto. Tesla and SpaceX founder Elon Musk as well as Dropbox CEO Drew Houston have recently announced that they are personally moving from Cali to Austin — perhaps paving the way for their companies in the future. According to census data, 12.5% of those that left California in 2019 went to Texas, which is more than any other state. Why are all of these companies and executives choosing Texas, though? Texas politicians have been trying to woo California-based companies for years now, using no state income tax and lower costs of living as a bargaining chip. With remote work proving to work and workers wanting out of California, we may see a continued Silicon Valley exodus even after the coronavirus is under control. Texas isn’t the only beneficiary, though. While the vast majority of venture capitalists are located in California, New York, and Boston, some are trying to buck that trend. Some cities are giving themselves their own monikers to try to attract companies: Denver/Boulder, Colorado is “Silicon Mountain,” Dallas/Austin, Texas is “Silicon Praire,” Atlanta, Georgia is “Silicon Peach,” and Salt Lake City, Utah is “Silicon Slopes.” Some of the more popular destinations, like Austin, may eventually hold the majority of tech companies if this trend out of the Bay Area continues. With remote work proving to be successful, though, companies may turn to a variety of different cities. 
There are plenty of other cities around the country offering a relatively low cost of living and plenty outside-of-work fun to attract companies and their employees. Either way, Silicon Valley is slowly crumbling before our eyes. It may never completely go away, but it is hard to imagine it being regarded as the “tech hub of the world” for much longer.
https://medium.com/swlh/is-silicon-valleys-time-in-the-sun-over-677467b18681
['Dylan Hughes']
2020-12-16 23:58:08.412000+00:00
['Investing', 'Business', 'Technology', 'Economics', 'Silicon Valley']
2,373
Vuetify — App Bar and Drawer. Adding side menu and change toolbar on…
Photo by Maksym Kaharlytskyi on Unsplash Vuetify is a popular UI framework for Vue apps. In this article, we’ll look at how to work with the Vuetify framework. Toggle Navigation Drawers We can toggle the navigation drawer by using the v-app-bar-nav-icon component. For example, we can write: <template> <v-container> <v-row class="text-center"> <v-col col="12"> <v-card class="mx-auto overflow-hidden" height="400"> <v-app-bar color="deep-purple" dark> <v-app-bar-nav-icon @click="drawer = true"></v-app-bar-nav-icon> <v-toolbar-title>Title</v-toolbar-title> </v-app-bar> <v-navigation-drawer v-model="drawer" absolute temporary> <v-list nav dense> <v-list-item-group v-model="group" active-class="deep-purple--text text--accent-4"> <v-list-item> <v-list-item-icon> <v-icon>mdi-home</v-icon> </v-list-item-icon> <v-list-item-title>Home</v-list-item-title> </v-list-item> <v-list-item> <v-list-item-icon> <v-icon>mdi-account</v-icon> </v-list-item-icon> <v-list-item-title>Account</v-list-item-title> </v-list-item> </v-list-item-group> </v-list> </v-navigation-drawer> </v-card> </v-col> </v-row> </v-container> </template> <script> export default { name: "HelloWorld", data: () => ({ drawer: false }), }; </script> We have the v-navigation-drawer to display a menu on the left side. When it’s displayed depends on the drawer state. If it’s true , then it’ll be displayed. When we click on the v-app-ba-nav-icon , then we make it true . Scroll Threshold We can set a scroll threshold on the v-app-bar . For example, we can write: <template> <v-container> <v-row class="text-center"> <v-col col="12"> <v-card class="overflow-hidden"> <v-app-bar absolute color="#43a047" dark shrink-on-scroll prominent src="https://picsum.photos/1920/1080?random" fade-img-on-scroll scroll-target="#scrolling" scroll-threshold="500" > <template v-slot:img="{ props }"> <v-img v-bind="props" gradient="to top right, rgba(55,236,186,.7), lightblue"></v-img> </template> <v-app-bar-nav-icon></v-app-bar-nav-icon> <v-toolbar-title>Title</v-toolbar-title> <v-spacer></v-spacer> <v-btn icon> <v-icon>mdi-magnify</v-icon> </v-btn> <v-btn icon> <v-icon>mdi-heart</v-icon> </v-btn> <v-btn icon> <v-icon>mdi-dots-vertical</v-icon> </v-btn> </v-app-bar> <v-sheet id="scrolling" class="overflow-y-auto" max-height="600"> <v-container style="height: 1500px;"></v-container> </v-sheet> </v-card> </v-col> </v-row> </v-container> </template> <script> export default { name: "HelloWorld", data: () => ({ drawer: false, }), }; </script> We added the scroll-threshold prop to make the toolbar change from a taller toolbar with a background image to one with that has a solid background without an image. The fade-img-on-scroll prop lets us make the image on the app bar disappear when we scroll. Also, we’ve to set the scroll-target to the selector of the div we’re watching for scrolling. Photo by chuttersnap on Unsplash Conclusion We can add a navigation drawer to our app bar. Also, we can make the app bar display differently when we scroll. Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel!
https://medium.com/javascript-in-plain-english/vuetify-app-bar-and-drawer-835ec46505c1
['John Au-Yeung']
2020-11-23 17:04:40.234000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
2,374
The Three Levels of Software Safety
The more software eats the world, the more critical safety is … but what exactly does that mean? Hacker image by catalyststuff Software engineers are bad at safety because software engineers are not used to the idea that software can injure. All around the industry, the mantle of technical leadership has been passed to people about my age, perhaps a few years older. We grew up when computers weren’t so powerful, when their use was an optimization rather than a necessity, when their first commercial successes were in toys. We don’t think about safety as being a relevant issue for software, and we need to change our perspective on that. But what does it mean for software to be safe? It’s easy to conceptualize how a car could be safe or unsafe. Easy to understand how a medical instrument could be safe or unsafe. But code? I like to think of software safety as being about three levels of concern. Understanding where what you are building fits on those three levels will tell you how best to focus your time and attention in a safety conversation. Level 1: Safety as a Synonym for Security For years, the only “safety” software developers thought about was “memory safety.” People will still jump to that conclusion, treating safety as a synonym for security. The connection between memory and safety isn’t just a conflation of terms. Safety is ultimately about preventing a system from reaching dangerous states. In software, the principal clearinghouse of state change is memory. So the first line of defense preventing a program from reaching a dangerous state is controlling what can access its memory and how that memory can be accessed. The need to manage what parts of memory a process can access traces its roots back to mainframe timesharing. Resource management and memory safety started as a straightforward customer service issue (if you’re paying for time and someone else is running something that eats up all the memory, it’s a problem). Computer scientists theorized that these annoyances could be weaponized, and they almost immediately were, starting with the Morris worm in 1988 and continuing to this day. Level 2: Safety as Predictability But as software grew in complexity and sophistication, state stopped relying on memory so much. A program could be “stateless” by passing state from one system to another in a transaction without storing it. The more common these kinds of designs became, the more concurrency bugs and pure functions became part of conversations with software engineers. Undesirable states could be triggered by transactions happening in the wrong order (race conditions), or by depending on other states which were also in flux. Safety at this level of software becomes about predictability (or determinism). The ecosystem that has grown around it includes type systems and formal verification. These were not techniques that were invented to solve safety problems, but the renewed interest from academia and the penetration of such approaches in developer tooling are a direct result of increased awareness of how dangerous unpredictable systems can be. Level 3: Safety as Ergonomics The area of software safety I find the most interesting is also its newest frontier. It’s the unsafe states that can occur when computers become participants in other larger human-based systems. Like the issues in level 2, the problems and techniques of level 3 are not new discoveries. Modern technologies — particularly AI — have made them relevant in ways they were not before. 
Software has always been present in human systems, but there is a difference between being present in the system and being a participant in the system. When you use a database to keep track of inventory, software is present in the system. When you build software that automatically reorders items based on that database, the software is now a participant. The key difference is whether software is recording state or altering state, because safety is ultimately about our likelihood of reaching undesirable states. As software automation becomes more and more prevalent, the issue of how software-initiated state changes can affect other human participants in the process is moving into the forefront of the conversation around technology. Many describe this issue as software ethics, but for me it is a safety issue. Ethics assumes unsafe states are foreseeable. When a design decision causes an airplane to crash, we do not call ethics into question unless someone can demonstrate that the people who made that decision had information that suggested that outcome was a possibility. Otherwise it’s just an accident. So far the tools we have to handle Level 3 are user-centric design and problem setting processes. User-centric design helps broaden our perspectives so we can see more potential outcomes across different potential interactions. Problem setting forces us to set boundaries with our technology. The same way memory safety restricts what states a program can access, problem setting asks software engineers to treat these powerful technologies as scalpels and restrict what parts of the process they can participate in. Safety as a Scaling Problem Safety, like risk, is ultimately about scale. At each level, problems that were always there take on new significance and the undesirable states that were always possible become more common. We shift our attention and develop new safety tools and techniques to accommodate these different priorities. When a program runs on a single machine with a single user, the most relevant safety issues are how that program affects the state of the computer itself. Concurrency issues are still there; ergonomic issues are still there, but the impact either of those issues has is not as significant. If a single isolated computer has concurrency issues, all the data needed to sort them out and determine the correct state is on that single computer. If a single user has a negative outcome from using the program, it is unlikely to spread to other users unless the program is shared. When a single computer grows to a network, the first level of safety is still important, but the second level of safety becomes more dangerous. State moves from computer to computer and the blast radius of undesirable state changes increases. When digital systems stop merely supporting processes and become load-bearing participants in those processes, the third level of safety moves to the front of the line. Again, the blast radius of undesirable state changes increases, damaging not just machines but causing people to take actions that are misinformed and potentially injurious. The first step in developing a safety practice is to determine what scale the thing you are building will reach in the immediate future and focus on building out best practices around that level of safety. Then as you scale the technology, continue to evolve your approach to reflect the concerns of higher levels.
https://medium.com/software-safety/the-three-levels-of-software-safety-2097610ada60
['Marianne Bellotti']
2020-12-28 03:32:24.095000+00:00
['Cybersecurity', 'Software Development', 'Safety', 'Distributed Systems', 'Technology']
2,375
Scale-Up the Best Version of Yourself — Accelerating Innovation in a Start-Up Company
Every morning I wake up and some of my first thoughts are: "how can I make a larger impact with my big idea and what are the next steps of learning to get there". Well, I need some coffee and a few minutes of thinking, perhaps a moment of mindfulness in the silence of the early morning, before I get going. Every day is full throttle, especially when scaling up your startup company. But you need to have a strong team and good support to achieve your goals in the best possible way. I had been thinking about this when we suddenly got some unexpected help. At the end of last year we were invited and selected to take part in the Silicon Valley accelerator program run by the Nordic Innovation House — the TINC. TINC aims to scale up the best of the Nordic startups. With the pandemic ongoing, it was the first time it had to be run 100% virtually, so all of the participants from Iceland, Norway, Sweden and us from Finland ended up working late evenings for 4 weeks with the accelerator team in California. That was a tough one, but there were some clear benefits — the "nighttime" learnings of tools, techniques and methods taught by the accelerator team we put to use during the daytime in our interactions with our trusted customers. Oh boy, we were trialing, building MVPs, pivoting, getting direct feedback. We had created a functional MVP on our data side and we were pretty excited about this! Our focused strategy became very clear very quickly, which is why we are now putting our best efforts into conquering the beachhead market. This all accelerated our creation of business value for our customers by removing their pain points. It was continuous learning. It never stopped. After four weeks we were pretty exhausted; the usual working days started early in the morning and ended around midnight. And we made some great connections. We are looking forward to meeting, post-pandemic, the many fantastic people in Silicon Valley that we networked with and built incredible relationships with. This sparked us to keep working on our sprints and do agile development in a similar manner, engaging our team, having workshops and trials with customers to move ahead. Working in this intense way, which I would dare to say is necessary for a start-up that wants to scale up and grab the market opportunity fast, also knits the team together closely. We are making progress on a daily basis, solving some problems and encountering a bunch of new ones. With our diverse team we can apply critical thinking, solve complex problems and collectively create unique knowledge, making us extremely hard to beat. But we do not stop here; working with many customers in parallel (we are probably in various phases with some 50 customers), our name and brand recognition is quickly growing. Being a top innovator with a key scalable solution to improve society and industry that is good for our planet is great. You can read more about the Nordic Innovation House TINC accelerator program at: https://www.nordicinnovationhouse.com/siliconvalley/sv-programs/tinc Our company: https://www.rocsole.com Note: Curious about the rest of my scribblings on the board? Get in touch and I will tell you.
https://medium.com/@mika-tienhaara/scale-up-the-best-version-of-yourself-accelerating-innovation-in-a-start-up-company-712681c2f6e9
['Mika Tienhaara']
2021-03-19 12:04:59.099000+00:00
['Technology', 'Accelerator', 'Startup', 'Entrepreneurship']
2,376
Barak Berkowitz on Inside Ideas
Barak Berkowitz on Inside Ideas Today's guest on Inside Ideas is Barak Berkowitz. Barak is the Director of Operations and Strategy at MIT Media Lab as well as the founder of MarketCentrix, a consumer strategy consultancy for tech companies. Barak started his tech career at Macy's, running the world's first large-scale consumer computer store chain. He now consults for start-ups, VCs and large companies. Past clients have included Digital Garage, Apple, Sony, Fujifilm, Polycom and many others. Barak also serves as an investor and advisor to many start-ups including LittleBits, New Context, Improbable and Quixey. Over the last 25-plus years, Barak has served in a number of consumer technology leadership roles. Most recently he was CEO of Evi, the virtual personal assistant that was acquired by Amazon.com and is the intelligence in Alexa. Prior to Evi, Barak was the Managing Director of Wolfram Alpha, the revolutionary answer engine. He was also Chairman and CEO of Six Apart, the global leader in blogging; Cofounder and President of OmniSky, the wireless internet innovator that went public in 2000; EVP and General Manager of the Go Network, the Disney-owned web portal; and EVP and General Manager of Logitech. Prior to Logitech, Barak spent over 9 years at Apple USA and Apple Japan in a number of roles leading consumer-marketing programs. Article from INNOVATORS MAGAZINE Get a deep dive on inside ideas from global thought leaders: sign up for our 'Innovate Now' newsletter to get episodes straight to your inbox.
https://medium.com/inside-ideas/barak-berkowitz-on-inside-ideas-c83e7141deb3
['Marc Buckley']
2021-03-22 17:28:12.869000+00:00
['Mit Media Lab', 'Technology', 'Global Goals', 'Sdgs', 'Future']
2,377
Bitcoin Fundamentals: Step by step explanation of a peer-to-peer Bitcoin transaction
Nakamoto introduced bitcoin on 31 October 2008 to a cryptography mailing list by publishing the White Paper "Bitcoin: A Peer-to-Peer Electronic Cash System"[1][2], and released it as open-source software in 2009. Bitcoins are created as a reward in a competition in which users offer their computing power to verify and record bitcoin transactions into the blockchain. This activity is referred to as mining, and successful miners are rewarded with transaction fees and newly created bitcoins. Technically, Bitcoin consists of: ● A decentralized peer-to-peer network (the Bitcoin protocol) ● A public transaction ledger (the blockchain) ● A set of rules for independent transaction validation and currency issuance (consensus rules) ● A mechanism for reaching global decentralized consensus on the valid blockchain (Proof-of-Work algorithm) One of the most interesting things is that blockchain promises disruptive changes because it empowers the 'Internet of Value', which represents a world where money is exchanged at the speed at which information moves today. Transactions would occur in real time and across global networks, solving the problem of international payment systems that are not interoperable. And what is more: as it does not need intermediaries, the concept goes further and in turn favours social inclusion. Here is a high-level example of a peer-to-peer transaction. So, how does it work? 1. Nick opens his bitcoin wallet. This implies that Nick is indirectly creating his own bitcoin address. He is assumed to have already received some bitcoins. 2. Nick wants to transfer bitcoins to Rose. So, he scans or copies Rose's bitcoin address. 3. Nick fills in the amount of bitcoin he wants to transfer and the fee he is willing to pay for this transaction. So, a transaction includes inputs, outputs and the amount of bitcoin that will be transferred. 4. Before sending the new transaction to the blockchain, the wallet signs it using Nick's private key. 5. Now, the transaction is sent to the closest node on the bitcoin network. Then it is propagated into the network and verified (basic checks: e.g. there are enough bitcoins in the origin wallet, the structure is valid, etc.). After it successfully passes verification, it goes and sits inside the "Mempool" (short for Memory Pool) and patiently waits until a miner picks it up to include it in the next block to be mined. 6. It's mining time, and miners pick up the transactions (first those that pay a higher transaction fee) and group them into blocks, trying to solve the Proof of Work (or PoW — a consensus algorithm) and calculate a certain hash function. 7. The miner who solves it propagates the new block to the network. 8. The nodes verify the result and propagate the block. 9. Now Rose sees the first confirmation. 10. New confirmations appear with each new block that is created and linked. The transaction in more detail: A Bitcoin transaction is composed of 4 key elements: Input (Origin): Bitcoin address (public) of the origin wallet. Amount: Amount of bitcoins to be sent in the transaction. Output (Destination): Bitcoin address (public) of the destination wallet. Metadata (Optional): The metadata or message has a maximum size of 80 bytes. The metadata is stored in the OP_RETURN part of the transaction. Bitcoin Transaction components A transaction doesn't simply move some bitcoin from one address to another address. A Bitcoin transaction moves bitcoins between one or more inputs and outputs. Each input is a transaction and address supplying bitcoins.
Each output is an address receiving bitcoin, along with the amount of bitcoin going to that address. A sample Bitcoin transaction. Transaction C spends .008 bitcoins from Transactions A and B. The diagram above shows a sample transaction "C". In this transaction, .005 BTC is taken from an address in Transaction A, and .003 BTC is taken from an address in Transaction B. For the outputs, .003 BTC is directed to the first address and .004 BTC is directed to the second address. The leftover .001 BTC goes to the miner of the block as a fee. Note that the .015 BTC in the other output of Transaction A is not spent in this transaction. Each input used must be entirely spent in a transaction. If an address received 100 bitcoins in a transaction and you just want to spend 1 bitcoin, the transaction must spend all 100. The solution is to use a second output for change, which returns the 99 leftover bitcoins back to you. Transactions can also include fees. If there are any bitcoins left over after adding up the inputs and subtracting the outputs, the remainder is a fee paid to the miner. The fee isn't strictly required, but transactions without a fee will be a low priority for miners and may not be processed for days, or may be discarded entirely. A typical fee for a transaction is 0.0002 bitcoins (about 20 cents), so fees are low but not trivial. So how does it work? Step 1: Bob and Alice create the transaction STEP 1: Transaction creation and signing Anyone can create a transaction with 3 necessary components: the Input, Amount and Output. For example, let's say that Bob and Alice are exchanging bitcoin for dollars. As Bob sends the bitcoin to Alice, Alice needs to send her bitcoin address (public), and Bob creates the transaction and signs it with his private key. STEP 2: Bitcoin transaction broadcasting STEP 2: Broadcasting Once the transaction is created, it is sent to the closest node on the bitcoin network. Note: The transaction doesn't need to be sent right after its creation. It could be sent a long time after creation (you just need to be sure that you have enough bitcoins in the wallet when you decide to send it). Step 3: Propagation and verification STEP 3: Propagation and verification Once the transaction arrives at the closest node, it is propagated into the network and verified. After it successfully passes verification, it goes and sits inside the "Mempool" (short for Memory Pool) and patiently waits until a miner picks it up to include it in the next block. Step 4: Block Validation STEP 4: Validation Once the transaction is in the Mempool, the miners pick up the transactions (first those that pay a higher transaction fee) and group them into blocks. As of May 2017, each block has a maximum size limit of 1 MB (a change to this limit is under discussion by the community) and contains around 2,000 to 3,000 transactions, depending on the size of each transaction. Then, by using the Proof-of-Work consensus algorithm, the network agrees on the valid block, and consequently the transactions, on average every 10 minutes. Follow me on Twitter! [1] "What is the Bitcoin Mempool? — 99Bitcoins." 1 May. 2016, https://99bitcoins.com/what-is-bitcoin-mempool/. [2] "Bitcoin: A Peer-to-Peer Electronic Cash System." https://bitcoin.org/bitcoin.pdf. Accessed 5 May. [3] "CoinMarketCap." https://coinmarketcap.com/.
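To make the input/output accounting in the example above concrete, here is a minimal Python sketch. It is an illustration only, not the real Bitcoin data structures or wallet code; the address strings and class names are made up for the example. It checks that a transaction's inputs cover its outputs and computes the implicit miner fee, mirroring Transaction C from the diagram:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TxOutput:
    address: str   # destination bitcoin address (public)
    amount: float  # amount of BTC sent to that address

@dataclass
class Transaction:
    inputs: List[TxOutput]   # outputs of previous transactions being spent
    outputs: List[TxOutput]  # new outputs, including any change back to the sender

    def fee(self) -> float:
        # Whatever is left over after subtracting outputs from inputs goes to the miner.
        total_in = sum(i.amount for i in self.inputs)
        total_out = sum(o.amount for o in self.outputs)
        if total_out > total_in:
            raise ValueError("outputs exceed inputs: invalid transaction")
        return round(total_in - total_out, 8)

# Transaction C: .005 BTC from A and .003 BTC from B are spent,
# .003 and .004 BTC go to two addresses, and .001 BTC is left as the miner fee.
tx_c = Transaction(
    inputs=[TxOutput("addr_in_A", 0.005), TxOutput("addr_in_B", 0.003)],
    outputs=[TxOutput("addr_out_1", 0.003), TxOutput("addr_out_2", 0.004)],
)
print(tx_c.fee())  # 0.001
```

The sketch also shows why a change output is needed: since every input must be spent entirely, anything you do not send back to yourself as change is implicitly left to the miner as a fee.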
https://medium.com/blockchain-strategy-and-use-cases/bitcoin-fundamentals-a5d62fe98bac
['Gayan Samarakoon']
2020-01-30 12:27:06.035000+00:00
['Decentralized', 'Blockchain Technology', 'Blockchain', 'Peer To Peer', 'Bitcoin']
2,378
Smart homes will transform our lives, but not how you think
Smart homes will transform our lives, but not how you think Saro Lately, the term smart home has become a catchphrase for home automation, but in the future home automation will be just one of the many functionalities of a smart home. Most people spend over half of their lifetimes inside their home. It's the place where we find comfort and security. It's the place where we raise families and build memories. In the past decade, home automation has gone from a speciality tech that was only installed in million-dollar celebrity homes to a common tech that anyone can set up with a few hundred pounds and a few hours of their spare time. This is just the start of a revolutionary transformation of our homes. Right now, the consumer experience with technology is broken into a hundred pieces. Technology is available to help us in all aspects of our lives, such as managing our calendar, paying bills, securing our homes, entertaining ourselves, improving our fitness, preparing food, cleaning our homes, managing our personal finances, communicating with the outside world, being creative, taking care of our children, staying healthy, managing chronic conditions, and caring for our mental health, to name a few. Yet these solutions work independently, transferring the responsibility of assembling them to the users, most of whom lack the capability or the time. The result is a broken consumer experience, where consumers use dozens of apps that do not work with each other, appliances and "smart" devices that do not interoperate with each other, numerous websites that are mostly independent of each other, and of course, voice assistants that don't work with each other. The exciting future of smart homes is to bring all this disparate technology, touch points and solutions together for a seamless consumer experience. Imagine a smart home AI assistant whose primary directive is to care for your wellbeing, that brings together all of this technology and takes over the task of assembling it for you in a way that you prefer. That is the future of smart homes. A smart home that pays your bills, takes care of your investments and debt planning, helps with budgeting, keeps your home safe and secure, organises housekeeping and maintenance, keeps tabs on your medication adherence, organises tele-health services for you, tracks your mental health, keeps tabs on your kids' digital and location safety, helps with parenting duties, prepares your food, orders groceries, looks after your car, helps organise your digital memories, and helps you stay in touch with the outside world. There are some formidable problems that must be solved before smart homes can do more than home automation. Convergence outside the home: Current digital solutions are attempting to bring our personal data together outside our homes. This personal data is then shared with numerous first-party and third-party service providers to provide some tangible utility to consumers. This approach has serious limitations and is probably one of the most important barriers to overcome to bring about a seamless user experience for consumers. Privacy: Existing smart home solutions are based on the Internet of Things model, where the brain is in the Cloud, so all your personal data must be transferred outside your home. While data privacy on social networks and communication platforms is worrying, opening up your home is a different beast altogether, where the expectation of privacy is paramount.
Security: Since all smart devices are now connected to the Internet, this increases the number of entry points through which home security can be compromised. Combine this with the fact that users are responsible for managing the system's security, and it's easy to see why the number of security incidents (such as the ones with Ring cameras) has increased exponentially. Knowledge expectations of users: Imagine a situation where, to open a water tap in your home, you needed to understand the concepts of absolute pressure, aerators, air lock, alkalinity, baffles, Bernoulli's law, facultative bacteria, glands, and influents. Hardly anyone would use water taps in their homes. A smart home user needs to understand Zigbee, Z-Wave, wake words, WiFi, and interoperability techniques to set up and troubleshoot a smart home system. Upfront cost: Hardware is expensive. Users are expected to pay upwards of £300 to set up a basic smart home. If they need a thermostat and security cameras, their bill shoots up to over £800. On top of this, users also need to fork out a subscription fee for cloud services. If any of the devices go bust after the warranty period, it burns a further deep hole in their wallets. Risky venture: Over 55% of smart device manufacturers that offered a cloud service have shut it down after a few years, turning these 'smart' devices into paperweights (Revolv, Works with Nest, Osram, Android Things, Lowe's). Running a cloud service without recurring revenue is almost impossible, and in the current turbulent period, where platform providers are fighting tooth and nail to establish a moat, it is still a risky time for consumers to invest heavily in a platform. It is inevitable, and only a matter of time, that an AI assistant in your home will be taking over all aspects of your life. Let's hope and pray that this AI assistant's loyalty and allegiance is to you rather than to a large corporation.
https://medium.com/@whisperhomes/smart-homes-will-transform-our-lives-but-not-how-you-think-f17e35e627e8
[]
2021-01-22 23:10:28.836000+00:00
['IoT', 'Privacy', 'Smart Homes', 'Future Of Technology', 'Technology']
2,379
TECHNOLOGY
TECHNOLOGY Financial technology is the use of innovative technology to deliver a wide range of financial products and services. It is intended to enable a multi-channel, convenient and fast payment experience for the consumer. This kind of technology is applicable in many business segments, such as mobile payments, investment management, money transfers, fundraising and lending. The rapid growth of financial technology has been very beneficial for consumers around the world, offering, for example, the ability to serve customers that were not previously served, a reduction in costs, and an increase in competition. Let's look at a couple of the benefits associated with financial technology: Better payment systems — this kind of technology can make a business more accurate and efficient at issuing invoices and collecting payments. Additionally, the more professional service will help improve customer relations, which can improve the likelihood of customers returning as repeat buyers. Rate of approval — many small business ventures are beginning to use alternative lenders, such as those involved in financial technology, because they can potentially speed up the rate of approval for financing. In many cases the application process and the time to receive the capital can be completed within a period of 24 hours. 3 Best Practices To Follow When Generating Your Bulk QR Code If you need to create multiple QR codes simultaneously, you are on the right page. Typically, employers need to generate several vCard QR codes for their workers. At times, you may need to organize an event and you may need to have all of the participants bring their unique QR codes with their name tags. In this article, we are going to talk about the best practices you have to follow when it comes to generating these in bulk. Read on to find out more. 1. Add an Appealing Call-to-Action First of all, there should be something that will arouse the interest of people. Apart from displaying your QR code, you may want to add something eye-catching. The idea is to attract the attention of the viewers. You can use different call-to-action phrases such as "scan to win" and "scan to find out more". This type of phrase can make the call to action more engaging. Apart from this, the purpose of adding a call to action is to ensure your message is brief and concise as well as interesting. 2. Put the code where it can be seen These codes must be placed strategically in the right location. If the codes cannot be spotted, people won't scan them. So, you do not want to make the mistake of placing them in the corner of printed mediums or posters. Instead, position the QR codes in a place where they can be easily seen. Besides, they should be big enough. 3. Design Matters Typically, these codes come in black and white. Therefore, people can treat them just like barcodes: they are there for technical purposes and ordinary people have no use for them. Still, the design of these codes is of paramount importance for a number of reasons. So, make sure you choose the best design possible before printing them. When it comes to printing these in bulk, don't forget the design and static aspects of these tools. If you go for the right code generator, there will be no such problem. For example, if you employ the best tool, you can add your desired logo, make alterations to the color and add many other features.
This will make the image stand out and make it more interesting. Since generating QR codes one by one involves a lot of effort, we suggest that you opt for the best generator to save yourself time and headache. Large companies hire dedicated employees to generate QR codes for every service, web page, and individual product. However, since we have code generators today, you can just click a button and the tool will create hundreds of codes in a few minutes. Long story short, we suggest that you invest in a good bulk QR code generator and follow the best practices. After all, you cannot take the risk of compromising the quality of the codes. The Introduction Of The Face Recognition System According to news reports, the Saudi Ministry of Interior is planning to launch an iris recognition biometric system across Saudi Arabia. This includes plans to cover major places, such as airports and sea and land ports. The purpose of installing the new systems is to make use of advanced technology for the identification of passengers. In other words, this state-of-the-art system is designed to make sure no intruders can enter a secure area. The National Information Centre of the ministry will be responsible for installing the iris recognition system. Basically, this system involves the application of mathematical recognition techniques that scan the iris of a person before allowing access to the facility. Digital Transformation Is About People, Not Technology According to an article published in The Economist, "the infusion of data-enabled services into ever more aspects of life" will be the most evident consequence of the enduring Covid-19 pandemic. Digital transformation is expected to take on even greater importance for companies in the very near future. A 2019 survey of CEOs, directors and senior executives discovered that their #1 concern was risk involving digital transformation. However, 70% of their initiatives in this direction failed to meet their goals. Of a whopping $1.3 trillion spent on these new endeavors in 2019, unfortunately, $900 billion was wasted. Why? Fundamentally, digital transformation teams fail, despite the possibilities for growth and efficiency gains, because people lack the mindset to shift. With flawed organizational practices, it is extremely difficult to transform completely. Moreover, digitization magnifies the flaws, only making them appear bigger. What is Digital Transformation? When you bring a new system into an organization, it is only natural to get a little hyper with the plans for implementation, specification, and so on. Digital change is one of the most critical processes today, ensuring organizations remain relevant as well as profitable in this competitive market. The process involves integrating innovative technologies and services into existing business practices and streamlining operations. The idea is to improve and add greater value to the final product. This involves adding new tools and applications, storing data, recording information, and a lot of new techniques. That's, of course, the digital aspect of things. But, if you spare a thought, we are talking about "transformation", which means introducing innovative ways to work with the existing team. Tricky, right? Anybody would be willing to buy a new set of digital suites with the latest tools, but who would run it? The key here is to ensure that the talent, or people, on board and the company culture are prepared to adapt.
A successful transformation is change management, and only people can make it happen. Getting Your Team Involved Any change is difficult. If you want to introduce major changes in your organization, you have to ensure everyone is with you, not only your leadership team. Yes, you cannot let the team make the big decisions for the organization, but involving your team in the process can give better results. A McKinsey study showed that whereas 84% of CEOs are dedicated to major transformation changes, only about 45% of frontline employees agree. Obviously, connecting those dots is a primary obstacle to enacting a successful strategy. There are many ways to achieve this: • Take feedback from the team about the changes you implemented • Keep your team abreast of the implementation strategy • Incentivize the team with internal marketing to sell the new technology to the most reluctant team members Transformation to the digital landscape can be potentially beneficial to an organization, but only if every single team member agrees to and accepts the change. Make sure you have a positive digital transformation team that understands why it is important to adopt new technology and what its benefits are. Invest & Train Your Team Going digital will have hurdles. Some of the team members may not be as tech-savvy as others. However, you cannot leave them behind. To bring them up to that level, a lot of training is required to help them adapt to the latest technology and tools. Remember, people have different ways of learning too, and speeds may differ. For instance, some team members may understand the concept in one demo session, whereas others may require multiple days of training to get a grip on the new technology. Experiment with varied training materials, such as online courses and hands-on learning, and give them the flexibility to choose how they want to learn. It may take some time to learn how to use new technology for better results, especially for team members who do not possess a natural inclination towards technology. Investing in training is a sure-shot way to leverage this transformation. Digital Transformation Framework Doesn't Change Everything The digital transformation framework is not about changing everything at once. When you start transforming the business, it is easy to get carried away. However, it's critical to know which technologies to adopt. You may consider the ones that employees would find easier to implement, and be selective in choosing the best way. Not everything that glitters is better. When you are planning to transform your business processes digitally, it is only to simplify the work process and support your team members. So, do not make it complicated. If you have any doubt about the changes, consult the frontline staff. For instance, if you want to adopt a new platform for online communication but cannot decide between Zoom, Teams, and Slack, consult your staff and take their opinion. Broaden Your Vision Do not have a myopic vision when it comes to a major transformation. Digital transformation services aim to make lives simpler and better. A successful transformation strategy is about introducing new changes into the business to make it more efficient and reduce employee workloads. If executed properly, such a digital revolution can lead to improved working practices, increased value for customers, and a lighter workload for the team. If your digital move is not ticking all the boxes, something is amiss.
Bring Change Right from the Top The concept of grassroots change is intuitive. However, in reality, change is more likely to take place if driven right from the top. Again, that does not mean a hierarchical or autocratic structure, or a culture that breeds fear. It simply implies leadership, both transformational and transactional. Where digital change is concerned, the primary implication is that no major change, or even an upgrade, to the organization is possible unless you select and develop the top leaders first. It's very clear that leadership, both good and bad, flows down to affect every aspect of an organization. The single biggest factor that determines the effectiveness of an organization's transformation is the CEO or the top leader of the organization. Of course, industry, culture, context, legacy, people, and the actual tech matter, just like other resources. However, these things are rather similar among competitors, while the values, mindset, integrity, and competence of the senior-most leaders stand out as the primary differentiating factor. Needless to add, everything in an organization can be imitated, but not talent. So, invest in the best talent for greater impact, which is exactly where you will get the highest value. Final Thoughts Technology is all about doing a lot more with minimal resources, yet the arrangement is effective only when technology is paired with the best human skills. While technological disruption leads to automation and eliminates out-of-date jobs, it has also created more jobs. This is precisely why innovation is also called 'creative destruction.' The creative facet of innovation depends on people. So, leveraging human adaptability to upskill and reskill the workforce can augment technology and humans simultaneously. Simply put: a brilliant innovation would be irrelevant if we do not have a sufficiently skilled workforce to implement it, and the most inspiring human minds would be of little use if they do not connect with technology. The core implication is that when leaders want to invest in new technology, they should consider investing in the people who make technology useful.
https://medium.com/@shahabsatti26/technology-7513363397a9
[]
2021-08-23 11:12:44.693000+00:00
['Technology', 'Qr Code', 'Monitoring', 'Computer Science', 'Face Recognition']
2,380
If you use Gmail, you may not have received Medium notifications this week.
If you use Gmail, you may not have received Medium notifications this week. Please contact Medium support who will sort it for you. My first action when I picked up this problem was to check my email settings in Medium, but the switch was on. Then I worried my laptop was the cause — I’ve had one crash on me before. I’m sure you can relate to our human tendency to imagine the worst! When Lucy The Eggcademic (she/her) said she had the same problem, I heaved a sigh of relief and sent a query to Medium support. I’m happy to report they fixed my problem in under an hour! Three days’ worth of email poured in, enabling me to catch responses I’d missed on Medium’s bell. They reckon it may be because of the massive Gmail outage on 14 December. Screenshot by Author I suggest you contact Medium Support if you have the same problem.
https://medium.com/everything-shortform/if-you-use-gmail-you-may-not-have-received-medium-notifications-this-week-392d7c9fe188
['Caroline De Braganza']
2020-12-19 10:21:39.345000+00:00
['Technology', 'Advice', 'Gmail', 'Medium', 'This Happened To Me']
2,381
CIT (Center of InsurTech, Thailand): the hub of innovation for new-era insurance technology, serving with heart to develop the insurance business for the people
https://medium.com/center-of-insurtech-thailand/cit-center-of-insurtech-thailand-%E0%B8%A8%E0%B8%B9%E0%B8%99%E0%B8%A2%E0%B9%8C%E0%B8%81%E0%B8%A5%E0%B8%B2%E0%B8%87%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B8%99%E0%B8%A7%E0%B8%B1%E0%B8%95%E0%B8%81%E0%B8%A3%E0%B8%A3%E0%B8%A1%E0%B8%94%E0%B9%89%E0%B8%B2%E0%B8%99%E0%B9%80%E0%B8%97%E0%B8%84%E0%B9%82%E0%B8%99%E0%B9%82%E0%B8%A5%E0%B8%A2%E0%B8%B5%E0%B8%9B%E0%B8%A3%E0%B8%B0%E0%B8%81%E0%B8%B1%E0%B8%99%E0%B8%A0%E0%B8%B1%E0%B8%A2%E0%B8%A2%E0%B8%B8%E0%B8%84%E0%B9%83%E0%B8%AB%E0%B8%A1%E0%B9%88-cf62af93f1fc
['Siwawut Wongyara']
2020-12-24 11:29:27.563000+00:00
['Thailand', 'Technology', 'Startup', 'Insurance', 'Insurtech']
2,382
Lessons from George Vanderheiden, one of the greatest investors I’ve ever known.
George Vanderheiden is one of the best investors I've ever known. He chose to retire in early 2000, massively underweight technology and massively overweight home builders, tobacco and other value-oriented stocks. He had written "Tulip bulbs for sale" on the whiteboard outside his office and left a package of them underneath in late 1999. 2020, especially the end of it, has me thinking of George and what I have learned from listening to him over the years. Manias often end at the end of a calendar year. To quote George via a New York Times article dated January 13, 2000: "What I've found over a lot of years is that a lot of these manias seem to end at the end of the year" with the article further noting that George "thought a value revival might be just around the corner, noting that the end of other investing crazes — the run-up in the Japanese stock market during the 1980's, the biotechnology craze of 1991 and the rise of the Nifty 50 in 1972 — all fizzled around the start of a new year." Obviously prescient, and it does make one think about the current environment, which is clearly a mania of sorts even if it is a "narrower" mania than 1999/2000. It is important to start the year thinking about where one could lose the most money, rather than where one can make the most money. George spoke to the analysts and fund managers from time to time after he retired. During one of those talks, hosted by Will Danoff, George observed that while most fund managers began the year thinking about which stocks they should own to make the most money, he began the year with the knowledge that roughly 50% of his positions were mistakes, and tried to carefully think about which of those mistakes would cost him the most money and eliminate those positions. Minimizing mistakes is essential given that almost all investors — even the best ones — are wrong circa 50% of the time. I really love the humility inherent to this mentality, which is aligned with my increasing belief that "I don't know" are the three most important words in investing, not "margin of safety." One can always be wrong, no matter how much work has been done or how big the "margin of safety" appears to be. Being too early is the same as being wrong. Analysts and fund managers often say that "they were early," rather than wrong. George's point was that being too early is a mistake rather than a defense. Time in the market is more important than timing the market, but timing does matter for individual stocks as the "price you pay determines the return you get." Being a fund manager makes it easier to change your mind relative to being an analyst. George once said that being an analyst, forced to publicly defend and justify your decisions, made it much harder to change your mind relative to fund managers "who could buy or sell in the dark of the night without anyone knowing." Very true and important given that I believe investing success comes down to finding the right balance between conviction and flexibility; i.e. changing your mind at the right time, especially when you have been wrong. As a fund manager, you cannot take so much risk that you get taken out of the game or take yourself out of the game at the wrong time. George was obviously 100% right about the technology bubble and value resurgence that followed. He was positioned perfectly for the next 3–5 years, but he wasn't there to reap the benefits.
While George chose to retire, I am also reasonably confident that the relentless questioning from management, consultants, and the press, and the constant outflows from his funds in 1999, contributed to his decision. This dovetails with both the idea of "client alpha" and George's own "sometimes being too early is the same as being wrong." It is difficult to even say the latter because George was SO right for the right reasons and didn't just own value stocks, he owned the stocks that were the best-performing value stocks over the next few years. But the larger point for me is that there is a level of risk that is unhealthy for all constituents, even the fund manager. There is nothing better when you are winning and nothing worse when you are losing. "Client alpha" really matters. "Client alpha" is the idea that having the right clients who are aligned with what you are trying to achieve as a fund manager is critically important. Otherwise capital is often taken away at exactly the wrong time. Buffett was able to stay in the game in 1999 because he had a permanent capital vehicle and a carefully cultivated public image. This is one reason that being a good communicator is important to being an investor if one is managing external capital — one cannot have "client alpha" if the clients don't understand what the fund manager is trying to accomplish. George was a great communicator, but no one has ever been better than Buffett. George is one of the greats. I am grateful for all that I learned from him over the years. And to the Morningstar headline from October 5, 2000, "Gasp, could George Vanderheiden have been right," I will simply say yes, George was right.
https://medium.com/@gavin-baker/lessons-from-george-vanderheiden-one-of-the-greatest-investors-ive-ever-known-4bfa74c4b6d4
['Gavin Baker']
2020-12-26 16:47:39.311000+00:00
['Stocks', 'Investment', 'Investing', 'Technology', 'Stock Market']
2,383
Time to shape your own financial universe
Bitcoin is laying the foundation of a whole new world right as we speak. A world with no intermediaries or centralized institutions, where information and value will be free of all restraints and limitations. 10 years of BTC history have slowly but surely turned it into digital "gold" — an unstoppable force heading straight up. Nowadays, value is dictated either by demand or by consensus. Demand gives value to things like food and housing, while consensus gives it to non-essential resources such as gold, diamonds, oil or even fiat. The latter's actual value lies in its recognition by the masses. And the issuers of these resources dictate their increase in popularity using several means, including governmental and military force. This is also the case for Bitcoin. The more people and institutions recognize and agree on its value, the more it rises. Other cryptocurrencies follow this same pattern. However, coins being a valid payment method for, let's say, gas could further create and increase their tangible value due to demand. The problem arises when we consider where this demand comes from. A mandatory need for payments? Making transactions? How about concluding smart contracts? Smart contracts have remained, until recently, unexplored in terms of potential and intrinsic business value. Their sole usage was related to miscellaneous app development. But DeFi brought smart contracts into the spotlight and uncovered their amazing business value. Just think about the fact that all loan operations could be implemented and carried out through code. A simple, transparent and efficient way to manage the entire loan process would be available to anyone, while keeping an exceptionally low cost compared to current bank fees. Taking a closer look at the ICO smart contract developed by Fusion reveals a coded, automated finance service very similar to an IPO. With over 13,000 users and 72 million USD worth of ETH over the hard cap automatically returned to participants, not one single server was run and the human resources involved were a fraction of what traditional institutions employ. And still, we managed to carry out the IPO-style ICO by simply deploying a smart contract. The total cost of the whole operation added up to a mere 5 ETH for gas. This was made possible by decentralized, coded, and automated finance applied through smart contracts. We are confident any type of financial service can be coded and automated. Thus, the entire financial system will run smoothly, with no boundaries or middlemen involved. Fusion is especially designed to serve as the infrastructure behind it all. And Chainge will be the exclusive medium that empowers mass users so they can enjoy the whole range of benefits that come with digital, coded and automated finance. With Chainge, anyone can be an independent digital/crypto bank-like entity, serving a worldwide network of users.
https://medium.com/@chainge-finance/time-to-shape-your-own-financial-universe-c3e2278e7492
['Chainge Finance']
2020-12-15 11:09:59.536000+00:00
['Defi', 'Decentralized Finance', 'Blockchain Technology', 'Smart Contracts']
2,384
Join My Java Facebook Group: “Java Software Development Group”
https://medium.com/the-java-report/join-my-java-facebook-group-a32cee52a0d9
['Adrian D. Finlay']
2018-04-24 12:39:37.609000+00:00
['Programming', 'Java', 'Technology', 'Tech', 'Software Development']
2,385
SQL vs. NoSQL Database: When to Use, How to Choose
NoSQL for Semi-structured Data NoSQL datastores cater to semi-structured data types: key-value, wide column, document (tree), and graph. Key-Value Datastore A key-value store is a dictionary or hash table database. It is designed for CRUD operations with a unique key for each record: Create(key, value): Add a key-value pair to the datastore Read(key): Look up the value associated with the key Update(key, value): Change the existing value for the key Delete(key): Delete the (key, value) record from the datastore The values do not have a fixed schema and can be anything from primitive values to compound structures. Key-value stores are highly partitionable (and thus scale horizontally). Redis is a popular key-value store. Wide-column Datastore A wide-column store has tables, rows, and columns. But the names of the columns and their types may be different for each row in the same table. Logically, it is a versioned sparse matrix with multi-dimensional mapping (row-value, column-value, timestamp). It is like a two-dimensional key-value store, with each cell value versioned with a timestamp. Wide-column datastores are highly partitionable. They have a notion of column families that are stored together. The logical coordinates of a cell are: (Row Key, Column Name, Version). The physical lookup is as follows: Region Dictionary ⇒ Column Family Directory ⇒ Row Key ⇒ Column Family Name ⇒ Column Qualifier ⇒ Version. So, wide-column stores are actually row-oriented databases. Apache HBase was the first open-source wide-column datastore. Check out HBase in Practice for core concepts of wide-column datastores. Document Datastore Document stores are for storing and retrieving a document consisting of nested objects: a tree structure such as XML, JSON, or YAML. In a key-value store, the value is opaque. But document stores exploit the tree structure of the value to offer richer operations. MongoDB is a popular example of a document store. Graph Datastore Graph databases are like document stores but are designed for graphs instead of document trees. For example, a graph database is well suited to storing and querying a social connection network. Neo4J is a prominent graph database. It is also common to use a JanusGraph-style graph index over a wide-column store. SQL vs. NoSQL Database Comparison Non-relational NoSQL datastores gained popularity for two reasons: RDBMS did not scale horizontally for Big Data, and not all data fits into a strict RDBMS schema. NoSQL datastores offer horizontal scale at various CAP Theorem tradeoffs. As per the CAP Theorem, a distributed datastore can give at most 2 of the following 3 guarantees: Consistency: Every read receives the most recent write or an error. Availability: Every request gets a (non-error) response, regardless of the individual states of the nodes. Partition tolerance: The cluster does not fail despite an arbitrary number of messages being dropped (or delayed) by the network between nodes. Note that the consistency definitions in the CAP Theorem and in ACID transactions are different. ACID consistency is about data integrity (data is consistent w.r.t. relations and constraints after every transaction). CAP consistency is about the state of all nodes being consistent with each other at any given time. Only a few NoSQL datastores are ACID-compliant.
Most NoSQL datastores support the BASE model: Basically Available: Data is replicated on many storage systems and is available most of the time. Soft-state: Replicas are not consistent all the time; the state may only be partially correct as it may not yet have converged. Eventually consistent: Data will become consistent at some point in the future, but there is no guarantee when. Difference between SQL and NoSQL Differences between RDBMS and NoSQL databases stem from their choices for: Data Model: RDBMS databases are used for normalized, structured (tabular) data strictly adhering to a relational schema. NoSQL datastores are used for non-relational data, e.g. key-value, document tree, graph. Transaction Guarantees: All RDBMS databases support ACID transactions, but most NoSQL datastores offer BASE transactions. CAP Tradeoffs: RDBMS databases prioritize strong consistency over everything else. But NoSQL datastores typically prioritize availability and partition tolerance (horizontal scale) and offer only eventual consistency. SQL vs. NoSQL Performance RDBMS are designed for fast transactions updating multiple rows across tables with complex integrity constraints. SQL queries are expressive and declarative. You can focus on what a transaction should accomplish; the RDBMS will figure out how to do it. It will optimize your query using relational algebra and find the best execution plan. NoSQL datastores are designed for efficiently handling a lot more data than RDBMS. There are no relational constraints on the data, and it does not even need to be tabular. NoSQL offers performance at a higher scale, typically by giving up strong consistency. Data access is mostly through REST APIs. NoSQL query languages (such as GraphQL) are not yet as mature as SQL in design and optimizations, so you need to take care of both what to do and how to do it efficiently. RDBMS scale vertically: you need to upgrade hardware (a more powerful CPU, higher storage capacity) to handle the increasing load. NoSQL datastores scale horizontally: NoSQL is better at handling partitioned data, so you can scale by adding more machines.
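To make the key-value CRUD interface described above concrete, here is a minimal in-memory sketch in Python. It illustrates only the four operations; a real key-value store such as Redis adds persistence, networking, replication, and partitioning on top of this idea, and the class and key names here are made up for the example:

```python
from typing import Any, Dict, Optional

class KeyValueStore:
    """Minimal in-memory key-value store illustrating the CRUD operations."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def create(self, key: str, value: Any) -> None:
        # Add a key-value pair to the datastore.
        if key in self._data:
            raise KeyError(f"key already exists: {key}")
        self._data[key] = value

    def read(self, key: str) -> Optional[Any]:
        # Look up the value associated with the key (None if absent).
        return self._data.get(key)

    def update(self, key: str, value: Any) -> None:
        # Change the existing value for the key.
        if key not in self._data:
            raise KeyError(f"no such key: {key}")
        self._data[key] = value

    def delete(self, key: str) -> None:
        # Delete the (key, value) record from the datastore.
        self._data.pop(key, None)

store = KeyValueStore()
store.create("user:42", {"name": "Ada", "plan": "pro"})  # values have no fixed schema
store.update("user:42", {"name": "Ada", "plan": "free"})
print(store.read("user:42"))
store.delete("user:42")
```

Because each record is addressed only by its key, the keyspace can be hashed or range-split across many nodes, which is what makes key-value stores so easy to partition and scale horizontally.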
https://towardsdatascience.com/datastore-choices-sql-vs-nosql-database-ebec24d56106
['Satish Chandra Gupta']
2021-09-03 01:25:21.273000+00:00
['NoSQL', 'Artificial Intelligence', 'Big Data', 'Technology', 'Data Engineering']
2,386
Some days we talk, some days we type.
Reading your students from day to day and giving feedback on their learning is the basis of teaching and guiding students through the curriculum in high school. Great teachers have the innate ability to look at a student and see where they are by reading body language and facial expressions. I like to think of us as FBI agents, only we are not trying to interrogate, we are trying to teach. In the pandemic, reading students has become tricky. Great teachers don't have the ability to read students like they used to. Most of the students are sitting behind a screen. Most will not put their cameras on. I can't blame them. The learning environment has now invaded their homes. Many don't want to let the outside world into theirs. I have found this is for a variety of reasons. As teachers, we have been challenged to read students differently. I took that challenge this year. Here is what I have found so far. Listening to tone, pace, and background noise, and noticing whether a student even answers a question, has become the norm of reading the virtual room. Some days we talk, some days we type. Some days, the virtual classroom is energetic. There is a lot of conversation. The students are all participating. You can feel it as a teacher. It feels good. We are used to this type of interaction. It feels somewhat "normal." Some days are completely quiet. No matter what you put out there as the virtual teacher, there is little verbal response. I usually move to the chat when this happens. I will pose a question in the chat room and see if I get a response. It usually works. When a response comes in, I praise the effort. Thanking the student for their thoughts comes first. Then I strategize to branch out the discussion with a "what other things do you think about this… anyone can chime in." The typing usually starts to build. There is still complete silence in the virtual space; however, the typing is fast and furious. The most important thing is to be humble, genuine, and honest about what needs to be learned that day. Students know when you are getting frustrated or disheveled in the lesson, even when they are virtual. When I hit a high level of frustration because I can no longer read the virtual room, I ask students what are some cool things other teachers have done to make their lessons fun or engaging. I learn with the students how to keep moving forward. Teaching in a pandemic has taught me so much more as an educator. I have learned to embrace learning, whether virtual or in person. I feel ready to take on anything in education going forward!
https://medium.com/@peterhostrawser/some-days-we-talk-some-days-we-type-df797f003dda
['Peter Hostrawser']
2020-12-26 15:39:13.336000+00:00
['Learning', 'Education Technology', 'Education', 'Virtual Learning', 'Teaching']
2,387
❔ re:compass #44: A Good Question
🔺 START Summit When: 22–27 March 2021 Access: CHF 9–199 Format: hybrid conference Streaming live from hubs in San Francisco, Boston, Shanghai, Rio de Janeiro and Bangalore, START Summit brings talks about innovation and entrepreneurship while aiming to connect more than 5000 startups, investors, corporates and young talents. START Summit is Europe's largest student-run conference for entrepreneurship and technology, which this year will have two physical locations in St. Gallen and Helsinki while giving everyone around the world the opportunity to tune in to the event programme online. 💱 Finovate Europe When: 23–25 March 2021 Access: £599 Format: online conference Finovate is a curated, fast-paced fintech conference, and gets right to the point. 20+ product demos and content sessions are spread across 3 days so that content is more manageable for digital consumption. Meetings can be scheduled before, during and after the event, 24 hours a day, to accommodate all time zones. With 1000+ attendees covering all the verticals of the fintech ecosystem, there's no better way to meet and stay informed. 🛡️ PRIV8 When: 23–25 March 2021 Access: free Format: online conference The PRIV8 digital privacy summit by Orchid will feature keynotes from all fronts of the privacy war, including from the whistleblower and cybersecurity expert Edward Snowden. The conference will focus on the battle lines drawn between freedom and security in our seemingly perpetual privacy crisis. Stellar speakers will cover different aspects of digital privacy, from how we communicate, to how we transact, to how we live in an era of pervasive digital surveillance. 🐺 WOLVES Summit When: 24–26 March 2021 Access: €54 — €690 Format: online conference WOLVES Summit is back — this year on a truly global scale. The landmark event in the European startup ecosystem offers an exclusive programme encompassing everything from content, virtual meet-the-speaker breakout sessions and workshops to, of course, our famous 1:1 meetings — the best chance to find an investor who is interested in working with startups. Join the conference for the content, stay for their Wolves Match matchmaking system. 🤝 Virtual Shake-Up 3.0 When: 24–25 March 2021 Access: free Format: online conference Virtual Shake-Up is dedicated to all event professionals, aptly subtitled "Event strategies 2021 and beyond". 3.0 puts a spotlight on hybrid events, where the top leaders in the event industry will provide you with actionable advice and tested strategies for hosting both virtual and hybrid events — and will share their vision for 2021. 🌱 Scaling sustainably from seed to IPO When: 24 March 2021, 16:00 GMT Access: free Format: online session Successfully scaling an organization is hard — no matter the size of the company. This LeadDev panel will engage in conversation with engineering leaders who have scaled their companies, from seed to Series A, and up to IPO and beyond. They'll be discussing their experiences scaling organizations of different sizes, with varied team structures and unique visions. Deciding on your company's mission and vision, developing your marketing strategies, and hiring more employees are all essential to scaling and come with a unique set of challenges — so how do we go about tackling them? 📻 The power of audio: Building customer connections and boosting loyalty When: 24 March 2021, 17:00 GMT Access: free Format: online session Don't miss this VB Live event to learn how branded audio connects brands and customers in powerful ways.
Audio is booming and it is an incredibly powerful way to reach customers, and brands that do it right can increase personalization, consumer convenience, and customer loyalty. Join the event to learn how brands can include both branded audio media and user-generated media in their go-to-market strategy, as well as the benefits of branded audio media. 🦾 [Podim] Encouraging an entrepreneurial mindset in CEE region When: 25 March 2021, 18:50 CET Access: free Format: online session Podim is organizing their satellite event #3 and will take a closer look at the Central Eastern European startup ecosystem and discuss what can be done to support and improve the ecosystem. Speakers include Gabriele Tatzeberger from the Vienna Business Agency, Tanja Sternbauer from The Female Factor and Matija Nakić from Farseer, hosted by Dejan Jovićević from Der Brutkasten. The event is going to be streamed on Podim's Facebook page and on Der Brutkasten's Facebook page; however, there are also 50 Zoom spots that direct participants can apply for. 🏰 Disney x Women in Big Data Women's History Month Celebration When: 25 March 2021, 16:00 PDT Access: free Format: online session In this dynamic panel discussion, women in technology at Disney will take you along their career journeys to and within the tech world and provide their best advice. Following the panel, attendees will be invited to participate in a networking session where they can connect with the panelists, additional Disney technologists and recruiters, and members of the Women in Big Data community in virtual breakout rooms.
https://medium.com/reevent-me/re-compass-44-a-good-question-244df6699047
['Aiste Lehmann']
2021-03-19 07:48:36.787000+00:00
['Technology', 'Thoughts', 'Startup', 'Events', 'Entrepreneurship']
2,388
Understanding and fixing N+1 query
In this article, you’ll learn about the famous N+1 query that everybody is talking about, and how to fix and prevent it. Regarding backend performance, there is a performance issue that everybody has heard about at least once: the N+1 query. What you’ll hear less often is that baking a chocolate cake is a perfect analogy to explain it. What’s an N+1 query? TL;DR: The N+1 query problem happens when your code executes N additional query statements to fetch the same data that could have been retrieved when executing the primary query. If you understood the previous statement, you can skip right to the next section: “How to fix it?” If you did not understand the previous statement, don’t worry. We’ll sort it out together. The recipe analogy First, let’s put aside all those queries, SQL, and web application shenanigans, and imagine that you’d like to bake a cake. For the sake of the example, let’s imagine that your fridge and pantry are not in your kitchen but in your attic, forcing you to take the stairs any time you’d like to fetch something out of the fridge. Now, let’s imagine that you go to your bedroom to grab your cookbook, find your chocolate cake recipe, and go back to your kitchen with it. You check the first line: 200 grams of dark chocolate. You go up to your attic to fetch a bar of chocolate, go back to the kitchen, and read the second line: 3 eggs. Once again, you take the flight of stairs to fetch those 3 eggs, go back to the kitchen and read the next line of the recipe: 100 grams of butter. A bit exhausted by all this back and forth, you take the path to the attic, AGAIN. If only there were a simpler way… What you would be experiencing, if you cooked this way, is exactly what happens when we have an N+1 query problem. Your ORM is forced by your code to do N additional queries (the round trips to fetch the ingredients) after the initial query (when you fetched your recipe). A concrete example We’ll stay in the cooking business by considering a web application that displays a listing of cooking recipes like the following:
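The article’s own walkthrough continues in its original stack (the tags suggest Ruby), so as a language-neutral illustration, here is a minimal Python sketch using sqlite3 that reproduces the same pattern with hypothetical recipes and ingredients tables, first with N+1 round trips and then fixed with a single JOIN. The table and column names are illustrative, not taken from the article.

```python
# A minimal, self-contained sketch of the N+1 pattern using sqlite3.
# Table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE recipes (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE ingredients (id INTEGER PRIMARY KEY, recipe_id INTEGER, name TEXT);
    INSERT INTO recipes VALUES (1, 'Chocolate cake'), (2, 'Pancakes');
    INSERT INTO ingredients VALUES
        (1, 1, 'Dark chocolate'), (2, 1, 'Eggs'), (3, 1, 'Butter'),
        (4, 2, 'Flour'), (5, 2, 'Milk');
""")

# N+1 version: one query for the recipes, then one extra query per recipe.
recipes = conn.execute("SELECT id, name FROM recipes").fetchall()
for recipe_id, name in recipes:
    ingredients = conn.execute(
        "SELECT name FROM ingredients WHERE recipe_id = ?", (recipe_id,)
    ).fetchall()  # this statement runs N times, once per recipe

# Fixed version: a single JOIN fetches everything in one round trip.
rows = conn.execute("""
    SELECT r.name, i.name
    FROM recipes r JOIN ingredients i ON i.recipe_id = r.id
""").fetchall()
```

In ORM terms, the fix is the same idea: ask the ORM to eager-load the association up front instead of lazily loading it inside the loop.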
https://medium.com/doctolib/understanding-and-fixing-n-1-query-30623109fe89
['Alexandre Ignjatovic']
2020-07-15 11:33:33.079000+00:00
['Sql', 'Ruby', 'Software Development', 'Performance', 'Technology']
2,389
SpaceX Launches 88 Satellites in Ridesharing Mission
SpaceX successfully launched 88 satellites into polar orbit under the Transporter-2 mission. This is SpaceX’s second dedicated mission under the SmallSat Rideshare program, which intends to launch satellites from different companies and startups, like a carpool. On June 30, around 3:31 pm EST, Falcon 9 took off from Cape Canaveral, Florida. The first stage booster, which was reused for the eighth time since it entered service last year, separated at 3:34 pm EST and landed back at the Cape at 3:39 pm EST. This marks SpaceX’s 20th launch this year and, importantly, the first one in which the first stage booster landed onshore at Landing Zone-1, the company’s designated landing pad. All previous missions this year landed at sea on drone ships. https://twitter.com/SpaceX/status/1410392725996904448 The launch also raises SpaceX’s tally to 127 missions. With months left in the calendar year, the company looks set to break its own record of 26 launches, set last year. Transporter-2 mission launch. Image: SpaceX Although the launch was originally scheduled for Tuesday, it was stopped at T-11 seconds before liftoff due to the entry of a helicopter into the “keep out zone”. Elon Musk was quite disappointed about Tuesday’s dangerous range violation. He tweeted that the existing regulatory system is faulty and that a spacefaring civilization requires major regulatory reform. #Transporter-2 Mission The Transporter-2 mission consisted of 88 satellites, of which 85 were launched for external government and commercial customers. The other 3 were the company’s own Starlink internet satellites. All of them flew as part of this one mission, the company’s second dedicated ride-sharing mission to help other startups and companies share a launch. The satellites include the first launch for Umbra — a space intelligence startup that raised $32 million in January, and YAM-2 and YAM-3, which are Loft Orbital’s satellites. Five independent sensors are present in both YAM-2 and YAM-3, serving five separate customers. The Pentagon’s Space Development Agency (SDA) had four satellites on Transporter-2 for $21 million. Transporter-2 is the successor to Transporter-1, which was SpaceX’s first ridesharing mission. Transporter-1 carried 143 satellites in January but carried less orbital mass than Transporter-2. https://www.youtube.com/watch?v=sSiuW1HcGjA
https://medium.com/@way2gsn/spacex-launches-88-satellites-in-ridesharing-mission-c91905e60804
[]
2021-07-06 12:22:00.300000+00:00
['Spacex', 'Space Exploration', 'Satellite Technology', 'Mission', 'Elon Musk']
2,390
How to survive and thrive during the 2020s
Time for playing or exploring is not wasted time. We need to give ourselves more opportunities for pure play, exploration, and adventure. This means sometimes we will be wasting time, and that is OK, as long as we keep learning and exploring. Even if we have no idea how to solve a problem, we can still figure out what to do and improvise. We may not know how to go forward, but we can always learn, figure out a strategy, iterate, develop, practice, and improve. We need to take more risks — it is OK to look stupid and try out new things. Change is always frustrating and uncomfortable, but it is the only way forward. We need to treat every experience as a learning experience. Go out there and start again. Embrace failure Perhaps we will suck, and that is fine — we need to stop taking ourselves so seriously. If we really want to get lucky in the long term, we need to give ourselves more opportunities for failure. Failures are merely stopping points along the bigger journey. We need to celebrate our failures and use them as learning opportunities. We need to develop more grit and resilience to get back up after a failure. Take a long-term view Imagine your best future self after 10 years. Start acting like that person today. Act like the CEO of your own life. Where do you want to be in 10 years? Do not leave it to others to choose your destiny. Make a plan, and work toward it. Take a small action every day, consistently. You need to think and act long term and not give up on your dreams. Even if the whole world is turning against your ideas, you need to hang in there and be patient. J. K. Rowling was rejected by at least a dozen publishers before she was able to publish Harry Potter. Star Wars was rejected by many major studios before it became a household brand and a franchise worth 70 billion dollars. We need to remember that this is a marathon and we are playing the long game. We should not be discouraged by failures and rejections. Keep imagining Imagination has become one of the most critical factors for success and happiness in this age. We can use imagination to tap into rich worlds of possibility, to dream about our lives, to invent new things, to create new theories, and to share our stories with the world. Imagination allows us to act like children, be foolish and curious, let go, have fun, mess things up, get out of the rut, and invent new ways of thinking. These actions will increase our neuroplasticity and enable us to come up with a constant stream of fresh ideas. The fascinating thing about imagination is that it is unlimited. The more you use it, the more of it you will have. Imagination is the key to success, since each plan starts with an image. This might be a vision, a dream, a possibility, or a mental picture of what you want to turn into reality. Imagination is your blueprint, your compass, and your map for navigating the future. Imagination allows us to communicate with our subconscious mind and create positive mental imagery. If we believe that our dreams can be realized, we will work harder towards making them happen. Imagination is like a muscle, and it can be strengthened through practical exercises and experiments we can easily apply in our daily lives. How do you exercise your imagination regularly? You can give your brain puzzles, adventures, problems, questions, experiments, visualization exercises, and challenges every day. You can hunt for the most interesting stuff. 
You can spend time every day learning new things that excite and surprise you. You can be a hunter and learner of the most interesting things. You can provide yourself more opportunities for pure-play, exploration, and adventure. Similar to Bill Gates’ think weeks, schedule time in your calendar for learning and exploring new things. Time for playing or exploring is not wasted time. You can be obsessive in following your passions and curiosities. Keep learning Invest in yourself and your learning every day. This is the biggest investment you can ever make. Be an autodidact: A self-taught person individual who initiates and manages his/her own learning and reads voraciously. Assume responsibility for your own learning. Go out of your comfort zone. Learn outside your discipline. Think and act wider. Education as usual is dead. Long live lifetime learning! Be a polymath: An individual whose knowledge spans a significant number of subjects, known to draw on complex bodies of knowledge to solve specific problems. To be a polymath, expand your horizons and read books from diverse disciplines. Be greedy about your learning. Read widely and diversely beyond disciplines. Read at least 100 books every year (which means 2 books per week). To solve wicked problems of the 21st century, you need to think beyond borders and disciplines. Cross boundaries — there are no borders. Play the long term game. Imagine that you are running a marathon. Image created by Author Below is the new success equation: Ask Right Questions + Follow Your Interests + Intense Curiosity + Continuous Learning + Imagination + Passion You need to design multiple lives and multiple careers for your future. You are not cut out for just one job. Your imagination is without boundaries — why do you limit yourself to one job or one career? Think about where you will be after 10 years. Act according to that vision. You can be as big as your dreams and ambitions. So, dream big dreams, be specific in your vision and capture your wishes and desires in your diary. Try to master the mental models of different fields and use them to develop a holistic perspective and make better decisions in your life. Live exponentially and compound yourself for the long term. When you are setting your goals, always leave an open room for X objectives(unknown goals). X objectives are goals that you cannot see right now. They will emerge as you navigate uncharted territories. How can you compound your skills and assets for the long term? Image created by Author Design your life by surprise. Leave more room for flexibility, serendipity, exploration, and adventure. It is not easy to scare yourself and set yourself new challenges and adventures. In order to do this, you must get out of your comfort zone. You must set sail to new horizons and explore your own blue oceans. If you do this, you will be rewarded. You will be happier. Forget the career ladder and start creating your own assets. Remember: Imagination and asset creation are linked very closely. Imagine and create your own game. It is never too late to follow your interests, curiosities, and passions. Show up for creative work and use random prompts or anchors to get going. Start small — small is beautiful. You need to establish a system of creativity to create your own assets: Consistent Small Actions + Smart Moves + Hard Work + Play Your Game Image created by Author You need to astonish and amaze yourself every day. You can use improvisation to increase adventure and quality in your life. 
Try to bring automatic writing, drawing, doodling, storytelling, ideating, designing, creating, dancing, and singing into your life. Image created by Author Create your personal poster and manifesto. The world needs interesting, unique, weird people — like you! What kind of superhero are you? How will you help other people? Brainstorm ways to amplify your strengths. Image created by Author This is a marathon and you are playing the long game. If you really want to get lucky in the long term, you need to give yourself more opportunities for failure. These strategies will enable you to develop creative solutions, experiments, and innovations for the challenges of the new decade. This essay has attempted to outline the vision and strategies for creating a truly creative life, ready to embrace this crazy world. The new decade is the right time for you to boost your creativity and imagination. You can design and create a happy life for yourself by designing a decade full of creative challenges and entrepreneurial adventures. You can create your own renaissance during the 2020s. Fahri Karakas is the author of Self-Making Studio. You can explore more here.
https://medium.com/journal-of-curiosity-imagination-and-inspiration/how-to-survive-and-thrive-during-the-2020s-38050a04487f
['Fahri Karakas']
2020-08-09 21:11:56.451000+00:00
['Work', 'Self', 'Technology', 'Productivity', 'Creativity']
2,391
Learn how a Microsoft designer built an internal Icon Library in his spare time
Imagine a world where icons coexist no matter which design tool a UI/UX professional prefers. The icons live peacefully together, properly tagged and classified, easily searchable by everyone. One designer at Microsoft, Jackie Chui, built this utopia. In three weeks flat, he spun up a browser-based library with the company’s full set of 4,000 icons. He made searching intuitive by including class names for engineers and tags for designers. Engineers can even do a reverse search to grab metadata by pasting an icon into the library. Perhaps the best feature of them all is cross-tool functionality. Because Jackie’s library runs in the browser, anyone at Microsoft can bring the icons straight into their preferred design tool. No more arguing over which application should rule them all. If you feel like your icons live in a siloed design nightmare, take a look at Jackie’s pragmatic approach to building a shared icon library. How to set up your own Icon Library 1. Research the competitive landscape and your users’ needs First, like any designer worth their salt, Jackie did his preliminary research. He talked to several designers at Microsoft and watched their design processes in action to best understand their icon issues. Next, he looked for existing products on the market. The best icon library with organizing and copy/paste capabilities that he could find was IconJar, but it still lacked several features Microsoft needed: It didn’t have the ability to share and crowdsource tags. Jackie knew he wanted to give everyone the autonomy to contribute tags for icons. It wasn’t browser-based, meaning the icons couldn’t live in the cloud. It was Mac only — a big turn-off for the company that created Windows. He decided he’d need to build a tool from scratch to meet the company’s unique needs. He wanted to keep the project a secret and launch it as a fun surprise for his coworkers, so he worked on it after hours. 2. Design and develop your tool with the wisdom of a designer and engineer As an early step, he built a basic version of his planned icon library’s UI in Sketch. (These days Jackie uses Figma — we’ve become the primary design tool for Microsoft’s Cloud and AI design studio teams 💪.) He took inspiration from IconJar’s design and mocked up a similar version, styling it in Microsoft’s Fabric design language. Once he was finished with the design, he needed to build it. He had some experience with HTML, CSS and JavaScript from creating his portfolio website, so he decided to leverage that existing knowledge by tapping a beginner-friendly JavaScript library called Meteor.js. It allowed him to build both the front-end and back-end databases without spending too much time mastering a new programming language. He took a React tutorial on Meteor’s website and applied what he learned to build the icon library. For example, Meteor’s tutorial taught Jackie how to build a database for a to-do list, and he extended that concept to creating a database for his icon library. With a real problem in mind that needed fixing, he felt more than motivated to learn along the way. 3. Collect and extract the icons from the company’s repository For those not in the know, icons are normally stored in a font file. The challenge with icons is that you cannot ‘type’ the icons using your keyboard since there are many more characters than keys on a keyboard. Instead, you have to copy the ‘icon character’ and paste it into your design to use it. 
Previously, designers had to keep a file with a bunch of icon characters that they used often so they could copy and paste in their designs. What Jackie’s tool does is allow designers to search and copy the ‘icon character’ easily in one central location. He downloaded and extracted the files and got the actual unicode characters of each icon. Next he found a Microsoft documentation page with all the icons and their corresponding class names used by engineers. He copied the list of icon names into an Excel spreadsheet then converted that into a JSON file to use in his code. Presto. 4. Distribution After three weeks of not sleeping a whole lot, the first version of his Icon Library was ready. He hosted the tool on Microsoft Azure, a cloud-computing service for building and managing services and applications that is open to the public . At this point he didn’t think too much about the distribution, he just sent a simple email with a link to the tool to his team, then it spread to the rest of the design studio through word-of-mouth. Instantly it was a hit (see pile of emails below for evidence.) Currently, Jackie’s working on a V2 of his Icon Library. His coworkers are helping him find bugs and suggesting new features, however it’s already wormed itself into many designers’ workflow. Once he’s done building the last few features and fixing the remaining bugs, he’ll share it more publicly to other Microsoft teams to extend the tool’s impact. Plans for the future So, what’s next for our tireless overachiever? 😉 In the coming months Jackie wants to develop a converter that, with one click, could take all 4000+ icons from his Icon Library and turn them into Figma components. This would allow everyone at Microsoft to organize and search for icons much more easily inside Figma without having to go through an intermediary tool. Yep, his new tool will make his last one obsolete. The potential for this goes far beyond Microsoft icons. Eventually Jackie plans to make his converter available to the design community, allowing anyone to be able to drop in an icon font file and get all their icons back as Figma components.
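To illustrate the extraction step described above, where the list of icon class names was copied into a spreadsheet and converted into JSON, here is a small hypothetical Python sketch of how such a conversion might look. The file names, column headings, and JSON field names are placeholders; the article does not publish Jackie’s actual code.

```python
# Hypothetical sketch: convert a CSV export of icon class names and codepoints
# into a JSON file that a browser-based icon library could load.
import csv
import json

icons = []
with open("icons.csv", newline="", encoding="utf-8") as f:  # assumed columns: ClassName, Codepoint
    for row in csv.DictReader(f):
        codepoint = int(row["Codepoint"], 16)       # hex codepoint string such as "E700"
        icons.append({
            "className": row["ClassName"],          # the name engineers search by
            "character": chr(codepoint),            # the copy/paste-able icon character
            "tags": [],                             # crowdsourced tags get added later
        })

with open("icons.json", "w", encoding="utf-8") as f:
    json.dump(icons, f, ensure_ascii=False, indent=2)
```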
https://medium.com/figma-design/learn-how-a-microsoft-designer-built-an-internal-icon-library-in-his-spare-time-1dfa5f84e886
['Valerie Veteto']
2018-08-16 19:53:07.542000+00:00
['Design', 'Editorial', 'Technology', 'UX']
2,392
Announcing Criptext Pro
Thanks to feedback from users all over the world, we’re closer to leaving Beta than ever before. With the end of our Beta stage in sight, we want to announce the launch of Criptext Pro. What is Criptext Pro? Criptext Pro is an enhanced secure email experience with productivity features for the most avid email users. It’s basically Criptext on steroids. Features include Custom Domains, Send Later, Reminders, Unlimited Read Receipts and much more. Criptext Pro will cost $10/mo and will be available in February 2020. Will Criptext continue to be free? Yes. We believe privacy and security are a right, not a privilege, which is why Criptext will always have a Standard version for free. Early user privileges Existing users will continue to have all the features that they enjoy today for free. That is, you won’t have to pay to have Read Receipts, for example. Additionally, as an early Criptext user you will have special perks, which include: 1. 25% discount on Criptext Pro 2. Unique legacy profile badge 3. Gif on profile pictures 4. Double points on referral program 5. 90-day Criptext Pro trial Soon we’ll be formally launching Criptext Pro on our website, where you’ll be able to find screenshots and more details about this enhanced version of our secure email service.
https://medium.com/criptext/announcing-criptext-pro-686d2d980163
['Mayer Mizrachi']
2019-11-14 15:40:14.388000+00:00
['Startup', 'Infosec', 'Email', 'Technology', 'Cybersecurity']
2,393
Meet our new book — Jumpstart Snowflake — A Step-by-Step Guide to Modern Cloud Analytics
“Don’t Shoot for Middle. Dare to think big. Disrupt. Revolutionize. Don’t be afraid to form a sweeping dream that inspires, not only others, but yourself as well. Incremental innovation will not lead to real change — it only improves something slightly. Look for breakthrough innovations, change that will make a difference.” Leonard Brody & David Raffa In 2019 I had an idea: I have to write a book about Snowflake. Snowflake is very popular, and I can leverage my experience with cloud data warehouses, 10+ years of experience in BI/DW/Big Data, and book writing. I want to help people quickly onboard with the Snowflake Cloud Data Platform. Moreover, I want to help them get more context about the role of Snowflake in a typical organization, as well as cover 3rd-party tools, analytics, and data warehouse modernization use cases. Check this new book on Amazon or Apress. First of all, I had to find a publisher. I have experience with Packt Publishing, and I wanted to try something new. I consider Apress and O’Reilly the top publishers in the data analytics field, so I contacted both with the book idea. Hello, My name is Dmitry. I am a Data Engineer. I used to write for PacktPub and delivered 4 books about data tools and technology. However, I want to try with new publishers. I really like your books and the idea behind the brand. A couple of words about me: I have 10+ years of experience with Data Analytics. Moreover, I am passionate about innovation and emerging technology. I often present at world conferences such as Enterprise Data World or Data Summit. My next book idea is about one of the most disruptive technologies — Snowflake Computing. Let me know if you are interested and we can work through an outline and schedule. Thank you! I got a response from Apress pretty fast, and they liked the idea. O’Reilly replied that they weren’t interested, which made it easier for me to proceed. However, several months later, O’Reilly came back and asked about the book, but by then we were in the middle of writing it. The next step was to define the list of chapters. The book has 14 chapters and 264 pages. It is a pretty small book for such a big topic, so we can’t go too deep into detail, but it should give a good overview of Snowflake’s unique features and advantages, as well as its role in modern analytics. By the end of writing, I had to find a reviewer for the book. I was very lucky that one of the most respected and knowledgeable data warriors accepted the offer to check our writing and gave us tons of valuable advice. Moreover, the Snowflake book could have one of 4 options for its cover: Let’s check the summary of the chapters: Chapter 1: Getting Started with Cloud Analytics — will cover some of the historical trends of the data warehouse world prior to Snowflake, as well as the fundamentals of cloud analytics. Key components of cloud Chapter 2: Getting Started with Snowflake — will help you start with Snowflake. It will guide you on how to launch Snowflake and introduce the web interface and functionality. In addition, it will cover the Snowflake pricing model and the available options for a Virtual Warehouse. Snowflake Architecture Chapter 3: Building a Virtual Warehouse — will introduce the key component of Snowflake — the Virtual Warehouse. The reader will learn how to work with and monitor a Virtual Warehouse. 
Chapter 4: Loading Bulk Data into Snowflake — will show how you can bulk load data into Snowflake using the COPY command. Chapter 5: Getting Started with SnowSQL — will introduce the Snowflake CLI tool. The reader will learn how to work with the CLI tool and what they can accomplish with it. Chapter 6: Continuous Data Loading with Snowpipe — will introduce another Snowflake utility — Snowpipe. The reader will learn how to use Snowpipe and will create a data pipeline to load data into Snowflake. Chapter 7: Snowflake Administration — will guide the reader in managing a Snowflake account. Chapter 8: Snowflake Security Overview — will cover the fundamental principles of the Snowflake security model and will guide the reader on how to set up security and access for the cluster. Chapter 9: Working with Semistructured Data — will demonstrate how to work with popular semi-structured formats like JSON, XML, and AVRO. In addition, the reader will learn about binary data and how to work with it in Snowflake. Chapter 10: Secure Data Sharing — will cover data sharing use cases. Snowflake Data Sharing Chapter 11: Designing a Modern Analytics Solution with Snowflake — will help users learn about the integration of the Snowflake Cloud Data Platform with the visual analytics platform Tableau and the cloud data integration tool Matillion ETL. A high-level overview of a modern data platform Chapter 12: Snowflake and Data Science — covers some of the most popular 3rd-party tools and applications that work with Snowflake and allow us to do data science and machine learning. Moreover, it has an example of integration with Azure Databricks (Spark). Chapter 13: Migrating to Snowflake — is about migration scenarios and use cases, as well as some typical architectures. Finally, it has a real-world project. Chapter 14: Time Travel — is about Snowflake’s unique feature that enables access to historical data (i.e., data that has been changed or deleted) at any point within a defined period. It serves as a powerful tool for performing the following tasks. Time Travel So, if you want to start with a cloud data platform and learn more about modern analytics and Snowflake, this is a great book to start with, and if you need any help with a Snowflake proof of concept or implementation, you may contact us.
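As a small flavour of the bulk-loading topic from Chapter 4, here is a minimal sketch that runs a COPY INTO statement from a named stage via the Snowflake Python connector. The connection parameters, stage, and table names are placeholders, and the snippet is an illustration rather than an excerpt from the book.

```python
# Illustrative sketch (not from the book): bulk load staged CSV files into a
# Snowflake table with the COPY INTO command via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="COMPUTE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("""
        COPY INTO recipes
        FROM @my_stage/recipes/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    print(cur.fetchall())  # per-file load results reported by Snowflake
finally:
    conn.close()
```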
https://medium.com/rock-your-data/meet-our-new-book-jumpstart-snowflake-a-step-by-step-guide-to-modern-cloud-analytics-d41479924d62
['D Ma']
2020-01-14 21:16:41.255000+00:00
['Software Development', 'Data Science', 'Cloud Computing', 'Technology', 'Software Engineering']
2,394
On not using product and design to solve organizational problems
This is a terrible pattern. Product is paid to solve business problems: go after segments, drive research, develop solutions that generate revenue or, at least, product-market fit. In many companies, though, a lack of proper org structure (hierarchies, roles, team management, etc.) results in severe communication problems. These communication problems manifest, seemingly, as product problems (one could and should argue a PM’s job is to prevent this, but this isn’t a regular comms problem; it is an outcome of situations mandated by a CxO). An example of this is ‘exec meetings’ that a CFO or CRO or CPO might call with a ‘limited group’ of people, where broad strategic ‘decisions’ are made and roadmaps are implicitly set. Well, since Engineering or Research aren’t a part of these (since they are small groups at early stages), this naturally results in a complete zero-buy-in situation. This is a terrible pattern, and one all too easy to fall into even with (especially with?) the ‘right intentions’. A business problem -> org problem -> comms problem -> product problem -> design problem. This will stress your teams to no end. Worse, they’ll have no place to surface this (few can write to their CEO and say ‘enough’ — I have, but then, I took a risk and it paid off). When in reality, a business problem -> product, and that’s it. This is how the best companies work. Take this to heart. Having experienced this often enough, I can say this is a real productivity killer. And a silent one. So why single out ‘BigCo’ VPs? Generally they tend to bring this pattern from their large-company experience and retrofit it to startups. It does not work. Startups need far more agility and responsiveness to market trends than large companies, where delicate power structures are perhaps more front and center. Building companies is hard, because building teams that function at scale is hard. Don’t make it harder: avoid this ‘unforced error’, or it’ll turn into an own goal.
https://medium.com/@getensemble/on-not-using-product-and-design-to-solve-organizational-problems-89c8d77129df
[]
2020-11-27 01:12:00.103000+00:00
['Organizational Culture', 'Management', 'Technology', 'Business']
2,395
Hard Truth: Conference Calls Suck Because You’re Lazy
Hard Truth: Conference Calls Suck Because You’re Lazy It’s not the tech, friend. Here’s how to create a better and more inclusive call for everyone. The conference call is a gangrenous finger on the clammy hand of human achievement. Designed to bridge communication between distant parties, this particular technology is so sonically enraging it’s a miracle humans haven’t given up on talking altogether. Scientifically¹ speaking, conference calls are bad². In a completely unofficial Twitter poll I conducted, every respondent lamented on various aspects of conferencing, most notably attendees not muting themselves. This leads to intrusive background noise including: Dogs barking Mysterious coughing Unwelcome snacking (a nightmare for those with misophonia, myself included) And in at least one case, toilet flushing It’s not like this is a secret! There’s an entire corpus of online coverage about the pox of conference calls. Wanna learn five reasons why web conferencing sucks? There’s a piece for that very specific need. Wanna find five tips to improve conference call audio? We got that too. The list goes on and on and on. Companies that make video conferencing systems love to take jabs at conference call audio quality, which is funny but not entirely fair. Video adds a whole other dimension of problems to a meeting, like frozen screens and pixelated participants. “If people were generally considerate, [conference calls] would be fine,” one respondent to my twitter poll opined. “But let’s be honest, generally people are selfish dickbags and this translates to terrible conference calls.” Experts? They agree. “In a broad sense, people are ultimately kind of lazy,” Disquiet founder Marc Weidenbaum, who consults with companies on their acoustic branding, told me. “They’re not lazy in the sense that they don’t wanna do anything. Most people are not thinking of other people, and honestly they’re not aware. Acoustic literacy is just not something that’s taught in schools.” Credit: Joanna Walters Zoning out Having “bad conference call hygiene” can deleteriously affect employee focus, Weidenbaum explained. Poor sound quality on a call — due to someone using speakerphone, for example — leads to people tuning out. Employees check their emails, answer texts, or do other work they could’ve been doing in lieu of the meeting. This isn’t even done out of disrespect: it’s because most people don’t want to listen to something that sounds like a marching band leader humming “Yankee Doodle Dandy” through a traffic cone in the bottom of the Mariana Trench. The consequences of sound While these concerns might sound like mild annoyances, the financial implications of conference call gaffes are pretty significant. Losing time in a meeting due to technological difficulties (and thus, lateness) is expensive, especially when a meeting is only about 30 minutes long. That wasted time becomes frustrating for managers paying employees to do work and employees who, well, just want to do that work and go home. “There’s fundamentally a large problem in the abuse of meetings,” Dr. Julie Gurner, an executive coach with a doctorate in psychology, told me. “Managers forget there are bottom line repercussions as well as productivity drops associated with every person who is being paid for that hour and not able to accomplish their work.” Who gets heard? Feeling valued is at the crux of the conference call conundrum. 
This is especially true for certain groups of people, including women and minorities, who already struggle to be heard in the workplace. Research shows that women are overwhelmingly interrupted more than men and that Black, Latino, Asian, and LGBTQ+ employees experience workplace discrimination in multiple forms. The auditory nuisances of a conference call amplify this feeling of “not being heard” literally and figuratively. “[Conference calls] require people to be more assertive than they may have to be than a real-life meeting,” Dr. Gurner said. “It requires another level of assertiveness and for certain populations who already feel reticent to engage, this can make matters worse.” Anecdotally, employees experience this all the time on conference calls: A lot of people talking — usually at once — and not many listening. “It’s ironic because these technologies meant to connect us actually create divisions,” Weidenbaum said. Making lemonade from cursed lemons Believe it or not, there are actionable things you and your team can do to make conference calls suck marginally less for everyone involved. For one thing, Weidenbaum recommends mounting sound-absorbing art to office walls, which can help reduce background noise. “One company I worked with had an office where the art on the walls were actually soft sculptures intended to absorb sound,” Weidenbaum said. “All these things that looked like paintings were on top of cushions, so they just didn’t look like normal sound buffers.” To that point, businesses should spend more time and money considering the effects of sound in a conference room, not just filling it with unreliable and expensive gadgets. “Companies will spend so much money on [conferencing] technology but not in the acoustics of the room where a call takes place,” Weidenbaum said. “Usually these rooms have a lot of glass and reflective surfaces, and then people are confused why their $10,000 speaker system doesn’t work. It’s because they’re spending money in the wrong place.” Above all, managers can — and should — be more discriminating about scheduling meetings and determining who needs to attend them. Letting employees know what they’ll be asked to speak about ahead of time can make for a more productive conversation. “Meetings shouldn’t be about just talking — they should be about doing work,” Dr. Gurner told me. “If you’re asked to attend a meeting, employees should be told in advance they’ll be invited to speak about something specific. This way no one is struggling to find something to say and everyone feels equally valued.” My personal advice? Take your expensive conferencing system, throw it in the trash and light it on fire. Carrier pigeons all the way (or just Slack me next time). But if you absolutely must circle back/sync/regroup with your team, maybe just use the mute button.
https://medium.com/article-group/hard-truth-conference-calls-suck-because-youre-lazy-1b93c065b232
['Rae Paoletta']
2019-06-27 18:29:37.860000+00:00
['Work', 'Startup', 'Psychology', 'Creativity', 'Technology']
2,396
Deep Learning: GoogLeNet Explained
Characteristics and features of GoogLeNet configuration table (figure 1) The input layer of the GoogLeNet architecture takes in an image of dimension 224 x 224. Type: Refers to the name of the current layer or component within the architecture. Patch Size: Refers to the size of the sweeping window utilised across conv and pooling layers. Sweeping windows have equal height and width. Stride: Defines the amount of shift the filter/sliding window takes over the input image. Output Size: The resulting output dimensions (height, width, number of feature maps) of the current architecture component after the input is passed through the layer. Depth: Refers to the number of levels/layers within an architecture component. #1x1 #3x3 #5x5: Refers to the various convolution filters used within the inception module. #3x3 reduce #5x5 reduce: Refers to the number of 1x1 filters used before the 3x3 and 5x5 convolutions. Pool Proj: The number of 1x1 filters used after pooling within an inception module. Params: Refers to the number of weights within the current architecture component. Ops: Refers to the number of mathematical operations carried out within the component. Figure 2: Partition of GoogLeNet At its inception, the GoogLeNet architecture was designed to be a powerhouse with increased computational efficiency compared to some of its predecessors and similar networks created at the time. One method by which GoogLeNet achieves efficiency is reduction of the input image while simultaneously retaining important spatial information. The first conv layer in figure 2 uses a filter (patch) size of 7x7, which is relatively large compared to other patch sizes within the network. This layer’s primary purpose is to immediately reduce the input image without losing spatial information, by utilising a large filter size. The input image size (height and width) is reduced by a factor of four at the second conv layer and by a factor of eight before reaching the first inception module, but a larger number of feature maps is generated. The second conv layer has a depth of two and leverages the 1x1 conv block, which has the effect of dimensionality reduction. Dimensionality reduction through the 1x1 conv block decreases the computational load by lessening the layers’ number of operations. Figure 3: Partition of GoogLeNet The GoogLeNet architecture consists of nine inception modules, as depicted in figure 3. Notably, there are two max-pooling layers between some inception modules. The purpose of these max-pooling layers is to downsample the input as it is fed forward through the network. This is achieved through the reduction of the height and width of the input data. 
Reducing the input size between the inception modules is another effective method of lessening the network’s computational load. Figure 4: Partition of GoogLeNet The average pooling layer takes a mean across all the feature maps produced by the last inception module and reduces the input height and width to 1x1. A dropout layer (40%) is utilised just before the linear layer. The dropout layer is a regularisation technique that is used during training to prevent overfitting of the network. The dropout technique works by randomly reducing the number of interconnecting neurons within a neural network. At every training step, each neuron has a chance of being left out, or rather, dropped out of the collated contribution from connected neurons. The linear layer consists of 1000 hidden units, which corresponds to the 1000 classes present within the ImageNet dataset. The final layer is the softmax layer; this layer uses the softmax function, an activation function utilised to derive a probability distribution over a set of numbers within an input vector. The output of a softmax activation function is a vector whose values represent the probability of a class or event occurrence. The values within the vector all add up to 1. Auxiliary Classifiers Before bringing the exploration of the GoogLeNet architecture to a close, there’s one more component that was implemented by the creators of the network to regularise it and prevent overfitting. This additional component is known as an auxiliary classifier. One main problem of extensive networks is that they suffer from vanishing gradients. Vanishing gradients occur when the update to the weights that arises from backpropagation is negligible within the bottom layers, as a result of relatively small gradient values. Simply put, the network stops learning during training. Auxiliary classifiers are added to the intermediate layers of the architecture, namely after the third (Inception 4[a]) and sixth (Inception 4[d]) inception modules. Auxiliary classifiers are only utilised during training and are removed during inference. The purpose of an auxiliary classifier is to perform a classification based on the inputs within the network’s midsection and to add the loss calculated during training back to the total loss of the network. The loss of the auxiliary classifiers is weighted, meaning the calculated loss is multiplied by 0.3. Below is an image that depicts the structure of an auxiliary classifier. An auxiliary classifier consists of an average pool layer, a conv layer, two fully connected layers, a dropout layer (70%), and finally a linear layer with a softmax activation function. Each of the included auxiliary classifiers receives as input the activations from the preceding inception module. The image below illustrates the complete GoogLeNet architecture, with all its bells and whistles.
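To make the inception module concrete, here is a minimal PyTorch sketch of one such block, with 1x1 reduction convolutions ahead of the 3x3 and 5x5 branches and a pool-projection branch after max pooling. This is an illustrative reimplementation rather than the authors’ reference code; the example channel counts simply mirror the first inception module row of the configuration table.

```python
# A minimal sketch of a GoogLeNet-style inception module in PyTorch.
# Channel counts are illustrative; the configuration table defines the
# exact values for each module in the network.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, ch1x1, ch3x3_red, ch3x3, ch5x5_red, ch5x5, pool_proj):
        super().__init__()
        # Branch 1: plain 1x1 convolution.
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, ch1x1, kernel_size=1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction followed by a 3x3 convolution.
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, ch3x3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch3x3_red, ch3x3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction followed by a 5x5 convolution.
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, ch5x5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch5x5_red, ch5x5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: 3x3 max pooling followed by a 1x1 pool projection.
        self.branch4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # All branches preserve height and width, so their outputs can be
        # concatenated along the channel dimension.
        return torch.cat(
            [self.branch1(x), self.branch2(x), self.branch3(x), self.branch4(x)], dim=1)

# Example: with these settings the block turns 192 input channels at 28x28
# into 64 + 128 + 32 + 32 = 256 output channels at the same spatial size.
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))  # -> torch.Size([1, 256, 28, 28])
```

Because every branch keeps the spatial dimensions intact, the module can mix several filter sizes in parallel and let later layers pick whichever scale of feature is useful, while the 1x1 reductions keep the parameter and operation counts manageable.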
https://towardsdatascience.com/deep-learning-googlenet-explained-de8861c82765
['Richmond Alake']
2020-12-23 04:18:08.476000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Technology', 'Data Science']
2,397
Apple iPhone 8 series now supports FaceTime HD 1080p video calling
Apple iPhone 8 series now supports FaceTime HD 1080p video calling Previously, older iPhone models, including the iPhone 8 series, only supported 720p resolution FaceTime calls over WiFi. Apple recently rolled out the iOS 14.2 update to all of its eligible iPhones. The update supported 1080p FaceTime calling on iPhone X and above. However, the company’s official website now shows that the iPhone 8 and iPhone 8 Plus have also been added to the list of devices that support the FaceTime 1080p calling feature. The feature is said to have been added with the iOS 14.2 update for the iPhone 8 and iPhone 8 Plus, however, the changelog did not mention it. Previously, older iPhone models, including the iPhone 8 series, only supported 720p resolution FaceTime calls over WiFi. In addition to the latest iPhone 12 series, FaceTime 1080p call support is now available on iPhone 8, iPhone 8 Plus, iPhone X, iPhone XR, iPhone XS, iPhone XS Max, iPhone SE (2020), iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. According to a report from MacMagazine, iOS14.2 was released on November 5, and the iPhone XR’s specs page started listing the FaceTime HD feature on its website from November 9. Note: The iPhone 12 line of smartphones can make FaceTime calls at 1080p over Wi-Fi and mobile data. However, older iPhone models will not be able to do this over mobile data. According to the report, older iPhones can only use the FaceTime 1080p calling feature over WiFi. FaceTime calls over mobile data on older iPhones will still take place in 720p or lower resolution, depending on connectivity. To install the iOS 14.2 update on your iPhone, you can go to the Settings panel and open the “General” menu. There, you need to tap on the “Software Update” option and then wait for the phone to search for an update. The update will then appear, where you can tap on the “Download and Install” option to get it. Make sure your iPhone has a battery percentage over 50% and your phone is connected to a stable Wi-Fi network.
https://medium.com/@brahimtimail/apple-iphone-8-series-now-supports-facetime-hd-1080p-video-calling-5d01d12d8c0f
[]
2020-12-14 15:03:20.075000+00:00
['iPhone', 'Tech', 'Facetime', 'Apple', 'Technology']
2,398
How will 5G change the behaviour of Blockchain Technology?
5G is the most recent upcoming remote system innovation that is being executed in certain urban networks to build the degree of speed to standard speed, also similarly as with 4G systems. It has been watched multiple times preferable speed over 4G organize and having a higher capacity to oblige various sorts of related contraptions, 5G will be definitely not equivalent to various old ones. In an industry that has been starving for progression in this interregnum between the dispatch of the mobiles with advancements like AR, AI and blockchain. The introduction of the 5G arrange has become such a catch-for all handset makers to slant toward as the assurance of things to happen to improvement being for all intents and purposes around the corner. Read: Top 5 Productivity Tools for Remote Workers in 2020 The qualities of 5G will bump a disturbance of the Internet of Things (IoT), with an arrangement of tinier and low-controlled gadgets all readied to be advantageously related by techniques for 5G. While the information may stream progressively, considering the raised framework benevolence of 5G, it needs to encounter guarantees about strategies, for example, encryption or blockchain-filled game-plans. Read: Explore Types of Careers You Could Pursue in Information Technology The headway will give security and standardization to various 5G applications. For example, in the advancement of automatic vehicles, oneself moved vehicles depend upon giant loads of information to be sent between the vehicle, a focal working framework, and its general condition during progress. There are a lot of gadgets, sensors, and devices inserted into the vehicles and over the earth; it’s of fundamental criticalness to guarantee the information gathered remains careful from the hands of programming engineers. Read: How 5G will Change The Concept of Business? With 5G on the trip, SMEs are anxious to get a cut of this cutting edge adaptable framework, and more strategies will move to the cloud and on the web; starting now and into the foreseeable future, information security and check are major. How 5G could open blockchain’s inert breaking point? 5G is reliably spoken about close by blockchain and AI as a critical piece of another pile of related advancements that will change the activities of the Internet, and support the Fourth Industrial Revolution. So what business does blockchain play in this top tier nearness, and in what cutoff will it work close by 5G development? Secure SIMs Besides recognizing that blockchain can be utilized to guarantee about 5G itself. Telecom affiliations can give an eSIM or an application to a supporter that makes an exceptional virtual character that is encoded and dealt with on a blockchain. This SIM security, as impressive understandings, has used on online business locale, as clever understandings: Supporters can utilize the exceptional IDs for modified beware of web business goals. 5G and Blockchain: Secure phone banking? Around the front, it’s hard to battle with SMS and text as an interface truly coming to fruition, even as admonishing applications like WhatsApp are getting steam in unquestionable quality. SMS has decades worth of nature to millions of customers, grandfathered in with the soonest PDAs, yet it is so far quick and deft enough to go after current structures without an enormous data load. It can in like manner viably, securely, quickly and safely paymasters, buy things or affiliations, and make stores or send settlements. 
It offers the favoured objective of modernized banking, especially for the world’s around two billion unbanked: Your character, and the related number, can fill in as both bank and clearinghouse. Read also: “ Microservices vs Web Services” This advancement in banking could be the shrouded stage in extra required changes for the creation scene, for instance, giving better and more strong access to impact and lively web, decreasing the noteworthy costs related with trades and repayment parts, and compelling government contamination and general money related irregularity. If that is all 5G ends up doing, it may be a more essential weave forward than what has started late been envisioned and ensured by those in the profitable business. Read: Best Web App Ideas for Startups in 2020 The move of 5G goes unbreakable with an effect in related Internet of Things (IoT) gadgets; as exhibited by Ericsson, the measure of cell IoT will shoot up from 1.3 billion to five billion going before the culmination of 2025, augmentation of 285%. The term takes in everything from sharp indoor regulators to autonomous vehicles, to water sensors for agribusiness; on a very basic level, any related contraption that screens its condition and awards that information to individuals or other IoT gadgets. With 5G openness, these gadgets will ceaselessly chat with one another correspondingly in regard to their clients, in a routinely creating Internet of Things. Read: Top 12 Web Designing Trends in 2020 The upsides of 5G are its high speeds, limit, low lethargy and capacity to associate with huge measures of gadgets. Lethargy suggests the time between when a sign is sent and gotten. In blockchain terms, inactivity is the time between an exchange being passed on and it is being gotten by focus. Regardless, for IoT, regardless of whether it be applied to sharp homes or self-overseeing vehicles, accomplishing low lethargy is basic if contraptions will converse with one another without encountering long room occasions. Read Also: Backend Developer Roadmap for 2020 This decrease could open another idea, the Internet of Skills (IoS). This is the technique by which stars lead their work distantly through PC created entertainment headsets. For example, a dental master would have the choice to perform strategies by implication. In the event that lethargy can’t be compelled, by then the ace won’t be responsive enough, taking a chance with the patient and subverting as far as possible. Read: How COVID-19 Pandemic is Changing Consumer Behavior? The impacts of 5G on IoT and related musings will be additionally evolved by multi-find the opportunity to edge figuring. This is such a system’s association whereby association is spread from joined focus to fringe ones, understanding significantly more essential quicken while in like way diminishing lethargy. Decentralized blockchains offer further positive conditions over the current customer worker model utilized in IoT. Their decentralized structuring proposes that character can be ensured about and ensured. Beginning at now, IoT gadgets separate themselves by techniques for cloud workers, with their indisputable proof information held in these databases. As necessities are the information can be undercut, taken or imitated, familiarizing a colossal security risk with any application that abrupt spikes popular for such a system. Inclining up computerization While discussing mechanization, it is customary to think of robots superseding paid occupations at present done by people. 
Regardless, if all else fails, the level of mechanization may finally be certainly more expensive than this, including the substitution of assignments and unpaid run-of-the-mill attempts. This would be starting at now have the alternative to be found in the procedure of sharp homes, with nuclear family mechanical congregations conversing with one another, keeping stock levels and controlling stock. Free vehicles and trucks are beginning to move past testing, with the foundation being the basic square. Read: Top 8 technology trends in 2020 Inside the following decade, conventional associations, for example, agribusiness, mining and entering all envision robotization through quick IoT, obliged by billions of sensors and gadgets presenting over 5G. Scaling up Grievously, the second issue of scale can’t be plainly explained by blockchains. The sheer degree of IoT recommends that decentralized blockchain structures are bad for managing the essential throughput. In any case, given that each contraction should have its own zone and on-chain exchanges, there should be an on-time limit that appears at incalculable exchanges each second. To spread it out just, flexibility must enhance an exceptionally essential level on the two layers. Alter safe information 5G attracted IoT contraptions are set to drive a tremendous enlargement in the information. Cisco widens that they will convey 847 zettabytes by 2021. Despite the way that blockchains at their inside are streamed information accumulating structures, it is unfeasible to store essential extents of information on-chain. In the event that this IoT information isn’t dealt with on-chain, regardless, this despite everything leaves it open to assaults. Read Also: Why Performance Management should be your #1 priority? Notwithstanding, it is really conceivable to store hashes of information on-chain, with joins calling attention to outside information hoarding objectives for the entire dataset. Definitely, such outside putting away could be run on other decentralized shows, for example, the InterPlanetary File System (IPFS) or OrbitDB. While this doesn’t ensure a similar degree of change limitation, it offers a more grounded degree of security than joined various decisions. Inside and out, by dealing with hashes on-chain, any changing of the information will understand an adjustment in the hash, thus causing to notice such a trap, close to a period record by strategies for the timestamp. Engaging sharp understandings Blockchains can in like way plainly advantage by 5G to the degree accommodation and execution. One such model is smart contracting. Blockchain sharp understandings routinely rely on prophets. These prophets move outside information to the comprehension. Obviously, this data must be spoken with the web. For applications, for example, deftly chains, 5G can invigorate these prophets in faraway zones where they, in any case, would not be conceivable. System refreshes Blockchains can in like way get created enhancements from 5G. The colossal options in range and data transmission, in contrasting and the decreases in inertness helped by edge taking care of, could incite a flood in extra focus by joining open blockchains. By extricating up thought to faraway areas comparably as giving stretched out openness to nonstatic gadgets, for example, mobiles and tablets, there could be significant augmentations in engineer adventure and, with that, improved security and decentralization. Read: How to Maintain a Website and Web Server? 
Enabling smart contracts

Blockchains can also benefit directly from 5G in terms of functionality and performance. One example is smart contracts. Blockchain smart contracts routinely rely on oracles, which feed external data into the contract; naturally, that data has to travel over the internet. For applications such as supply chains, 5G can enable these oracles in remote areas where they would otherwise not be feasible.

Network upgrades

Blockchains can also gain infrastructure improvements from 5G. The large gains in coverage and bandwidth, combined with the latency reductions enabled by edge computing, could prompt a surge in additional nodes joining public blockchains. By extending coverage to remote areas and providing better connectivity to non-static devices such as mobiles and tablets, there could be significant growth in network participation and, with it, improved security and decentralization.

Read: How to Maintain a Website and Web Server?

Also, thanks to reduced latency, protocol designers would have more room to experiment with shorter block times, thereby increasing chain throughput. In turn, this would offer far better support for IoT devices that use blockchains for settlement, consensus and security.

A multi-fold relationship

To properly appreciate the value of 5G, IoT and blockchain, you need to see them as synergistic rather than as offering entirely separate benefits. With the right architecture, this technology stack, alongside second-layer solutions, edge computing, virtual reality, augmented reality and the IoS, is set to create an extraordinary amount of value while fundamentally reshaping work, business and entertainment.

Originally published at https://www.decipherzone.com.
https://medium.com/cryptolinks/how-will-5g-change-the-behaviour-of-blockchain-technology-2be3724a3e09
['Mahipal Nehra']
2020-09-14 08:28:18.328000+00:00
['Blockchain', 'Blockchain Technology', '5g', '5g Technology', 'Banking']
2,399
New to Opacity? Here’s how to get OPCT
One thing that puts people off cryptocurrency is confusion around obtaining it. Admittedly, it can be mind-boggling if you're new to cryptocurrency, but breaking it down into manageable steps makes it far easier to understand — sort of like baking a cake…

Recipe: Purchasing OPCT

Let me guide you through the steps required to bake your own OPCT cryptocurrency cake — and once you take that initial bite, you'll want more!

Note: Please make sure you are considering your personal data, security, and finances carefully at each step — precautions are there so you, the buyer, don't get burned! There is a valuable lesson in all the stories about exchange accounts being hacked: do not leave your assets on an exchange. If you do, you leave yourself open to being hacked and having your assets stolen. Keep your assets safe in a hardware storage solution, such as the Ledger Nano.

In this guide, I've used Coinbase, the KuCoin exchange, and MetaMask. There are other sites you could use, but I've found the ones listed to be far easier and more reliable than their competitors. Although these are what I use, you are, as always, free to choose other sites for purchasing once you get used to maneuvering cryptocurrency purchases. Let's get started! Here is what you will need:

Ingredients:
§ KuCoin account
§ MetaMask in-browser Ethereum wallet download
§ Coinbase account
§ A mobile smartphone
§ Two forms of ID*
§ Payment method — ideally a credit card for added security

*ID is required by external exchanges to ensure money-laundering regulations are met. The ID requirement is not a stipulation from Opacity Storage directly.

Method: Let's Bake!

1. Create your accounts

First, if you do not hold active accounts with Coinbase and KuCoin, you will need to set these up.

Coinbase: Head to Coinbase here and click the 'Get Started' button in the top right-hand corner. This opens the account creation form, which requires a name, email and password. It's all very standard stuff at this point.

KuCoin: Head to KuCoin here and click the 'Sign Up' button in the top right-hand corner. This opens the account creation form, which requires only an email. Next, hit 'Send verification code'; a code will be emailed to you, which you enter into the KuCoin sign-up to complete email verification. Once verified, you can set up a password. The next step is to head to the 'Assets' tab in the top-right section of the account and click the 'Deposit' button. This will require you to set up two methods of security. The first is a trading password, which consists of six digits. The second is the Google Authenticator app on your smartphone or browser; connect your account to your smartphone by scanning the QR code on the KuCoin website.

MetaMask: Head to MetaMask here. MetaMask is a browser-based wallet and by far one of the easiest ways to store your cryptocurrency. Download the browser extension, which is available for Chrome, Firefox, Opera and Brave. Once the extension is set up, it will show as a fox icon, usually at the top right (especially if you're using Chrome). When you've set up your account, you will be given a generated backup phrase consisting of twelve randomly generated words specific to your account. Keep these safe so you can recover your account if you ever need to (a short sketch of how such phrases work in general follows after this step).

Finito! That's the security methods and accounts set up. Let's keep going!
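As an aside, the twelve-word backup phrase MetaMask generates follows the widely used BIP-39 standard. The snippet below is a minimal sketch, assuming the third-party Python mnemonic package (pip install mnemonic), of how such phrases are produced and turned into a wallet seed in general; it illustrates the scheme, not MetaMask's own code, and you should never paste a real recovery phrase into a script.

```python
from mnemonic import Mnemonic

# 128 bits of entropy corresponds to a twelve-word English phrase (BIP-39).
mnemo = Mnemonic("english")
phrase = mnemo.generate(strength=128)
print("Example recovery phrase:", phrase)

# Wallets derive their keys from a seed computed from the phrase (plus an
# optional passphrase), which is why the words alone can restore an account.
seed = mnemo.to_seed(phrase, passphrase="")
print("Derived seed (first bytes):", seed.hex()[:16], "...")
```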
2. Buy Ethereum (or bits of ETH) on Coinbase

First, make sure you have a payment method set up on Coinbase. I would advise using a credit card over a debit card as your transactions are covered, but you can choose whichever method you prefer. To reach the end goal of buying OPCT, you first need to purchase Ethereum (ETH), and the quickest and simplest way to do this is via Coinbase. On Coinbase, you can purchase a minimum of £1.99/$2.53 worth of ETH and a maximum of £6,000/$7,628.68 (later, you will need 2 OPCT for an Opacity account, currently around £0.06/$0.08 each, plus a small amount of ETH for transaction fees). Once you're ready, hit the 'Buy Ethereum instantly' button, enter the amount (a minimum of £10/$12.61 is recommended to cover fees and the ETH purchase) and confirm when the summary pops up on your screen. Congratulations! You are now a cryptocurrency adopter. It won't take long for the ETH to reach your wallet; you can check that the purchase has cleared on the accounts tab of your Coinbase account.

3. Deposit on KuCoin

On the KuCoin Assets tab, click the Deposit button. The next page is bare except for a search box — type ETH into the search to populate the drop-down box. Once you've clicked on ETH, you will see that it lists your personal wallet address. This is the address you will use to receive funds from elsewhere. KuCoin has a nice little feature: with the click of a button you can copy the wallet address to your clipboard. This ensures the address is correct in its entirety, as there have been horror stories of people sending funds to the wrong wallet address and losing that money for good. There is no way to recover funds if this happens, so use the clipboard facility wherever possible (a small sanity check you can run on a copied address is sketched below). We will now use this address to transfer the ETH you purchased in step two on Coinbase.
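Before pasting the copied address anywhere, it doesn't hurt to confirm it is at least a well-formed Ethereum address. This is a minimal sketch, assuming web3.py version 6 or later (pip install web3); the address shown is a placeholder, not a real deposit address.

```python
from web3 import Web3

# Placeholder only - replace with the address you copied from KuCoin.
destination = "0x1111111111111111111111111111111111111111"

# Reject anything that is not a syntactically valid Ethereum address.
if not Web3.is_address(destination):
    raise ValueError("Not a valid Ethereum address - do not send funds to it.")

# Print the EIP-55 checksummed form so you can compare it, character by
# character, with the address shown on the KuCoin deposit page.
print("Checksummed address:", Web3.to_checksum_address(destination))
```

Note that this only catches malformed addresses; it cannot tell you whether a valid-looking address actually belongs to your account, so a visual check of the first and last few characters is still worthwhile.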
4. Buying OPCT — Nearly There!

With your KuCoin ETH wallet address copied to the clipboard, open the Coinbase tab in your browser. We will now send the ETH currently sitting in your Coinbase account over to your KuCoin account so we can purchase OPCT on KuCoin. On Coinbase, you can transfer your ETH by pressing the 'Send' button. A pop-up window will appear so you can fill out some details, such as the recipient; this is where you paste your KuCoin wallet address from the clipboard. Once you've confirmed that the recipient section (your wallet address) is correct, click on the ETH input box in the amount section and then click 'Send max'. You will notice the full amount you purchased isn't listed; this is because a small amount is deducted towards transfer and miner fees. The fees are so small, they're not worth worrying about. Next, you can choose to add a note — maybe a date, a reason for sending or something else. You can leave this blank if you wish. Once completed, hit continue and confirm your transaction. The transfer may take a little time depending on gas prices and the load on the Ethereum blockchain, but typically it's quick. You can check receipt over on your KuCoin assets page.

5. Transfer into Your Trading Account on KuCoin

Now that your ETH is stored on KuCoin, you will need to transfer it into your trading account in order to buy OPCT. Simply hit the 'Transfer in' button. It really is that simple. Type ETH into the coin field, click the number in the available amount section to send the full amount to your trading account, then confirm.

6. Let's buy some OPCT

Now the fun begins! Your trading account has funds, so you can now buy your OPCT. On KuCoin, head to the Exchange, which is located next to the KuCoin logo and the Markets button. Once you have done this a few times it will become far less daunting, so bear with it. If you've followed all the steps, you are currently the proud owner of Ethereum (ETH). You will use ETH to buy OPCT, so you want to select ETH as a pair to OPCT in the Exchange section. Click on the small 'Select pair' search box and type in 'OPCT'. A drop-down box will appear, populated with OPCT/ETH and OPCT/BTC. Click the OPCT/ETH option, as this is the only one applicable to this guide (unless you own Bitcoin). The next page is where you will purchase OPCT for use on the Opacity platform. You will need to enter your trading password before you proceed. This is the password you set up in step one along with Google Authenticator. Once you've entered it, you will see the main order book with several options. Ideally, you want to purchase the cheapest sell order in the order book. Note that there is a withdrawal fee of 34.7 OPCT (subject to change) and a minimum withdrawal amount of 69.4 OPCT. I'd advise you to hit the 100% button to spend all of your ETH on OPCT, but you can choose a lesser amount. When you're ready, press the green 'Buy OPCT' button. Congratulations! You are now the proud owner of OPCT.

7. MetaMask Transfer

This is where MetaMask comes in. You want to move your cryptocurrencies into a wallet for safekeeping rather than keep them on exchanges. To do this, you will need to move your OPCT from your trading account into your main account on KuCoin: withdraw it by pressing the 'Transfer Out' button on the trading account card. Once the funds have been released to your main account, hit the 'Withdrawal' button and then select OPCT. You'll notice that the fee has already been deducted from your OPCT amount; this is normal for most exchanges. To move your OPCT into MetaMask, click the fox icon in your browser (the extension we discussed in step one) and sign in if required. Hover over the account section in MetaMask and select 'Copy to clipboard' — once again your wallet address will be copied to your clipboard, so paste it into the wallet address form on KuCoin. In effect, this is similar to what we did when moving from Coinbase to KuCoin: you're simply moving your currency in and out. On KuCoin, once you've pasted your MetaMask wallet address into the form, hit confirm and then follow the steps as before to authenticate with your trading password.

Decoration: Optional

For convenience, you can add the OPCT token information into MetaMask to save having to dig through the blockchain record.

1. Open MetaMask via the fox icon on the browser bar and click the three lines in the top left to open the menu.
2. Click the 'Add token' button, which will take you to a new page.
3. OPCT needs to be added manually for the tokens to display in the wallet. Add the following information into the form and click next. Once complete, you will see your OPCT balance listed on MetaMask.

Contract Address: 0x77599d2c6db170224243e255e6669280f11f1473
Symbol: OPCT
Decimals: 18

(If you like, you can also read your balance straight from the contract itself — see the short sketch after this guide.)

Serve, and enjoy once cooled alongside a cold Scotch whisky with a dash of water.
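For the curious, here is a minimal sketch of reading an OPCT balance directly from the token contract listed above via the standard ERC-20 balanceOf call. It assumes web3.py version 6 or later and an Ethereum RPC endpoint of your own; the RPC URL and wallet address below are placeholders.

```python
from web3 import Web3

# Placeholder RPC endpoint - substitute your own Ethereum provider URL.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.invalid"))

# OPCT contract details from the MetaMask token form above.
OPCT_ADDRESS = Web3.to_checksum_address("0x77599d2c6db170224243e255e6669280f11f1473")
OPCT_DECIMALS = 18

# Minimal ERC-20 ABI: only the balanceOf function is needed here.
ERC20_ABI = [{
    "constant": True,
    "inputs": [{"name": "owner", "type": "address"}],
    "name": "balanceOf",
    "outputs": [{"name": "balance", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]

opct = w3.eth.contract(address=OPCT_ADDRESS, abi=ERC20_ABI)

# Placeholder wallet - use your own MetaMask address here.
wallet = Web3.to_checksum_address("0x1111111111111111111111111111111111111111")

# The raw balance is an integer in the token's smallest unit, so divide
# by 10 ** decimals to get a human-readable OPCT amount.
raw_balance = opct.functions.balanceOf(wallet).call()
print("OPCT balance:", raw_balance / 10 ** OPCT_DECIMALS)
```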
https://medium.com/@aron_88459/new-to-opq-heres-how-to-get-it-8285819698e
['Aron Hiltzik']
2021-07-22 16:15:30.176000+00:00
['Cryptocurrency', 'Technology', 'Cryptocurrency Investment', 'Blockchain', 'Resources']