Unnamed: 0 | title | text | url | authors | timestamp | tags
---|---|---|---|---|---|---
2,000 |
Blockchain and Sport — Why I co-founded a Platform to decentralize the industry
|
Sport needs to be more transparent, and decentralization through blockchain can change many of the current processes within the sporting world. In this article I want to share my personal experiences and explain why I co-founded Globatalent, a blockchain platform built to bring transparency to the sports industry.
Many corruption cases have occurred in public and private sports agencies across different sports, and I believe that fair play should be implemented and regulated throughout the industry, in every single sport.
All athletes strive for strong values such as effort, sacrifice, perseverance, collaboration, camaraderie and a healthy lifestyle. For those moving into their sports professionally, we cannot allow these qualities to be overshadowed by vote buying, doping, abuse of authority, violence on the field and hooliganism. Sport is fundamental to the world, and it is important that its positive values are not undermined by corruption cases like those we have seen in different sports all over the world. Pierre de Coubertin, the father of the modern Olympics, said: ‘All sports must be treated on the basis of equality’.
I experienced the dark side of the sports industry when I worked as a general manager at a top-level basketball club. I was approached with incredibly dishonest proposals by unethical people who are still within the game today. Thankfully, the values instilled in me by my family meant that I did not accept any of these proposals, but the disappointment of seeing misconduct within a sport that I love made me decide to walk away from that career path. I had the strength and courage to make such a hard decision, but others within sports do not have this opportunity. I concluded that blockchain is one of the tools we can use to take the sports industry to the next level of development, a development for which there is a massive need.
Jesse Owens defied the odds by becoming the first African-American athlete to gain a sponsor. Adi Dassler, the founder of Adidas, worked tirelessly to persuade Owens to wear his footwear, and he finally agreed. Dassler approached Owens because he was one of the true heroes in bringing equality to the sporting world. At the 1936 Berlin Olympics, at a time when Germany was under Nazi rule, Owens proved that Adolf Hitler’s ‘Aryan race’ was not the ‘perfect people’ portrayed in the propaganda being promoted within the country. Jesse Owens was not blond, Jesse Owens was not blue-eyed and Jesse Owens was not German — he was a black man living in the United States of America at a time when racism was still rife. He shocked the world by winning different track and field events, obliterating his opponents. He fought for equality within the game by using his phenomenal talent, and along with Adidas, which took a massive risk by endorsing someone who was everything the dictator of its country opposed, he started the journey toward a sporting world with far more equality.
We aspire to follow the path that Jesse Owens outlined for all of us
Blockchain decentralization will reduce opportunities for corruption and unethical actions that we are seeing in the sporting world today. It can help on a global scale and in every single sport.
For example, blockchain technology will allow athletes to hold a medical passport that anti-doping agencies can access. We will look out for the athletes first and prevent their data from being mishandled or used in an unethical way.
Blockchain can also have a huge impact on the entire ticketing process. If tournaments like the next World Cup issued their tickets on a blockchain, we would have the ability to track the buying and selling of every ticket. We could therefore block or authorize resales, prevent people from selling tickets at extortionate prices, and make it impossible to purchase tickets for anyone banned from stadiums for violence or any other reason. All of this would happen in a safe, traceable and transparent manner, with the identity of each buyer always known. We could make sure the digital footprint is used properly and prevent fake tickets from being sold on the internet.
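To make the idea concrete, here is a minimal, purely illustrative Python sketch of the resale rules such a ledger could enforce. Every name in it (TicketRegistry, resale_cap, banned) is hypothetical and not part of any real platform; a production system would implement this in smart contracts, not an in-memory class.

```python
# Illustrative sketch only: a toy, in-memory "ledger" enforcing the resale
# rules described above. All names here are hypothetical, not a real product.

class TicketRegistry:
    def __init__(self, face_value, resale_cap=1.0):
        self.face_value = face_value
        self.resale_cap = resale_cap      # max resale price as a multiple of face value
        self.owners = {}                  # ticket_id -> verified buyer identity
        self.banned = set()               # identities barred from stadiums
        self.history = []                 # append-only transfer log (the "chain")

    def issue(self, ticket_id, buyer):
        self._check(buyer)
        self.owners[ticket_id] = buyer
        self.history.append(("issue", ticket_id, buyer, self.face_value))

    def resell(self, ticket_id, seller, buyer, price):
        assert self.owners[ticket_id] == seller, "seller does not own this ticket"
        assert price <= self.face_value * self.resale_cap, "price above allowed cap"
        self._check(buyer)
        self.owners[ticket_id] = buyer
        self.history.append(("resale", ticket_id, buyer, price))

    def _check(self, buyer):
        if buyer in self.banned:
            raise PermissionError(f"{buyer} is banned from purchasing tickets")

registry = TicketRegistry(face_value=100)
registry.banned.add("banned-fan")
registry.issue("WC-FINAL-0001", "alice")
registry.resell("WC-FINAL-0001", "alice", "bob", price=100)   # allowed
# registry.resell("WC-FINAL-0001", "bob", "banned-fan", 100)  # -> PermissionError
```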
We can create a world where national and international federations share the same blockchain. A licensed German triathlete entering an Ironman race could instantly verify that their federation paperwork is in order; if it is not, or there is an issue, they would simply pay a small health-insurance fee through an automated process. This is currently impossible. Furthermore, if federations shared the list of athletes sanctioned for doping, enrollment could be blocked immediately without violating anyone’s right to privacy.
Globatalent is one of the first projects to bring blockchain to the sports world, and we are excited about this new paradigm. We want to lead this transformation, helping to create a decentralized, blockchain-based ecosystem for sport.
We are already in contact with companies that are considering implementing blockchain to decentralize processes in the world of sport. Some of them will be integrated into our platform, offering player-scouting services, and others will operate independently of Globatalent.
I co-founded Globatalent to build a transparent, fair and valued sports ecosystem. Globatalent is here to promote these values.
By Sunil Bhardwaj
Co-Founder & CEO at Globatalent
— — — — — — — — — — — — — — — –
Our Community Channels:
Telegram: https://t.me/globatalent
Our Social Media:
Facebook: https://www.facebook.com/globatalent.official/
Twitter: https://twitter.com/globatalent
Linkedin: https://www.linkedin.com/company/18286680/
Instagram: https://www.instagram.com/globatalent/
Medium: https://medium.com/Globatalent
Bounty: https://bitcointalk.org/index.php?topic=2690003
Youtube: https://www.youtube.com/channel/UCD9YH3-Stofoh0eXWxwVBnA
Vimeo: https://vimeo.com/globatalent
New Explainer Video: https://vimeo.com/256492584
New Founders interview Video: https://vimeo.com/255880001
White Paper: https://globatalent.com/whitepaper
|
https://medium.com/globatalent/blockchain-and-sport-why-i-co-founded-a-platform-to-decentralize-the-industry-34a0d6df7165
|
[]
|
2018-03-21 13:13:48.999000+00:00
|
['Decentralization', 'Olympics', 'ICO', 'Blockchain', 'Blockchain Technology']
|
2,001 |
Reasons why a business must invest in mobile app development
|
In this digitally growing era, customer behavior is continuously evolving, and customers are becoming more tech-savvy. They are adopting new technologies and want solutions to their requirements at their fingertips. Mobile apps act as a bridge between customers and businesses, regardless of location. This evolving customer behavior and the rapid change in market demand are pushing businesses to turn to mobile app development companies in India and invest in them to earn better revenue and enhance overall business performance. To gain a competitive edge for your business, hiring mobile app developers in India is the right choice. With effective mobile apps, most leading businesses are delivering holistic experiences to their users. How can hiring dedicated mobile app developers in India be a profitable solution for your business? The reasons are listed below.
Reasons that make mobile app development a profitable solution for your business:
The reasons for opting for mobile app development are many. The benefits a company can gain by investing in a mobile app are endless, which is why apps have become a vital part of most organizations. Here are the key benefits.
Increase customer engagement:
Engaging customers with your brand is the best way to retain them for the long term, and mobile apps help you keep customers engaged with your brand. With a mobile app, you can incorporate the latest creative features into your business and help your customers use them effectively. An app not only serves existing users but also tempts viewers to try it, giving you a higher chance of converting them into users.
Personalized channel:
Today, everyone looks for a personalized solution that addresses their requirements without extra searching or struggle. Mobile app development is now augmented with artificial intelligence and machine learning, which can understand users’ requirements and learn their recent preferences. Using this, you can create a personalized channel with customers by tracking their recent searches, learning their preferences and serving them real-time solutions.
Build brand awareness:
Mobile phones have become an inseparable part of our lives, and businesses are leveraging this to build greater brand awareness among targeted customers and users. A mobile app stays on the screen of the customer’s phone, constantly reminding them of your brand as the first choice for their immediate requirements.
If you want to hire offshore developers in India for your business, DxMinds is a one-stop solution for all your business app requirements. We have an expert team of developers with years of experience building applications on various platforms at affordable prices.
|
https://medium.com/@davidhook333666/reasons-why-a-business-must-invest-in-mobile-app-development-4f8c33680de2
|
[]
|
2020-12-24 13:08:47.598000+00:00
|
['Mobile App Development', 'Technology', 'Business', 'Application', 'App Development']
|
2,002 |
How IoT Is Enhancing Structural Health Monitoring (SHM)
|
IoT provides value beyond visualization in Structural Health Monitoring (SHM) by extracting insights at a massive speed and scale, often with the help of AI. IoT adoption is becoming almost inevitable for infrastructure monitoring, security and operation.
The need for reliable and timely data in construction and critical infrastructure management is clear. However, until recently, data collection for structural health monitoring (SHM) has mainly been a manual exercise where engineers have to go out in the field to make key measurements. The challenge of manual data collection is that it’s slow, unreliable and highly inefficient.
The emergence of the Internet of Things (IoT) is a shot in the arm for the civil and structural engineering industries. Significant reductions in the cost of sensors and connectivity, combined with the growth of platform-as-a-service business models, now make it possible to gather lots of data remotely, aggregate it and perform critical analysis to generate actionable insights.
IoT is a concept of exposing data generated at points of operational interest. At the basic level, it seeks to improve situational awareness by enabling visualization of important field parameters. Structural health monitoring (SHM) is the process of monitoring or assessing the condition of a structure in order to gather information on its current state, by tracking variables like vibration, strain, stress and other physical phenomena, responses and conditions. It assists in non-destructive evaluations aimed at detecting the location and extent of damage, calculating the remaining life of an asset and anticipating accidents before they occur.
As critical assets like bridges, dams and equipment are constructed and age, it becomes imperative for the owner or operator to ensure their safety and longevity. While non-destructive testing is standard practice in the assessment of structural integrity, the ability to protect and monitor assets with IoT sensors offers asset owners the opportunity to extract value through artificial intelligence, mitigating the financial and operational consequences of deterioration and routine manual supervision. Moreover, asset owners gain the capability to manage maintenance investment more effectively and predictably, minimizing downtime and avoiding, or carefully planning, costly repairs.
The idea behind SHM is to collect data from multiple sensors installed on structures in order to process and understand useful information about the current state of the structure for maintenance and safety purposes. The ability to monitor and track critical assets in order to improve operational effectiveness is one of the most important ways IoT can add value to an asset owner or operator. Sensors can be placed at critical locations on members of the structure and relay information to the cloud through LPWAN, BLE mesh or 5G networks. The SHM system provides data about the changes in a structure due to the aging of materials, environmental factors or accidental damage.
Typically, SHM systems are devoted to monitoring humidity, temperature, accelerations, tensile stress, compressive stress and building materials degradation. The methods used are non-invasive and require the deployment of sensors in checkpoints well defined by the professionals. The information from the sensors is merged with mathematical models to determine the safety of the structure.
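As a rough illustration, here is a minimal Python sketch of the cloud-side check such a system might run on each incoming reading. The variable names, thresholds and payload format are all hypothetical; real deployments merge readings with structural models rather than simple ranges.

```python
# Minimal sketch (hypothetical thresholds and payloads) of SHM cloud-side
# logic: compare each sensor reading against an engineer-defined safe range
# and flag anything out of bounds.

SAFE_RANGES = {                    # checkpoints defined by the professionals
    "strain_microstrain": (-500.0, 500.0),
    "vibration_g":        (0.0, 0.35),
    "temperature_c":      (-20.0, 60.0),
}

def assess(reading: dict) -> list[str]:
    """Return a list of alerts for any variable outside its safe range."""
    alerts = []
    for variable, value in reading.items():
        low, high = SAFE_RANGES.get(variable, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"{variable}={value} outside safe range [{low}, {high}]")
    return alerts

# A reading as it might arrive from an LPWAN gateway:
print(assess({"strain_microstrain": 612.0, "vibration_g": 0.12}))
# ['strain_microstrain=612.0 outside safe range [-500.0, 500.0]']
```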
Drivers for IoT Adoption in Structural Health Monitoring (SHM)
Structural health monitoring seeks to identify the possible existence of a risk or opportunity in an infrastructural asset. It has evolved from a topic of interest to researchers to a business case due to the proliferation of sensors and IoT capabilities. Below, we summarise how the adoption of IoT in SHM is creating value for asset owners.
Safety and Compliance
The prevalent use of the user-pays principle, due to public-private partnerships on critical infrastructure, means that users deserve and can demand safe and available assets. As structures age, safety concerns escalate. While it used to be difficult to detect deterioration inside structures, IoT sensors now allow the gathering and analysis of data, ensuring higher safety and security. Permanent and reliable information about the structure guarantees the safety of users, peace of mind and the reputation of the provider.
Furthermore, it’s cost-effective to implement a mandatory monitoring system through IoT sensors that may be required by regulation for mission-critical assets like nuclear power plants in order to demonstrate safety.
IoT provides value beyond visualization in Structural Health Monitoring by extracting insights through AI at a massive speed and scale, making IoT adoption inevitable for infrastructure monitoring, security and operation.
Continuous Structural Monitoring
IoT sensors enable the supervision of a structure on a continuous basis in real time. This is important in order to maintain functional utility, optimal performance and security, and it makes maintenance scheduling more focused. Continuous monitoring complements the existing testing and diagnostic offerings of providers such as BeanAir, Roctest, SGS and Giatec. It is also important to monitor structures during construction, commissioning, operation, refurbishment, alteration and even dismantling in order to discover deficiencies on time, reduce insurance costs and improve service availability. Some structures may require monitoring due to their innovative design or use of special materials.
Extending Structural Lifetimes
Sometimes a structure approaches its design lifetime and there is a need to evaluate if it’s possible to postpone costly maintenance without risking structure deterioration. Real-time monitoring and analysis may be used to discover hidden structural reserves that allow for controlled lifetime extension. Moreover, due to inherent design considerations, it can be possible to increase operational safety margins in a controlled way without risking over-stretching the infrastructure. Latent lifetime or load bearing capabilities can be made available and improve the utility of the asset.
Sensors can also be employed to detect and track the evolution of a defect in a structure, while simultaneously recording its response to stress events and environmental conditions. Armed with this information, decision-makers can take informed steps to prevent a defect from worsening, safely extending the asset’s life and reducing the risk of failure.
Optimizing Operations
A centralized structural management system is very important, especially in cases where the asset is complex or many assets are involved. Having all the critical field information on important assets presented in a consolidated way enables optimal operation, maintenance and repair based on reliable and objective data. Modern approaches such as maintenance on demand are made possible. Bringing the physical structures online allows for digital twinning. Reliable field data helps in quality control and quality assurance during construction, operations, maintenance and repair thereby eliminating the hidden costs of poor quality and non-conformance.
Continuous Improvement
Gathering data increases knowledge about the structure which allows for innovative and better structural designs in future especially in cases where similar assets have to be deployed at a larger scale. Sensors enable optimization of the design aspects and elimination of weaknesses on time.
Connected assets also offer research-based benefits through easy data collection thereby enabling knowledge growth and transfer through simulation and publications.
Increased Asset Value
An asset that relies on leading-edge technologies to improve outcomes has more prestige and perceived value. In disaster-prone areas, this may be used to negotiate for reduced insurance costs and even environmental taxes.
Marketing Technical Advancement
Technology can be an instrument to innovate and acquire new customers, especially for forward-looking organizations that seek to stay at the forefront of self-disruption. Connected infrastructure improves marketability. In some cases, using IoT capabilities in a project may reassure the public after the failure of an asset, build public confidence and generate excitement for future use.
Conclusion
Major factors driving the growth of the structural health monitoring market include high capital investments in structural health monitoring across various industries, stringent environmental regulations pertaining to the sustainability of structures and the falling cost of IoT sensors, which lowers the cost of adopting structural health monitoring systems, among others. Additionally, the evolution of AI and digital twinning is making the adoption of IoT in SHM ever more appealing.
|
https://medium.com/iotforall/how-iot-is-enhancing-structural-health-monitoring-shm-db7e7cc3e491
|
['Farai Mazhandu']
|
2020-06-17 13:26:01.076000+00:00
|
['Construction', 'Automation', 'Artificial Intelligence', 'Technology', 'Machine Learning']
|
2,003 |
The Ethical and Reputational Risks of Artificial Intelligence
|
[Boardwise advises Directors on corporate governance. This piece originally appeared in their newsletter here.]
“AI algorithms may be flawed,” Microsoft’s 2018 annual report states. “If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.” [1]
Microsoft speaks from experience. In 2016, Microsoft released a chatbot powered by AI. In less than 24 hours, the chatbot started tweeting misogynist and racist remarks. Amazon had a similar experience when, in October 2018, it was discovered that its AI-powered hiring software discriminated against women.
There is no doubt that AI is here to stay and — undoubtedly — will create trillions in revenue. Yet, board members charged with sustainable growth and protection of their company’s brand must ensure AI is deployed in ways that manage the associated ethical-cum-reputational risks intrinsic to the technology.
What is Artificial Intelligence?
Let’s start with a distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). A machine with AGI would have an intelligence that largely mimics or mirrors that of human beings. It would be able to perform a number of tasks, learn how to do new ones, and would constantly improve its skill set. Of course, just as humans come with varying degrees of intelligence, so too would AGI. We can imagine some AGIs with the intelligence of a five-year-old, others with the intelligence of a well-educated 40-year-old, and still others with an intelligence that surpasses anything a human has ever achieved.
AGI does not exist at present and it is hotly debated if and when it might. This kind of AI is not deployed by companies now. To be clear, I’ll only discuss ANI here, referred to as “AI”.
AI comes in different types. Here is a familiar one to start. You text your friends, family, and colleagues. You notice — as you type — your iPhone or Android suggests how to complete the word you are typing or it might suggest the next word altogether. How does it do this?
Whenever one texts, Apple or Google is collecting that information (or, as engineers say, that data). Once it is collected, a computer program looks for patterns in that data. For instance, the program notices that, after someone types the word “good,” there’s a 44% chance they’ll type “morning” next, a 32% chance they’ll type “job” next, and a 15% chance they’ll type “afternoon” next. Every other word in the dictionary has a less than 15% chance of being typed next. The program then does one last thing: it makes suggestions to the texter the next time they type the word “good”. Those suggestions are the program’s outputs. The computer program that does that — the “algorithm,” as it’s often put — is an artificially intelligent program.
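To make that concrete, here is a toy Python sketch of the frequency-counting idea, run on a made-up five-phrase corpus; real keyboards do the same thing over billions of texts.

```python
# Toy sketch of the pattern described above: count which word follows
# "good" in a (made-up) corpus, then suggest the most common candidates.
from collections import Counter

corpus = "good morning good job good morning good afternoon good morning".split()

following = Counter(
    nxt for word, nxt in zip(corpus, corpus[1:]) if word == "good"
)
total = sum(following.values())
for suggestion, count in following.most_common(3):
    print(f"good -> {suggestion}: {count / total:.0%}")
# good -> morning: 60%, good -> job: 20%, good -> afternoon: 20%
```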
Consider another example with more ethical risk.
Let’s get a computer program to “know” when there is a dog in a picture you just took with your phone’s camera. Let’s write a pattern-recognizing piece of computer software that will flash a green light when you upload a new picture of a dog and a red light when you upload a picture that does not contain a dog. How do we create that machine?
First, we get tons of pictures of dogs and we upload those pictures into our computer software. We tell the computer software — the algorithm — these are pictures of dogs, and any new picture that gets uploaded that is like these also contains a dog. The software will look for patterns in the pictures of all those dogs we uploaded. Perhaps the software notices all the pictures have dark circular things at a certain distance from each other (eyes), or it looks for long, somewhat thin pinkish things (tongues), or it looks for similarities among dogs we, as humans, do not notice or consider, e.g., the degrees of the interior angles of the triangle formed by the eyes and the middle of the mouth. In truth, when engineers write these algorithms and tell them to look for patterns, they often don’t know what patterns the software is recognizing or finding, because sometimes it will look at things so granularly we can’t comprehend it.
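Here is a rough sketch of that train-then-classify loop in Python. The "images" are random stand-in pixel arrays rather than real photos, and a real system would use a neural network on thousands of labelled pictures; only the shape of the workflow is the point.

```python
# Sketch of the "train on labelled pictures, then classify new ones" loop.
# The image data here is random stand-in pixels, not real photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.random((200, 64 * 64))      # 200 flattened 64x64 "images"
y_train = rng.integers(0, 2, 200)         # 1 = "dog", 0 = "not a dog"

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)               # the pattern-finding step

new_photo = rng.random((1, 64 * 64))
light = "green" if model.predict(new_photo)[0] == 1 else "red"
print(f"{light} light")                   # green = the learned dog pattern matched
```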
Are you starting to see the risk yet?
We’ve trained our program so we can now upload our new photo that contains a dog. If our program is well made and recognizes patterns that indicate dogs, our uploaded new picture will get a green light. Further, if we upload a picture of a cat, our software that nailed down that “dog pattern” will trigger a red light.
None of these programs are perfect, of course. Sometimes — say, 1% of the time — our software will identify the thing in the picture as a dog when it’s really something else; a wolf, say. That’s not a big deal, though, since we just want our newest picture of a dog to go with our other dog pictures; this isn’t life and death.
Until it is.
The Ethical Risks of AI
There are many ethical risks associated with AI. Three from our AI examples help explain how these risks arise and point to strategies to manage them.
Invasions of Privacy
AI requires lots of data. When talking about data related to AI, “data” is really just a euphemism for “information about people.” AI programmers need lots of data to train their algorithms, which means lots of information about a lot of people.
This raises questions about how we collect that data in the first place, and whether it is done ethically. Did we get the informed, meaningful consent from those whose data it is or did we collect it in a way that violates their privacy? When they consented, did they know what we would do with their data? For how long we would save it? With whom we would share it? Whether we’ll sell it?
Here is the problem. Companies are incentivized to constantly surveil their actual and potential customers to learn more about them and to feed their AI programs to become increasingly more accurate (say, for the purposes of target marketing). That can lead to an astounding invasion of privacy. Consumers who discover they are being surveilled may very well create a backlash against those companies who fail to respect their privacy.
Board directors must ensure companies have systematic and robust processes for acquiring the data that feeds their AI algorithms in a way that does not infringe on their customers’ or employees’ privacy. Failure to do this may result in alienating those constituencies, not to mention inviting lawsuits. Microsoft, Facebook, Disney, Google, and other companies have faced lawsuits for just this reason.
Let’s assume all the data inputted into the algorithm is responsibly acquired. Next is the issue of whether the data is biased in a way that could lead to discriminatory behavior. We saw this in the case of Amazon’s hiring algorithm.
Amazon receives tens of thousands of applications for employment. Reviewing those resumes takes enormous human effort, so Amazon created software to read the resumes and throw out the ones unlikely to lead to a hire. It turned out, though, that Amazon’s software discriminated against women. When the software recognized it was reading the resume of a woman, it threw that resume out. How did this happen?
Like any other piece of AI, Amazon’s Human Resources AI Software first needed to be trained. It had to be given lots of resumes which were already reviewed by humans and told ‘the person who wrote this resume was hired’ and ‘the person who wrote this resume was not hired’. Then, the software took that information and looked for patterns of those people hired and not hired. The goal was to enter a resume and get a green light (“hire”) or a red light (“do not hire”) based on how well this new resume matched the resumes of those people hired in the past.
Amazon used the thousands of resumes it reviewed in the past decade for its training data. When the software searched for patterns in that data, it noticed something: women are not hired at the same rate as men. Thus, when a resume said, for instance, “Women’s NCAA Basketball,” it got a red light. This is discrimination at scale.
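A board could demand exactly this kind of audit before training ever starts. Here is a minimal Python sketch, on hypothetical data, of checking whether the training set itself encodes a gap that a model would then reproduce:

```python
# Sketch of a pre-training bias audit on (hypothetical) historical hiring
# data: if positive-outcome rates differ sharply across groups, the model
# will learn that difference as a "pattern".
import pandas as pd

history = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f", "m", "f"],
    "hired":  [1,   1,   0,   0,   0,   1,   1,   0],
})

hire_rates = history.groupby("gender")["hired"].mean()
print(hire_rates)
# f    0.25
# m    0.75   <- a 3x gap the algorithm would happily reproduce at scale
```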
Board directors must ensure their companies have systematic and robust processes that vet the data used to train AI for bias. This holds for more than HR AI algorithms. It matters when you’re talking about credit scores, granting a mortgage, setting insurance premiums, target marketing, and more.
Unexplainable AI
Artificially intelligent programs provide a wide variety of outputs. For instance, this person is high/low risk for a mortgage, this person’s credit score is X, this person should be hired/promoted/fired, this person should (not) be insured at this high/low premium, this person should (not) be admitted to this university, this person should (not) go on a date with this person, this person is (not) legally liable.
These big decisions have massive impact. Traditionally, they are decisions made by humans. As AI deployment increases, these decisions may be outsourced to AI algorithms. A problem is that we may not understand why an AI algorithm gives the output it does: why does this person get a green light and that other person a red light? In the dog-recognition software example, the AI may find patterns we had not considered or could never understand, like the degrees of the interior angles of the triangle formed by a dog’s two eyes and mouth. The ethical risk consists in being unable to explain why a company treats consumers and employees as it does. It is not justified to say “that’s just what the machine told us to do.” Once again, the risks are high, both in brand reputation damage for unexplainable (and potentially unfair) decisions and in exposure to lawsuits.
Again, directors and their companies need systematic and robust processes around what people should do with the output of an AI system. Blindly following the AI is not enough. What to do with that information must be clearly articulated and understood. Directors would be prudent to push engineers for “explainable AI,” that is, AI whose outputs are explainable to us mere mortals.
Final Remarks
Any company employing AI must address the ethical risks that, if not properly managed, threaten to damage the corporate brand and expose the company to litigation. Tackling these after a crisis occurs is painful and difficult. Companies and the directors whose charge it is to protect the long-term sustainability of their brands need to proactively weave ethical risk management strategies into their more general AI strategies.
To start, it is wise to implement processes and practices that protect companies when they purchase, integrate, and deploy AI. Consider the ethical-cum-reputational risks relating to privacy, bias, and explainability. Companies that don’t address these ethical risks of AI may find that the harm they ultimately suffer from the realized risks exceeds the gain from deploying AI solutions in the first place.
|
https://medium.com/@reid_blackman/the-ethical-and-reputational-risks-of-artificial-intelligence-ae3dbbc697d8
|
['Reid Blackman']
|
2020-04-23 17:52:38.375000+00:00
|
['Reputation Management', 'Technology', 'Risk', 'Ethics', 'Tech']
|
2,004 |
You’re Not A Mote of Dust
|
Spheres of influence
Consider mere humanity, once again.
Despite being dwarfed by the immensity of the cosmos, some people are working on some pretty neat things. For example:
Breakthrough Starshot aims to launch tiny, interstellar spacecraft at 20% of the speed of light within 20 years, on a realistic budget.
Artificial Superintelligence (ASI) is estimated by poll-of-experts to be around 50 years away.
These things are likely to happen within a human lifespan, and if you can imagine them, it’s not hard to fill in some more details.
— Intelligent, self-replicating machines will likely be developed after ASI, but we know they’re physically possible. After all, you are one. And we know they can be robust and powerful.
— We could also imagine a seed for a self-replicating ASI. Something that isn’t superintelligent now, but can become so, given the resources to grow and improve itself. Back in 1878, even Einstein was just a single cell, packed with potential.
— The escape velocity of our galaxy is something like a tenth of one percent of the speed of light. The tech to travel to the nearest star at high speed is about the same as the tech to travel to the nearest galaxy. The galaxy trip just takes a lot longer. But, coasting while dormant for a few million years shouldn’t be that big a leap (the more challenging part, it turns out, is slowing down enough to land). As a last resort, put the ASI on the problem for a few minutes to work out the details. If it gets done early, maybe it can work on boosting that “20% of the speed of light” target a bit.
What’s the point of this? We’re talking about the tech required to initiate full colonization of the Milky Way Galaxy, followed by the entire local group of galaxies, the local supercluster of galaxies, and so on. Forever. A growing sphere of influence that expands outward at a large fraction of the speed of light for billions of years, overtaking many millions of galaxies. Because these spacecraft can reproduce exponentially, the total cost of this project is just the cost to launch the first one. Depending on how the tech trends unfold, it might be initiated by someone alive today (or likely, many competing groups, when the tech begins to seem tantalizingly close-at-hand).
To be clear, we’re not talking about building fleets of Battlestar Galacticas, packing them with thousands of warm human bodies, and sending them on multi-million year voyages. Maybe that’s physically possible in a narrow sense, but also absurdly expensive and prone to failure. On the other hand, sending human minds will be absurdly cheap. Bodies are hardware — they can be built at the destination, when the time comes. What actually needs to be sent through space is information (e.g. instructions and mind content), and a minimal seed from which to build infrastructure — stuff that weighs mere grams, and will be dramatically more stable over a long voyage. Send the info, leave the meat behind.
Such a project will come to have serious resources at its disposal, to expend over cosmic time. Moons, planets, stars, dust, black holes, etc. Everything to be found in galaxies. Hundreds of millions of them, as the sphere of influence expands.
What will it do with those resources? I don’t know. For the argument I’m making here, we don’t need to know the end goal yet — I hope it’ll be something nice. But I can take a SWAG at how it gets there. There is a thing called the “maximum power principle” from ecology. It says that in a big, competitive environment (like an ecosystem, or an economy), the systems that survive and thrive tend to be those enabling the fastest use of energy. Maximum power. Gains in efficiency are always nice, but only insofar as they enable greater speed of energy use.
Imagine that principle, unleashed on a cosmic scale. The waste heat from such massive expenditures will, of course, have to be disposed of — radiated away into deep space. In fact, a very few SETI searches have looked for this kind of heat, coming from nearby galaxies. They haven’t looked far enough to see anything, but that’s a story for later.
The geometry of colliding civilizations is surprisingly simple at this scale, due to the uniform distribution of galaxies. Galaxy position data from the Sloan Digital Sky Survey.
The bottom line is this — if we do it, others will too — starting from their own homeworld in a galaxy far far away. They can’t be too close, else the universe would already be fully-packed with them (that principle limits how often they could appear, and thus how large our domain might grow). But if we do it, the question immediately shifts from “if” to “how often” other life appears and embarks on such massive projects. Perhaps our ever-expanding, intergalactic domains will collide, in the distant cosmic future.
|
https://medium.com/predict/youre-not-a-mote-of-dust-bfa9b993ddc4
|
['Jay Olson']
|
2021-09-12 19:58:12.778000+00:00
|
['Future', 'Astronomy', 'Cosmology', 'Technology', 'Future Technology']
|
2,005 |
How to Setup Push Notifications in React Native (iOS & Android)
|
Looking to set up Remote Push Notifications? I’ve got a post that covers exactly that using OneSignal. This post exclusively handles local notifications.
This week I wanted to try something different and, rather than write a blog post, record a screencast.
Video is a completely different beast from writing, and something I’m not (yet) comfortable with, especially the changed workflow. However, I did enjoy putting it together, and it can be nice looking over someone’s shoulder as they think through problems.
The screencast covers how to set up push notifications in React Native, for both Android and iOS. It leverages react-native-push-notification to handle the hard stuff. Check it out and I hope you enjoy! It’s just over 30 minutes in length. I would love feedback on how I can improve these types of tutorials.
|
https://medium.com/differential/how-to-setup-push-notifications-in-react-native-ios-android-30ea0131355e
|
['Spencer Carli']
|
2020-02-05 15:52:28.596000+00:00
|
['Technology', 'Mobile App Development', 'Push Notifications', 'React Native', 'React']
|
2,006 |
Hiding data from humans and computers in GIF files
|
That’s right. No extra data, no changes to the image. All the pixels and colours are exactly the same. Any data you want inside a GIF without any trace. But how? I’m glad you asked.
During high school I was interested in steganography, the study of hiding messages in plain sight. I had just finished a simulation project that involved encoding GIFs, so I was still in the headspace of the GIF codec. I began tinkering with its different elements and found an area I wanted to manipulate. Here’s how I encoded 127 bytes into a GIF without changing the file size.
Why is this special?
Many steganography algorithms and strategies change the pixels of an image in a way that is not visible to the human eye (for example, least-significant-bit steganography). To a computer, however, this difference is visible when compared with the original image. This algorithm produces a GIF with pixels identical to its original, and it does not take advantage of any metadata in the image either. Nothing is added, no pixels are changed, no metadata is changed. Just the order of a couple of bytes. This makes detection much harder, and it is why this algorithm is special.
The “residual image”: the small differences in the image that are detectable by a computer and not a human.
The GIF Codec: An Overview
The GIF codec is simple but powerful. It uses a very primitive form of compression called a colour palette. This limits the number of colours your images can use in order to reduce the size of each pixel. Instead of storing an (R, G, B) triple for each pixel, it simply stores an index (0, 1, …, 255) corresponding to a colour in the palette. Here’s an example palette.
An Example Palette with only 2 colours
To the computer, this is what an image with this palette would look like.
What the computer sees
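In code terms, here is a toy stand-in for what those illustrations show:

```python
# A two-colour palette and the indexed pixels the computer actually stores.
palette = [(0, 0, 0), (255, 255, 255)]   # index 0 = black, index 1 = white

pixels = [1, 1, 0, 1, 1,                 # each pixel is a palette index,
          1, 0, 1, 0, 1]                 # not a full (R, G, B) triple

rgb_image = [palette[i] for i in pixels] # decoding back to real colours
```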
A GIF can use either a global palette or local palettes. Local palettes are different for each frame, allowing for more colours and less compression, while a global palette is defined once for the entire animation, eliminating redundant colours between frames. My script focused on global palettes.
The Global Palette
The secret to my steganography lies within the global palette. Although the colours in the 256-element array are specific, their order is completely arbitrary. With this we have our first glimpse of the strategy: rearrange the elements in a certain way such that the ordering encodes some sort of message. We do this by first sorting the array of colours to form a base key. The colours are sorted lexicographically, red taking the highest priority and blue the lowest. As long as no colour occurs twice, there is always a winner between any two colours, which produces a definitive order.
Now that the array is sorted, imagine a new, empty array with indices from 0 to 255. When placing the first element of the sorted array, we may choose any of indices 0 to 255, encoding one of 256 possible values. When placing the second element, we have 255 remaining slots, encoding one of 255 possible values (ignoring the already-placed element and viewing the array as contiguous). We continue this pattern all the way down until there is only one option left for the 256th element of the sorted array, which encodes nothing. The sorting is important because it tells us the order of our digits: wherever the first element of the sorted array sits in the encoded array tells us our first decoded value.
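Here is a minimal Python reconstruction of that placement scheme (my own sketch, not the author's code linked at the end of this post). Each digit picks a slot among those still free, so digit i may range over 0 to 255 − i:

```python
# Reconstruction of the placement scheme described above: digits[i] ranges
# over 0..(n-1-i), and each digit picks a slot, among those still free,
# for the i-th colour of the lexicographically sorted palette.

def encode(sorted_colors, digits):
    free = list(range(len(sorted_colors)))        # indices still empty
    palette = [None] * len(sorted_colors)
    for color, d in zip(sorted_colors, digits):
        palette[free.pop(d)] = color              # digit d picks the d-th free slot
    return palette

def decode(sorted_colors, palette):
    free = list(range(len(palette)))
    digits = []
    for color in sorted_colors:                   # sorting gives the digit order back
        d = free.index(palette.index(color))
        free.pop(d)
        digits.append(d)
    return digits

colors = ["A", "B", "C", "D"]                     # already lexicographically sorted
hidden = [2, 0, 1, 0]                             # ranges: 4, 3, 2, 1 options
scrambled = encode(colors, hidden)                # -> ['B', 'D', 'A', 'C']
assert decode(colors, scrambled) == hidden
```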
When you line up this pattern, the small ranges can be matched with the large ranges so that each pair together covers a full byte of values (0 to 255).
So what we are left with is 128 bytes of data. To encode the data, simply split each byte into its large and small counterparts based on their order in the sequence. For example, if we are encoding the third number, we have already used 2 indices, which means our max index is 253, or 254 possible values. This is our “large” number. The small number then supplies the missing 3, since 253 + 3 = 256. So if our byte is 253, the 3-portion is set to 0; if our byte is 254, the 3-portion is set to 1, and so on. This way we can still represent a maximum of 255 and a minimum of 0 (a whole byte).
But wait, what about the number in the middle? Given that the first position offers 256 options and the last offers none, the position in the middle offers 128 options and has no counterpart. This exception evaded me for quite some time, until a group of peers I was presenting my idea to came across it before I did. So it is in fact only 127 whole bytes of data we are able to encode, with an odd 128-option digit left over in the middle.
Once you have the data available to you as an array, you can stop thinking about steganography and open your mind to all the possibilities of what you can store in that space! With specialized compression engines like smaz you can store 256+ characters. Additionally, you can encrypt these messages so they aren’t publicly decodable.
Reflection
After making this, I thought about it for some time. It’s quite an interesting topic that I never hear talked about in computer science: the data encoded in order. There is an immense amount of data encoded in the order in which you store things. Some choose to use it, some don’t. For example, a sorted list makes immense use of the data stored in its indices: log(n) search time! In this case, the GIF codec was not making use of the ordering.
This kind of thing makes me wonder. There are petabytes of data flowing through the internet on a daily basis. What secrets are hidden within them? Even developers with trained eyes for data would see this pass straight by and think nothing of it. Maybe this algorithm is already in use, but is ill discussed. After all, the whole purpose of the algorithm is for it to be undetectable.
(The code this blog is about can be found here)
|
https://medium.com/@calderwhite/hiding-data-from-humans-and-computers-in-gif-files-6d95523bf9bb
|
['Calder White']
|
2020-11-27 01:56:02.951000+00:00
|
['Programming', 'Image Processing', 'Data Science', 'Technology', 'Steganography']
|
2,007 |
DeFi can become a safe and profitable environment for all.
|
Astra can give some certainty.
Astra uniquely provides a way for decentralized organizations to comply with rules and regulations worldwide while remaining decentralized. At a pivotal time for the crypto world, Astra can give some certainty.
DeFi platforms offer higher yields than traditional financial platforms, making them very attractive to users. A staggering 11% of young Americans invested their stimulus checks in cryptocurrency, showing just how prevalent the technology has become. As innovative as decentralized finance is, the world of crypto is still in its infancy. With the rise in popularity comes an increase in hacks and other criminal activity, which authorities are keen to stamp out. Regulatory bodies are already sinking their teeth into those that appear to flout the rules. On 10 August 2021, the cryptocurrency derivatives trading platform BitMEX was ordered to pay a $100 million civil monetary penalty. Vincent McGonagle, acting director of enforcement at the CFTC, says, “Cryptocurrency trading platforms conducting business in the U.S. must obtain the appropriate registration and must implement robust Know-Your-Customer and Anti-Money Laundering procedures.” If they don’t, they’ll pay the price.
Astra is a fully decentralized platform that performs the required KYC, AML, and other compliance checks on behalf of lending and borrowing applications and other DeFi protocols.
In a speech made before the European Parliament Committee on Economic and Monetary Affairs, Chair of the U.S. Securities and Exchange Commission (SEC) Gary Gensler said, “absent clear investor protection obligations on these platforms, the investing public is left vulnerable. Unfortunately, this asset class has been rife with fraud, scams, and abuse in certain applications.”
He goes on to say, “for those who want to encourage innovations in crypto, I’d like to note that financial innovations throughout history don’t long thrive outside of public policy frameworks. In finance, that’s about protecting investors and consumers, guarding against illicit activity, and ensuring financial stability.”
The message is clear: by following the rules and regulations set out for traditional financial institutions, DeFi can become a safe and profitable environment for all. However, achieving compliance in a decentralized world remains a significant hurdle. Fortunately, Astra is here to help. Astra is a fully decentralized platform that performs the required KYC, AML, and other compliance checks on behalf of lending and borrowing applications and other DeFi protocols. Our technology allows decentralized organizations to comply with rules set out by the SEC and other regulatory bodies across the world without compromising the notion of decentralization. We aim to create a financial world that protects investors and consumers, enabling innovative platforms to flourish.
|
https://medium.com/@astraprotocol/defi-can-become-a-safe-and-profitable-environment-for-all-6ee50aa677e1
|
['Astra Protocol']
|
2021-09-03 16:07:16.239000+00:00
|
['Compliance', 'Crypto', 'Blockchain Technology', 'Defi']
|
2,008 |
10 Essential Gadgets For Any Ethical Hacker
|
10 Essential Gadgets For Any Ethical Hacker
Image by Pixabay
Sometimes in security audits you may face a scenario in which everything is managed correctly: security patches, policies, network segmentation, antivirus and user awareness are all well applied.
It is then that, to continue the analysis from the perspective of a researcher or security consultant, social engineering and tools like the ones covered in this post become far more valuable, as they may be the only ones that allow you to penetrate the target system.
These are hardware tools, mostly designed for security projects and research.
Here are the 10 tools every ethical hacker needs.
#1 Raspberry Pi 3
This is the third generation of these low-budget computers, which can be used for multiple purposes. A classic setup for security audits is a Raspberry Pi with a suitable battery pack, a distribution like Kali Linux and applications like FruityWifi, which together become a Swiss Army knife of pentesting.
#2 WiFi Pineapple
This set of tools for wireless penetration testing is very useful for different types of attacks, such as the classic Man-In-The-Middle. Through an intuitive web interface, you can connect from any device, such as a smartphone or tablet. It stands out for its ease of use, workflow management, the detailed information it provides and its ability to emulate various advanced attacks, all of which are always a couple of clicks away.
As a platform, the WiFi Pineapple supports numerous community-developed modules that expand its functionality. Fortunately, these modules can be installed for free directly from the web interface in seconds.
#3 Alfa Network Cards
A classic among the Wi-Fi cards used for packet injection. They stand out for the quality of their materials, as well as for using chipsets that can be put into monitor mode, a requirement for wireless audits.
#4 Rubber Ducky
This “special” pendrive is a device that functions as a pre-programmed keyboard in USB form. When connected to a computer, it begins typing automatically, launching programs and tools. These programs or tools may either already be on the victim’s computer or be loaded from the device’s microSD card in order to exfiltrate information.
Image by Pixabay
(If you watched the American drama thriller Mr. Robot, you’ll remember that in the second season a Rubber Ducky is a key ally for Angela, helping her get the login credentials of an E Corp executive.)
#5 Lan-Turtle
This penetration testing and system administration tool provides stealthy remote access, since it stays covertly connected to a USB port. In addition, it can collect information from the network and run Man-In-The-Middle attacks.
#6 HackRF One
This tool implements a powerful software-defined radio (SDR) system; that is, it is essentially a radio communication device that implements in software what is typically implemented in hardware. It can handle all kinds of radio signals between 10 MHz and 6 GHz from a single peripheral connected to the computer via USB.
#7 Ubertooth One
This device is built on an open-source 2.4 GHz development platform suitable for Bluetooth experimentation, letting you explore the different aspects of new wireless technologies of this type.
#8 Proxmark3-kit
The Proxmark III is a device developed by Jonathan Westhues that can read almost any RFID (Radio Frequency Identification) tag, as well as clone or sniff them. In addition, it can be operated autonomously, without a PC, using batteries.
#9 Lock picks
Lock picks are the staple of lockpicking, i.e., the art of opening a lock or physical security device by analyzing or manipulating its components, logically, without the original key. They come in a large number of sizes, formats and kits, and in many cases they will help you test physical security.
It is important to consider that in some countries possession of these is illegal, so we do not recommend any action that goes against the law; please check the regulations in your country before purchasing such tools.
#10 Keylogger Keyboard
An old classic for keystroke capture. This device can be connected via USB or PS/2 and sits stealthily between the keyboard and the PC, capturing every keystroke. Of course, it is usually undetectable by most security systems.
Even though Christmas is still a way off, you may be happy to treat yourself to some of these devices, which will undoubtedly accompany you through many hours of testing.
Overall, in your next pentest, they could be the gateway to a target that seemed impenetrable :)
|
https://medium.com/@thegradeks/10-essential-gadgets-for-any-ethical-hacker-a9b8bd1f11f8
|
[]
|
2020-12-13 20:54:28.814000+00:00
|
['Beginner', 'Tools', 'Technology', 'Programming', 'Hacking']
|
2,009 |
How does Overwolf make money?
|
The deep dive
It feels appropriate to open this discussion with the way some people think we make most of our money — collecting user data and selling it to the highest bidder.
Well, actually, we DON’T and we won’t sell any data.
With that cleared up, it’s worth noting that in the past, we had a deal with NewZoo — a trusted research agency that publishes reports on the gaming market. NewZoo used aggregated usage data from Overwolf to create reports like “the top 20 core PC games”, or the most used hardware. No personal information, just aggregated stats like the ones you’d find on the Steam software and hardware surveys.
This partnership accounted for under 1% of Overwolf’s revenue in 2019, so even back then it was definitely not where our money came from. In June 2020 we realized we’re not comfortable sharing any data with anyone, not even a research agency, so we ended that relationship and amended our privacy policy to reflect that.
We’ve also added ways for users to opt out of the data collection the Overwolf client does for personalization and analytics, which you can find on the “Privacy” tab in Overwolf settings.
So how DO we make money then?
The TL;DR is that creators on Overwolf can add “in-app ads” or subscription plans to their apps. Whenever developers choose to monetize their apps, we take a 30% cut, kind of like the Apple App Store.
A little over 85% of Overwolf’s revenue this year has come from these ads and subscriptions, with about 70% of that passed on to app creators. For mods and addons, 70% of the revenue generated by CurseForge is distributed to authors based on mod usage, similar to how Spotify pays artists.
Considering how big ads are in the revenue pie, they’re probably worth explaining. A lot of people think Overwolf is the one that places ads in apps, but that’s not actually the case. Overwolf is an engine for building apps, just like Unity is an engine for building games. And just like Unity has a “Unity ads” service for game devs, Overwolf has, well, an “Overwolf ads” service. We allow creators to add “in-app ads”, which are muted videos or static banners shown inside the app window. Many app creators choose not to include ads in their app, but creators who do must comply with Overwolf’s dos and don’ts — meaning no intrusive ads, no popup ads, no ads that run when the app is minimized, and more. Basically, nothing that interferes with the gaming experience.
The remaining 15%
So what about the rest of our revenue? The next 11% came from brands that run marketing projects and special activations on Overwolf, like the Intel Gaming Access loyalty program or the 2020 Alienware Games.
We like working on these activations because we get to design fun, engaging experiences that make sense for the brands, we get paid for it, and we also share amazing rewards with the community. In the past year alone, gamers won over 600,000 game keys, $100,000 in in-game currency, and other amazing rewards like Alienware PCs and peripherals.
The last 3% came from mobile ads, originating from Overwolf’s mobile apps — most notably Brawl Stats and Stats Royale.
And that’s the big picture
And that’s it, that’s the big picture. We hope this video and post helped shed more light on how Overwolf makes money, and how much of that money is actually passed on to creators.
If you’d like to see more deep dives and videos like these from us, please share your feedback at ideas.overwolf.com.
|
https://medium.com/overwolf/how-does-overwolf-make-money-f70a195a4ea9
|
['Gil Tov-Ly']
|
2020-12-01 04:23:41.790000+00:00
|
['Gaming', 'App Development', 'Startup', 'Technology', 'Esport']
|
2,010 |
Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job
|
Everyone who doubts you will always come back around.
That kid who used to bully you will come asking for a job.
Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore.
Good times and bad times. Happy times and sad times.
But always, life is a movement forward.
No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career.
What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.”
1. Most people are scared of using their imagination.
They’ve disconnected with their inner child.
They don’t feel they are “creative.”
They like things “just the way they are.”
2. Your dream doesn’t really matter to anyone else.
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you.
3. Friends are relative to where you are in your life.
Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends.
4. Your potential increases with age.
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way.
5. Spontaneity is the sister of creativity.
If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment!
6. You forget the value of “touch” later on.
When was the last time you played in the rain?
When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby?
Do that again.
You will feel so connected to the playfulness of life.
7. Most people don’t do what they love.
It’s true.
The "masses" are not the ones who live the lives they dreamed of living. The reason is that they didn't fight hard enough. They didn't make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you'll end up the same.
Don’t fall for the trap.
8. Many stop reading after college.
Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.”
9. People talk more than they listen.
There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again.
10. Creativity takes practice.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable.
If you want to keep your creative muscle pumped and active, you have to practice it on your own.
11. “Success” is a relative term.
As kids, we’re taught to “reach for success.”
What does that really mean? Success to one person could mean the opposite for someone else.
Define your own Success.
12. You can’t change your parents.
A sad and difficult truth to face as you get older: You can’t change your parents.
They are who they are.
Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door.
13. The only person you have to face in the morning is yourself.
When you’re younger, it feels like you have to please the entire world.
You don’t.
Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that.
14. Nothing feels as good as something you do from the heart.
No amount of money or achievement or external validation will ever take the place of what you do out of pure love.
Follow your heart, and the rest will follow.
15. Your potential is directly correlated to how well you know yourself.
Those who know themselves and maximize their strengths are the ones who go where they want to go.
Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future.
16. Everyone who doubts you will always come back around.
That kid who used to bully you will come asking for a job.
The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way.
Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help.
17. You are a reflection of the 5 people you spend the most time with.
Nobody creates themselves, by themselves.
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them.
18. Beliefs are relative to what you pursue.
Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs.
Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative.
Find what works for you.
19. Anything can be a vice.
Be wary.
Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them.
Never mistakes, always lessons.
As I said, know yourself.
20. Your purpose is to be YOU.
What is the meaning of life?
To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece.
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
|
https://medium.com/@olv772/life-is-a-journey-of-twists-and-turns-peaks-and-valleys-mountains-to-climb-and-oceans-to-explore-e4bcb7f4b81f
|
[]
|
2020-11-19 17:10:46.087000+00:00
|
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
|
2,011 |
How 5G Will Change Future Mobile Apps • WonderIT
|
Currently, the world is using the 4G network and people are pretty satisfied with it and its effect on mobile apps and websites. It offers us fast internet, fast downloading speed, etc. Let us give you all a brief on the development of the networks:
1G allowed voice calls between two cellular mobile phones; it emerged in the 1980s.
2G allowed us to send text messages.
3G gave us an internet connection.
4G gave us a faster network.
What will 5G do, then?
Nowadays, the 5G network is becoming an all-time favorite topic. People like to discuss it, whether they are experts or amateurs, people who have seen proof of 5G networking, or people who are just obsessed with random conspiracy theories. Either way, it all comes down to 5G and its impact on people and the world in general. But you're not here for small talk; you're here to see how the 5G network will affect mobile apps, and we're here to answer that. After this, you might also want to read our previous post on mobile apps vs. mobile websites: Mobile App Vs. Mobile Website: Choose The Right One
In this post, we will discuss the impact that the 5G network will have on existing mobile apps, apps that are in the process of development, and apps that are yet to be developed.
First and foremost- let’s answer the BIG question:
What is the 5G network?
Some people have the wrong idea that 5G is a modified version of 4G, but that is not the case. 5G is an entirely new network type that is soon to be released worldwide.
The 5G network sounds quite promising: it should offer us better connectivity, a stronger signal, and greater speed. For now, though, we can only speculate and let time tell.
The network offers peak speeds of up to 10 Gb/s (gigabits per second), which is a whole lot more than its 'ancestors' have ever offered us.
What are the benefits of a 5G network and how will it affect mobile apps?
[Continue Reading the Article…]
|
https://medium.com/@aneta-pejcinoska/how-5g-will-change-future-mobile-apps-wonderti-c5fec4d77246
|
['Pejcinoska Aneta']
|
2020-12-07 14:58:34.096000+00:00
|
['5g', 'Mobile Apps', '5g Technology', '5g Network', '5g Phone']
|
2,012 |
Advice for Building Successful Data-Driven Products
|
At Dialexa we partner with enterprises and startups alike to design, build, and deploy successful data-driven products from the ground up. A data-driven product is one that, at its core, is based on an intelligent engine that leverages data to automate decisions.
Take, for example, a platform that bids for online advertisement placements. This is a platform that would require some human input in the form of what advertisements they want to display, some optional hard bidding limits, and a configurable aggressiveness factor. The system itself could be driven by a machine-learned agent that makes bids, monitors the click-rates of an ad, and potentially makes online adjustments to itself to optimize bidding patterns.
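To make this concrete, here is a minimal Python sketch of what such a self-adjusting bidding agent could look like. The class, its click-rate rule, and the target CTR are illustrative assumptions for this post, not a description of any real bidding platform.

```python
class BiddingAgent:
    """Toy bidder: nudges its bid up or down from observed click-rate,
    within a hard user-set ceiling and a configurable aggressiveness factor."""

    def __init__(self, max_bid: float, aggressiveness: float = 0.1):
        self.max_bid = max_bid                # hard bidding limit from the user
        self.aggressiveness = aggressiveness  # configurable aggressiveness factor
        self.bid = max_bid / 2                # start mid-range
        self.impressions = 0
        self.clicks = 0

    def record(self, clicked: bool) -> None:
        """Monitor the click-rate of the ad as results come in."""
        self.impressions += 1
        self.clicks += int(clicked)

    def next_bid(self, target_ctr: float = 0.02) -> float:
        """Make an online adjustment to optimize the bidding pattern."""
        if self.impressions > 0:
            ctr = self.clicks / self.impressions
            step = self.aggressiveness * self.bid
            self.bid += step if ctr > target_ctr else -step
        return min(max(self.bid, 0.01), self.max_bid)


agent = BiddingAgent(max_bid=2.00)
agent.record(clicked=True)
print(f"Next bid: ${agent.next_bid():.2f}")  # -> Next bid: $1.10
```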
Product-focused data science and machine learning comes with a whole new set of challenges that typical data science projects are not constrained by. In product-focused projects, data scientists work with multidisciplinary teams of designers, software engineers, and product owners to make sure their models are aligned with business objectives, created within the constraints of the system, and delivered in an agile timeframe.
Over the years we have seen some common scenarios across multiple projects and have accumulated some techniques on how to mitigate risk, deliver value quickly, and build a robust plan for the future.
Here are some points of advice for those looking to build a data-driven product.
Acquire and analyze data early
Successful machine learning and data science products live and die by data. In Kaggle competitions and some domains of academic research, data is clean, accessible, trustworthy, and abundant enough to train a model from. Industrial data science data, however, is typically unformatted, noisy, and strictly governed.
One of the biggest challenges we've faced has been cutting the red tape just to get our hands on the right dataset. Enterprises have a treasure trove of data sitting idly in their warehouses, but between your team and that data sit multiple departments (legal, IT, governance) that need to approve the transfer, a potential negotiation process to buy the access rights, and a team of data engineers to settle on the data contract, all before the data is transferred to your team.
Without access to this data, a data scientist can only conjecture what they can do with it. It’s impossible to correctly assume that this data is ready for modeling or even has the signal needed to hit a target KPI. Getting the data early allows the team to return quick feedback before going too deep down a potentially unfeasible modeling path.
We recommend starting each data-driven product with a short “proof of value” phase. This is where a small team goes through the ropes of acquiring the data needed, establishes a baseline with an initial naive model, and sets attainable model KPIs based on that model. This is a low-risk way to verify the problem you are solving is possible with a small pool of resources.
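As an illustration, the modeling half of a proof-of-value phase can be as small as the sketch below. The synthetic dataset is only a stand-in for whatever data the team acquired; the point is that a naive model pins down the number any candidate model has to beat.

```python
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Stand-in for the dataset acquired during the proof-of-value phase.
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naive baseline: always predict the training-set mean.
baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)
baseline_mae = mean_absolute_error(y_test, baseline.predict(X_test))

# Attainable model KPIs are then set relative to this floor.
print(f"Baseline MAE any candidate model must beat: {baseline_mae:.2f}")
```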
Empathize with your end-users
When you’re building a product, you’re really building a tool to solve a problem to be leveraged by end-users. Users work in and interact with a product in a multitude of ways. Data-driven products add a focus on the process of receiving suggestions from your models and giving models feedback to learn from. To build a successful data-driven product, it’s crucial to first understand how your user plans to interact with your product, what they expect to see from the model, what control they have over outputs, and how they can provide feedback to the system.
The web application space has refined its design process for successful products by heavily incorporating a research phase. This phase typically includes building personas, gaining an understanding of both users and the machine through empathy mapping, and conducting user interviews. The output of this phase is the design of an interface that a product owner can be confident about and a team of engineers (and data scientists!) can execute on.
At Dialexa we’ve successfully injected data-focused prompts and questions into these tools to get insight into what a user actually wants from a model. These new data points give the data scientists metrics to hit, requirements on model architectures, and many times new features that they may have never considered!
AirBnB’s price suggestion feature
One great example of an intelligent feature in a product is AirBnB’s listing price suggestion. Some great takeaways from this feature that could be discovered in the research phase are:
It’s just a suggestion, give the user control of the final price
They give top factors on why a price was selected
They allow users to give direct feedback on their pricing models
This feature isn't perfect and has been criticized for pricing listings too low, among other complaints. These could be addressed by, again, empathizing with your users' concerns. I believe there are multiple areas for improvement in this feature based on the feedback. One way to build trust with end-users would be to invest in a model that outputs a confidence interval along with the decision. They may have to sacrifice some accuracy, but as long as the model is still acceptably accurate, the end-users would likely be happier with the feature as a whole.
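As a sketch of that idea (AirBnB's actual model internals aren't public, so this is only one assumed approach), quantile regression is a standard way to turn a point estimate into a range:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for listing features and observed nightly prices.
X, y = make_regression(n_samples=2000, n_features=6, noise=15.0, random_state=1)

# Fit the 10th and 90th percentiles instead of a single point estimate.
low = GradientBoostingRegressor(loss="quantile", alpha=0.10).fit(X, y)
high = GradientBoostingRegressor(loss="quantile", alpha=0.90).fit(X, y)

listing = X[:1]  # one listing to price
print(f"Suggested price range: {low.predict(listing)[0]:.0f}"
      f" to {high.predict(listing)[0]:.0f}")
```

A wide interval tells the user the model is unsure about that particular listing, which is itself useful feedback.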
Empathize with your models
Just like the end-users, models need love too. These models aren't standalone — the whole team has to get on the same page so the engineers can write supporting software, the designers can wireframe UIs, and the stakeholders can set their delivery expectations. It's crucial to gather requirements as a team before going gung-ho on building a model or a model-based feature.
One of the approaches that we’ve picked up comes from our research and design team. We’ve adapted the user empathy map to empathize with a model-based feature. Here’s a great article describing the process in-depth. The gist of the exercise is to get the team thinking about the feature and take notes on the following:
Senses — What data and variables does the model need?
Does — What does the model output and what actions are taken?
Says — How does the user know why the model made a decision?
Thinks — What hard rules does the feature have to follow?
Feels — How do we know the feature is doing what we expect?
These are our interpretations of the categories that have worked well for our team. Some categories like “says” and “feels” can be particularly hard to wrap your head around. We prime the team to start thinking in the right direction by providing examples of a similar feature. For example, some sticky notes for the AirBnB price suggestion tool could be:
Senses
Location data of the rented unit
Day of the week of the listing
Does
A suggested price
A range of good prices
Says
Similar listings in the area
Breakdown of pricing factors
Thinks
Can’t go below the minimum break-even price
Are there legal considerations?
Feels
Direct user feedback from the tool
Are users staying within the range?
The output of this session is a shared understanding and a clear set of requirements for all players on this feature. At a high level, data scientists can start designing a model architecture, engineers can plan work for the new data feeds and API endpoints, designers can wireframe components, and the product owner knows exactly what’s going to be delivered. What an incredible exercise!
Start with simplicity, expand with complexity
On a product team, a data scientist's work is often a dependency of other team members' work. A backend engineer can't effectively develop and test software to support their model until they have access to it. On top of this, a product owner might want to push out the feature for beta testing sooner than the team can optimize the model.
The first and most important action to take in this situation is to communicate with your team. Document the expected inputs and outputs to work around, and set expectations on when a model will be ready. This should be fleshed out at a high level after a model empathy map! The next option to consider is to not use a machine learning model at all, or to drastically simplify the approach.
One of the hardest realities for a tried-and-true machine learning engineer to cope with on a product team is that machine learning is a means to an end, not the end itself. As a machine learning engineer who loves to read and learn about the bleeding-edge advances in the field — it pains me to write that. But in reality, most features are supported successfully by a naive model or even a heuristic — you don’t need deep learning to solve every problem.
Quickly deploying a simple model, or at least the interface for a model, unblocks the rest of the team to get their gears turning. Engineers can quickly start developing off of that model with confidence and stakeholders can monitor the KPIs of the model in the product and turn on the full feature when it’s acceptable.
Take, for example, the AirBnB price suggestion model again. After defining the full-fledged feature, the team can build and deploy a quick heuristic engine by averaging the listing prices in the surrounding area. The engineers can develop off of that heuristic-based model and hide it behind a feature flag, waiting to be turned on in production. Meanwhile, the data science team, SMEs, and product owners can work together to iterate on the model until it's ready and then release it to the end-user.
This is a process that has worked wonders for us. We’ve been able to quickly iterate through more and more advanced models, test the models in a production-like environment, and release model-driven features with complete safety.
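A minimal sketch of that "heuristic first, model later" pattern might look like the following; the flag name and the ml_model placeholder are hypothetical:

```python
from statistics import mean

FEATURE_FLAGS = {"ml_price_suggestion": False}  # model hidden until it's ready

def heuristic_price(nearby_prices):
    """V1 engine: average of the listing prices in the surrounding area."""
    return round(mean(nearby_prices), 2)

def suggest_price(listing_features, nearby_prices):
    if FEATURE_FLAGS["ml_price_suggestion"]:
        # ml_model is a placeholder for whatever the data science team
        # iterates on while the heuristic serves production traffic.
        return ml_model.predict(listing_features)
    return heuristic_price(nearby_prices)

print(suggest_price(None, [120.0, 95.0, 140.0, 110.0]))  # -> 116.25
```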
Final thoughts
These are just a few of the many techniques and processes our teams have adopted for delivering data-driven products. There are many more lessons learned along the way that we’re eager to share and help other product teams implement.
|
https://medium.com/back-to-the-napkin/advice-for-building-successful-data-driven-products-fbc79f8a1be7
|
['Rowdy Howell']
|
2019-08-21 15:59:06.738000+00:00
|
['Technology', 'Machine Learning', 'Product', 'Data Science', 'Data Driven']
|
2,013 |
Can you teach me how to code? Why my answer always starts with no and ends with yes
|
Can you teach me how to code? Why my answer always starts with no and ends with yes
it’s not as glamorous as Hollywood makes it seem
Can you teach me how to code? Sit me down and flip open a book, going from beginning to end like a textbook? Can you teach me to code, so I can make games and maybe hack the Pentagon? Can you teach me to code, so I can make millions and billions, doing almost nothing but drag and drop?
Can you teach me to code?
The short answer, my young dreamer, is no.
The long answer is yes — just not how you want me to teach you.
The thing about code is that it's not that glamorous. It's long hours, days and nights spent arguing with a computer, and the computer is always right. It's a one-way relationship, with Google as your companion, and maybe a Stack Overflow friend.
It’s a series of one confusion after the next, silent failures and red errors.
Why?
Because code is a language, and languages are tools of communication. You can rote-learn code, but to create something out of it — well, that's a different story.
It’s the difference between tracing and learning to draw. Anyone can trace a picture, but not everyone can draw. The ability to draw includes the ability to identify shapes and replicate them on a medium. That’s what coding is — a process of identifying the different necessary parts to make a system. After that comes creating the pieces, the struts and the beams that make up the application.
When you code, you are the builder and the architect, the mastermind of your little plot of sandbox. But like in real life, you can’t just build a house and call it a day. There are resource consents, council permissions, engineering reports, provisioning, and even the weather forecast to take into account. The equivalent to this is your project manager, client demands, team member’s opinions and thoughts, and legacy code (if you’ve got any).
Over time, you find yourself spending more time in the process than the actual code itself. It’s easy to create code when you’re by yourself — but that’s rarely the case. Your work is absorbed into the multitude of moving parts that are supposed to work together like a seamless machine made of digital cogs mostly encased in curly brackets. Sometimes it gets edited by others, morphing into something completely different within a few months, if not weeks.
So can I teach you how to code?
Probably not — not the way you want me to teach you — the linear point A to B, x = y = z kind of formula. I’m not that kind of teacher.
You probably have a project in mind, a dream that you want to fulfill, an app, a game, a something that will materialize if only you knew how to code. Let me tell you this, you’re doing it backward.
If you've got an idea, work out the bits and pieces first. Figure out what you need as your features, why you need them, and how they're going to work. Figure out how the bits and pieces group together and the relationships between them.
Then you can start coding. Pick the smallest set of features — the minimum collection of things you need to get your app, dream, idea running. Then jump right into it.
Get it working as quickly as possible. Learn to fail and fail often. Become friends with failure. Once you do, every time something falls apart, you get better at picking yourself up. You learn to fix things faster and recognize your potential failing points before they happen.
Coding is nuanced to the language you choose, so learn the technical basics and get creating as quickly as possible. Read around topics that can supplement your app, dream, thing. Look up patterns. Create a series of dots that will eventually make sense.
And trust that these knowledge points will eventually make sense.
Code is a tool. It’s not some magical thing that will make all your dreams come true. Code is a thing that only materializes properly if you are clear on what you want to achieve from it.
So can I teach you how to code? Probably not.
Can I give you some of the pieces of the puzzles that you might need? Probably yes.
|
https://medium.com/madhash/can-you-teach-me-how-to-code-why-my-answer-always-starts-with-no-and-end-with-yes-7c94f7800f56
|
['Aphinya Dechalert']
|
2020-08-04 06:01:01.075000+00:00
|
['Software Engineering', 'Software Development', 'Ideas', 'Web Development', 'Technology']
|
2,014 |
The future of work is happening now thanks to Digital Workplace Services
|
Businesses, schools, and governments have all had to rethink the proper balance between in-person and remote work. And because that balance is a shifting variable — and may well continue to be for years after the pandemic — it remains essential that the underlying technology be especially agile.
The next BriefingsDirect worker strategies discussion explores how a partnership behind a digital workplace services solution delivers a sliding scale for blended work scenarios. We’ll learn how Unisys, Dell, and their partners provide the time-proof means to secure applications intelligently — regardless of location.
We’ll also hear how an increasingly powerful automation capability makes the digital workplace easier to attain and support.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
To learn more about the latest in cloud-delivered desktop modernization, please welcome Weston Morris, Global Strategy, Digital Workplace Services, Enterprise Services, at Unisys, and Araceli Lewis, Global Alliance Lead for Unisys at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Weston, what are the trends, catalysts, and requirements transforming how desktops and apps are delivered these days?
Morris: We’ve all lived through the hype of virtual desktop infrastructure (VDI). Every year for the last eight or nine years has supposedly been the year of VDI. And this is the year it’s going to happen, right? It had been a slow burn. And VDI has certainly been an important part of the “bag of tricks” that IT brings to bear to provide workers with what they need to be productive.
COVID sends enterprises to cloud
But since the beginning of 2020, we’ve all seen — because of the COVID-19 pandemic — VDI brought to the forefront in the importance of having an alternative way of delivering a digital workplace to workers. This has been especially important in environments where enterprises had not invested in mobility, the cloud, or had not thought about making it possible for user data to reside outside of their desktop PCs.
Those enterprises had a very difficult time moving to a work-from-home (WFH) model — and they struggled with that. Their first instinct was, “Oh, I need to buy a bunch of laptops.” Well, everybody wanted laptops at the beginning of the pandemic, and secondly, they were being made in China mostly — and those factories were shut down. It was impossible to buy a laptop unless you had the foresight to do that ahead of time.
And that’s when the “aha” moment came for a lot of enterprises. They said, “Hey, cloud-based virtual desktops — that sounds like the answer, that’s the solution.” And it really is. They could set that up very quickly by spinning up essentially the digital workplace in the cloud and then having their apps and data stream down securely from the cloud to their end users anywhere. That’s been the big “aha” moment that we’ve had as we look at our customer base and enterprises across the world. We’ve done it for our own internal use.
Gardner: Araceli, it sounds like some verticals and in certain organizations they may have waited too long to get into the VDI mindset. But when the pandemic hit, they had to move quickly.
What is about the digital workplace services solution that you all are factoring together that makes this something that can be done quickly?
Lewis: It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone.
You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices.
What we've seen is that the digitalization of it all can be done in the comfort of everyone's home. You're seeing people looking at x-rays, or a nurse looking into someone's throat via telemedicine, for example. These remote users are also able to troubleshoot something that might be across the world using technologies like virtual reality (VR) and wearables.
We merged and blended all of those technologies into this workspaces environment with the best alliance partners to deliver what the C-level executives wanted immediately.
Gardner: The pandemic has certainly been an accelerant, but many people anticipated more virtual delivery of desktops and apps as inevitable. That’s because when you do it, you get other timely benefits, such as flexible work habits. Millennials tend to prefer location-independence, for example, and there are other benefits during corporate mergers and acquisitions and for dynamic business environments.
So, Weston, what are some of the other drivers that reward people when they make the leap to virtual delivery of apps and desktops?
Take the virtual leap, reap rewards
Morris: I’m thinking back to a conversation I had with you, Araceli, back in March. You were excited and energized around the topic of business continuity, which obviously started with the pandemic.
But, Dana, there are other forces at work that preceded the pandemic and that we know will continue after it. And mergers and acquisitions are a very big one. We see a tremendous amount of activity there in the healthcare space, for example, which was affected in multiple ways by the pandemic. Pharmaceuticals and life sciences as well; there are multiple merger activities going on there.
One of the big challenges in a merger or acquisition is how to quickly get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s where you do the work in the old company.” Or you just pull the plug and say, “Now, you have to figure out how to do everything in a new way in web time, including human resources and all of those procedures in a new environment — and hopefully you will figure it all out.”
But with a cloud-based, virtual desktop capability — especially with cloud-bursting — you can quickly spin up as much capacity as you need and build upon the on-premises capabilities you already have, such as on Dell EMC VxRail, and then explode that into the cloud as needed using VMware Horizon to the Microsoft Azure cloud.
That's an example of providing a virtual desktop for all of the newly acquired employees to do their new corporate-citizen work while they keep their existing environment and continue to be productive doing the job you hired them to do when you made the acquisition. That's a very big use case that we're going to continue to see going forward.
Gardner: Now, there were a number of hurdles historically to everyone adopting VDI. One of the major use cases was, of course, security and being able to control content by having it centrally located on your servers or on your cloud — rather than stored out on every device. Is that still a driving consideration, Weston? Are people still looking for that added level of security, or has that become passé?
Morris: Security has become even more important throughout the pandemic. In the past, to a large extent, the corporate firewall-as-secure-the-perimeter model has worked fairly well. And we’ve been punching holes in the firewall for several years now.
But with the pandemic — with almost everyone working from home — your office network just exploded. It now extends everywhere. Now you have to worry about how well secured any one person's home network is. Have they changed the default password on their home router? Have they updated the firmware on it? A lot of these things are beyond what the average worker can be expected to worry about and think through.
But if we separate out the workload and put it into the cloud — so that you have the digital workplace sitting in the cloud — that is much more secure than a device sitting on somebody’s desk connected to a very questionable home network environment.
Gardner: Another challenge in working toward more modern desktop delivery has been cost, because it’s usually been capital-intensive and required upfront investment. But when you modernize via the cloud that can shift.
Araceli, what are some of the challenges that we’re now able to overcome when it comes to the economics of virtual desktop delivery?
Cost benefits of partnering
Lewis: The beautiful thing here is that in our partnership with Unisys and Dell Financial Services (DFS), we’re able to utilize different utility models when it comes to how we consume the technology.
We don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. So, that’s extremely flexible.
And by partnering with Unisys, they secure those VDI solutions across all of the three core components: The VDI portion within the data center, the endpoint devices, and of course, the software. By partnering with Unisys in our alliance ecosystem, we get the best of DFS, Dell Technology, VMware software, and Unisys security capabilities.
Gardner: Weston, another issue that’s dogged VDI adoption is complexity for the IT department. When we think about VDI, we can’t only think about end users. What has changed for how the IT department deploys infrastructure, especially for a hybrid approach where VDI is delivered both from on-premises data centers as well as the cloud?
Intelligent virtual agents assist IT
Morris: Araceli and I have had several conversations about this. It's an interesting topic. There has always been a lot of work to stand up VDI. If you're starting from scratch, you're thinking about storage, IOPS, and network capacity. Where are my apps? What's the connectivity? How are we going to run it at optimal performance? After all, are the end users happy with the experience they're getting? And how can I even know what their experience is?
And now, all that’s changed thanks to the evolving technology. One is the advent of artificial intelligence (AI) and the use of personal intelligent virtual assistance. At home, we’re used to that, right? We ask Alexa, Siri, or Cortana what’s going on with the weather? What’s happening in the news? We ask our virtual assistants all of these things and we expect to be able to get instant answers and help. Why is that not available in the enterprise for IT? Well, the answer is it is now available.
As you can imagine on the provisioning side, wouldn’t it be great if you were able to talk to a virtual assistant that understood the provisioning process? You simply answer questions posed by the assistant. What is it you need to provision? What is your load that you’re looking at? Do you have engineers that need to access virtual desktops? What types of apps might they need? What is the type of security?
Then the virtual assistant understands the business and IT processes to provision the infrastructure needed virtually in the cloud to make that all happen or to cloud-burst from your on-premises Dell VxRail into the cloud.
That is a very important game changer. The other aspect of the intelligent virtual agent is it now resides on the virtual desktop as well. I, as an at-home worker, may have never seen a virtual desktop before. And now, the virtual assistant pops up and guides the home worker through the process of connecting, explaining how their apps work, and saying, “I’m always here. I’m ready to give you help whenever possible.” But I think I’ll defer to the expert here.
Araceli, do you want to talk about the power of the hybrid environment and how that simplifies the infrastructure?
Multiple workloads managed
Lewis: Sure, absolutely. At Dell EMC, we are proud of the fact that Gartner rates us number one, as a leader in the category for pretty much all of the products that we’ve included in this VDI solution. When Unisys and my alliances team get the technology, it’s already been tested from a hyper-converged infrastructure (HCI) perspective. VxRail has been tested, tried-and-true as an automated system in which we combine servers, storage, network, and the software.
That way, Weston and I don't have to worry about what size we are going to use. We actually already have T-shirt sizes worked out for the number of VDI users that are needed. We have the graphics-intensive portion of it thought out. And we can basically deploy quickly and then put the workloads on them as we need to spin them up or down, or to add more.
We can adjust on the fly. That's a true testament to our HCI being the backbone of the solution. And we don't have to get into all of the testing, regression testing, and the automation and self-healing of it, because a lot of that management would otherwise have had to be done by enterprise IT or by a managed services provider; instead, it's handled via the lifecycle management of the Dell EMC VxRail HCI solution.
That is a huge benefit, the fact that we deliver a solution from the value line and the hypervisor on up. We can then focus on the end-user services, and we don't have to be swapping out components or troubleshooting, because of all the refinement that Dell has done in that technology today.
Morris: Araceli, the first time you and your team showed me the cloud-bursting capability, it just blew me away. I know in the past how hard it was to expand any infrastructure. You showed me where, you know, every industry and every enterprise are going to have a core base of assumptions. So, why not put that under Dell VxRail?
Then, as you need to expand, cloud-burst into, in this case, Horizon running on Azure. And that can all be done now through a single dashboard. I don't have to be thinking, "Okay, now I have this separate workload in the cloud, and this other workload on my on-premises cloud with VxRail." It's all done through one, single dashboard that can be automated on the back end through a virtual agent, which is pretty cool.
Gardner: It sure seems in hindsight that the timing here was auspicious. Just as the virus was forcing people to rapidly find a virtual desktop solution, you had put together the intelligence and automation along with software-defined infrastructure like HCI. And then you also gained the ease in hybrid by bursting to the cloud.
And so, it seems that the way that you get to a solution like this has never been easier, just when it was needed to be easy for organizations such as small- to medium-sized businesses (SMBs) and verticals like public sector and education. So, was the alliance and partnering, in fact, a positive confluence of timing?
Greater than sum of parts
Morris: Yes. The perfect storm analogy certainly applies. It was great when I got the phone call from Araceli, saying, “Hey, we have this business continuity capability.” We at Unisys had been thinking about business continuity as well.
We looked at the different components that we each brought: Unisys with its security around Stealth, plus the capability to proactively monitor infrastructure and desktops, see what's going on, and automatically fix issues via the intelligent virtual agent and automation. We realized that this was really a great solution, a much better solution than the individual parts.
We could not make this happen without all of the cool stuff that Dell brings in terms of the HCI, the clients, and, of course, the very powerful VMware-based virtual desktops. And we added to that some things that we have become very good at in our digital workplace transformation. The result is something that can make a real difference for enterprises. You mentioned the public sector and education. Those are great examples of industries that really can benefit from this.
Gardner: Araceli, anything more to offer on how your solution came together, the partners and the constituent parts?
Lewis: Consistent infrastructure, operations, and the help of our partner, Unisys, globally, delivers the services to the end users. This was just a partnership that had to come together.
We at Dell couldn’t do it alone. We needed those data center spaces. We needed the capabilities of their architects and teams to deliver for us. We were getting so many requests early during the pandemic, an overwhelming amount of demand from every C-level suite across the country, and from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking. But we knew if we partnered with them, we could give our community what they needed to get through the pandemic.
Gardner: And among those constituent parts, how important a part is Horizon? Why is it so important?
Lewis: VMware Horizon is the glue. It streamlines desktop and app delivery in various ways. The first would be by cloud-bursting. It actually gives us the capability to do that in a very simple fashion.
Secondly, it’s a single pane of glass. It delivers all of the business-critical apps to any device, anywhere on a single screen. So that makes it simple and comprehensive for the IT staff.
We can also deliver non-persistent virtual desktops. The advantage here is that it makes software patching and distribution a whole lot easier. We don't have all the complexity. If there is ever a security concern or issue, we simply blow away that non-persistent virtual desktop and start all over. It gets us back to square one instantly, where we would otherwise have to spend countless hours on backups and restores to get back to a safe state. So, it pulls everything together: the end user gets a seamless interface, the IT staff are spared the complexity, and we get the best of both worlds as we move out to the cloud.
Gardner: Weston, on the intelligent agents and bots, do you have an example of how it works in practice? It's really fascinating to me that you're using AI-enabled robotic process automation (RPA) tools to help the IT department set this up. And you're also using it to help the end-user learn how to onboard themselves, get going, and then get ongoing support.
Amelia AI ascertains answers
Morris: It’s an investment we began almost 24 months ago, branded as the Unisys InteliServe platform, which initially was intended to bring AI, automation, and analytics to the service desk. It was designed to improve the service desk experience and make it easier to use, make it scalable, and to learn over time what kinds of problems people needed help solving.
But we realized once we had it in place, “Wow, this intelligent virtual agent can almost be an enterprise personal assistant where it can be trained on anything, on any business process.” So, we’ve been training it on fixing common IT problems … password resets, can’t log in, can’t get to the virtual private network (VPN), Outlook crashes, those types of things. And it does very well at those sorts of activities.
But the core technology is also perfectly suited to be trained for IT processes as well as business processes inside of the enterprise. For example, for this particular scenario of supporting virtual desktops. If a customer has a specific process for provisioning virtual desktops, they may have specific pools of types of virtual desktops, certain capacities, and those can be created ahead of time, ready to go.
Then it's just a matter of communicating with the intelligent virtual assistant to say, "I need to add more users to this pool," or, "We need to remove users," or, "We need to add a whole new pool." The agent is branded as Amelia. It has a female voice, though it doesn't have to be, but in most cases, it is.
When we speak with Amelia, she’s able to ask questions that guide the user through the process. They don’t have to know what the process is. They don’t do this very often, right? But she can be trained to be an expert on it.
Amelia collects the information needed, submits it to the RPA that communicates with Horizon, Azure, and the VxRail platforms to provision the virtual desktops as needed. And this can happen very quickly. Whereas in the past, it may have taken days or weeks to spin up a new environment for a new project, or for a merger and acquisition, or in this case, reacting to the pandemic, and getting people able to work from home.
By the same token, when the end users open up their virtual desktops, they connect to the Horizon workspace, and there is Amelia. She’s there ready to respond to totally different types of questions: “How do I use this?” “Where’s my apps?” “This is new to me, what do I do? How do I connect?” “What about working from home?” “What’s my VPN connection working like, and how do I get that connected properly?” “What about security issues?” There, she’s now able to help with the standard end-user types issues as well.
Gardner: Araceli, any examples of where this intelligent process automation has played out in the workplace? Do we have some ways of measuring the impact?
Simplify, then measure the impact
Lewis: We do. It’s given us, in certain use cases, the predictability and the benefit of a pay-as-you-grow linear scale, rather than the pay-by-the-seat type of solution. In the past, if we had a state or a government agency where they need, for example, 10,000 seats, we would measure them by the seat. If there’s a situation like a pandemic, or any other type of environment where we have to adjust quickly, how could we deliver 10,000 instances in the past?
Now, using Dell EMC ready-architectures with the technologies we’ve discussed — and with Unisys’ capabilities — we can provide such a rapid and large deployment in a pay-as-you-grow linear scale. We can predict what the pricing is going to be as they need to use it for these public sector agencies and financial firms. In the past, there was a lot of capital expenditures (CapEx). There was a lot of process, a lot of change, and there were just too many unknowns.
These modern platforms have simplified the management of the backends of the software and the delivery of it to create a true platform that we can quantify and measure — not only just financially, but from a time-to-delivery perspective as well.
Morris: I have an example of a particular customer where they had a manual process for onboarding. Such onboarding includes multiple steps, one of which is, “Give me my digital workplace.”
But there are other things, too. The training around gaining access to email, for example. That was taking almost 40 hours. Can you imagine a person starting their job, and 40 hours later they finally get the stuff they need to be productive? That’s a lot of downtime.
After using our automation, that transition was down to a little over eight hours. What that means is a person starts filling out their paperwork with HR on day one, gets oriented, and then the next day they have everything they need to be productive. What a big difference. And in the offboarding — it’s even more interesting. What happens when a person leaves the company? Maybe under unfavorable circumstances, we might say.
In the past, the manual processes for this customer took almost 24 hours before everything was turned off. What does that mean? That means that an unhappy, disgruntled employee has 24 hours. They can come in, download content, get access to materials or perhaps be disruptive, or even destructive, with the corporate intellectual property, which is very bad.
Through automation, this offboarding process is now down to six minutes. I mean, that person hasn't even walked out of the room and they've been locked out completely from that IT environment. And that can be done even more quickly if we're talking about a virtual desktop environment, in which the switch can be thrown immediately and completely. Access is completely and instantly removed from the virtual environment.
Gardner: Araceli, is there a best-of-breed, thin-client hardware approach that you’re using? What about use cases such as graphics-intense or computer-aided design (CAD) applications? What’s the end-point approach for some of these more intense applications?
Viable, virtual, and versatile solutions
Lewis: Being Dell Technologies, that was a perfect question for us, Dana. We understand the persona of the end users. As we roll out this technology, let's say it's for an engineering team that does CAD drawings. If you look at the persona, and we partner with Unisys to look at what each end-user needs, you can determine if they need more memory, more processing power, or a more graphics-intensive device. We can do that. Our Wyse thin clients can do that: the Wyse 3000s and the 5000s.
But I don't want to pinpoint one specific type of device per user, because we could be talking about a doctor, or we could be talking about a nurse in an intensive care unit who is going to need something more mobile. We can also provide end-user devices that are ruggedized for, say, an oil field or a construction site. So, from an engineering perspective, we can adapt the end-user device to each persona and its needs, and we can meet all of those requirements. It's not a problem.
Gardner: Weston, anything from your vantage point on the diversity and agility of those endpoint devices and why this solution is so versatile?
Morris: There is diversity at both ends. Araceli, you talked about being able to on the backend provision and scale up and down the capacity and capability of a virtual desktop to meet the personas’ needs.
And then, on the end-user side, you mentioned Millennials, Dana. They may want a choice of how they connect. Am I connecting in through my own personal laptop at home? Do I want to have access to a thin client when I go back to work? Do I want to come in through a mobile? And maybe I want to do all three in the same day, without losing work in between. That is all entirely possible with this infrastructure.
Gardner: Let’s look to the future. We’ve been talking about what’s possible now. But it seems to me that we’ve focused on the very definition of agility: It scales, it’s fast, and it’s automated. It’s applicable across the globe.
What comes next? What can you do with this technology now that you have it in place? It seems to me that we have an opportunity to do even more.
Morris: We’re not backing down from AI and automation. That is here to stay, and it’s going to continue to expand. People have finally realized the power of cloud-based VDI. That is now a very important tool for IT to have in their bag of tricks. They can respond to very specific use cases in a very fast, scalable, and effective way.
In the future we will see that AI continues to provide guidance, not only in the provisioning that we’ve talked about, not only in startup and use on the end-user side — but in providing analytics as to how the entire ecosystem is working. That’s not just the virtual desktops, but the apps that are in the cloud as well and the identity protection. There’s a whole security component that AI has to play a role in. It almost sounds like a pipe dream, but it’s just going to make life better. AI absolutely will do that when it’s used appropriately.
Lewis: I’m looking to the future on how we’re going to live and work in the next five to 10 years. It’s going to be tough to go back to what we were used to. And I’m thinking forward to the Internet of Things (IoT). There’s going to be an explosion of edge devices, of wearables, and how we incorporate all of those technologies will be a part of a persona.
Typically, we’re going to be carrying our work everywhere we go. So, how are we going to integrate all of the wearables? How are we going to make voice recognition more adaptable? VR, AI, robotics, drones — how are we going to tie all of that together?
Nowadays, we tie our home systems and our cooling and heating to all of the things around us to interoperate. I think that’s going to go ahead and continue to grow exponentially. I’m really excited that we’ve partnered with Unisys because we wouldn’t want to do something like this without a partner who is just so deeply entrenched in the solutions. I’m looking forward to that.
Gardner: What advice would give to an organization that hasn’t bitten off the virtual desktop from the cloud and hybrid environment yet? What’s the best way to get started?
Morris: It’s really important to understand your users, your personas. What are they consuming? How do they want to consume it? What is their connectivity like? You need to understand that, if you’re going to make sure that you can deliver the right digital workplace to them and give them an experience that matters.
Lewis: At Dell Technologies, we know how important it is to retain our top and best talent. And because we’ve been one of the top places to work for the past few years, it’s extremely important to make sure that technology and access to technology help to enable our workforce.
I truly feel that any of our customers or end users who haven't looked at VDI haven't yet realized its benefits: the savings, the competitive advantage in this fast-paced world, and the ability to retain talent. To do that, they need to give their employees the best tools and the best capabilities to be the very best. They have to look at VDI in some way, shape, or form. As soon as we bring it to them — whether technically, financially, or for competitive factors — it really makes sense. It's not a tough sell at all, Dana.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.
YOU MAY ALSO BE INTERESTED IN:
|
https://medium.com/@danagardner/the-future-of-work-is-happening-now-thanks-to-digital-workplace-services-9d26a89cf2b0
|
['Dana Gardner']
|
2020-12-22 16:08:26.987000+00:00
|
['Remote Working', 'Information Technology', 'Security', 'Workspace', 'Bots']
|
2,015 |
E-Learning Art: Improve E-Learning Experience With Colors
|
E-learning art used to animate the imagination of learners is more effective when the right color combinations are used.
Specific colors like green, orange, and blue target key motivation centers within the individual and improve conditions like focus, enthusiasm, and productivity.
What would life be like without color? One may have to imagine a dull and banal image to understand the importance of colors in our day to day life.
And it’s not just the lack of hue one would have to contend with because the color does more than add beauty.
Color Communicates
Traffic lights and red danger marks readily come to mind. But these are very basic. It is intriguing how massive a role color plays in everyday communication, often without our being aware of it.
The choice of color is, therefore, very important in our environment and in most human settings. In fact, the effect colors have on humans is being harnessed to treat certain conditions through a procedure known as chromotherapy.
Likewise, color can be used to influence the mood and productivity of e-learning participants. The choice of color is a part of the broader aspect of e-learning design known as e-learning art.
Colors Are Important For E-learning Art
The colors used for the background, elements, and other images in an e-learning module do matter. As you will see later on, there are colors that are scientifically proven to motivate the learner in very specific ways.
When used rightly, color combinations can help the learner stay focused during an e-learning program and ultimately to achieve the learning objectives.
Poor colors for e-learning art, on the other hand, can be distracting, annoying, and depressing and can ruin the experience for the learner.
Here are some interesting points to keep in mind when choosing colors for your e-learning art.
Use Green For Concentration
Green — the color of life — is often associated with balance and calmness. Scientifically, it has a low wavelength which promotes focus and efficiency.
In e-learning design, green can be used to boost the concentration levels of learners especially in modules that emphasize clarity and where learners need to avoid errors.
Green is also proven to have restorative properties, so it is a good practice to include green in the e-learning art for modules that you sense would be exhausting.
Build Enthusiasm With Orange
Orange — the color of the sunset — is associated with pleasure and enthusiasm. This is the perfect color to brighten the mood of your learners.
When used for e-learning courses, orange-colored art can ease learners into new modules or new learning activities.
Foster Productivity With Blue
Blue is often used in corporate settings and academic circles because of its ties to productivity. The e-learning designer can also apply blue to help learners achieve more.
Blue-colored e-learning art is also important because the color triggers motivation for knowledge and communication.
While these specific color tips are very useful, it is important to control the overall combination of colors used throughout the e-learning course.
Monochromatic palettes, though effective for singular ideas, can get boring, while warm, enthusiastic colors can overstimulate the learner and hinder focus.
|
https://medium.com/@wizcabin/e-learning-art-improve-e-learning-experience-with-colors-b6048f69b347
|
['Naveen Neelakandan']
|
2020-10-19 16:20:31.402000+00:00
|
['Design', 'Elearning', 'Colors', 'Art', 'Technology']
|
2,016 |
Oompaville is offering a NordVPN discount — react to family friendly memes in a safe manner
|
Oompaville is offering a NordVPN discount — react to family friendly memes in a safe manner
Oompaville is the YouTuber that makes content about how he lost his virgin innocence in Minecraft VR. He may seem like an ordinary YouTube shitposter, a cheap PewDiePie knock-off, but the more you watch his content, the more you understand that he is a bright-minded, modest, and genuinely funny person. In fact, he's so nice that he's even offering a huge NordVPN discount for everyone.
How to get a NordVPN discount from Oompaville?
Oompaville’s NordVPN 3-year subscription 70% discount
Once you click on the discount link, you will be redirected to the official NordVPN website, where you will be presented with a greeting page. After clicking the 'Get 70% off' button, you will see that the discount code is pre-applied to your purchase, and all you need to do is enter your payment details.
What do you get from NordVPN?
Freedom online — this is what you get after connecting to a VPN server. You can change your public IP address and do anything online safely, even when you're connected to a public Wi-Fi network, because NordVPN uses AES, a military-grade encryption standard that is practically impossible to crack. In addition, you can bypass many geo-restrictions, which is great if you're travelling in countries where the Internet is censored, such as China or Turkey. A friendly customer support service, available 24/7, will answer any questions you have in a matter of seconds — you will never feel exposed and unsafe on the Internet again.
Never heard of Oompaville?
Also known as Papa Oompsie, he is a YouTube gamer best known for his Oompaville channel. He is known for his in-depth gameplay highlight and tutorial videos, which are often intertwined with his comedic commentary. He created his Oompaville YouTube channel on November 19, 2010 and posted his first video 4 years later. Since the creation of his YouTube channel, he has managed to gain more than 1,000,000 subscribers.
Papa Oompsie isn’t the only YouTuber that chose NordVPN to protect himself online. Anthony Fantano and Ashley Hardell chose NordVPN too.
Why NordVPN?
Here are a few more reasons why Oompaville/Papa Oompsie chose NordVPN:
NordVPN does not log your browsing activity. This VPN provider is not greedy and does not collect your data in order to study or resell it. By operating from Panama, where no mandatory data retention laws exist, they can afford the luxury of not snooping on their customers' browsing activity.
There are more than 5,000 servers located in 62 countries, so you will never face a shortage of available connection spots.
Your connection speed will remain almost the same, since NordVPN does not limit your bandwidth — you can play online video games and stream movies without connectivity issues, even though your traffic is protected by strong encryption.
An inbuilt ad blocker called CyberSec will protect you from malicious adverts.
|
https://medium.com/@ascerdonsnoah/oompaville-nordvpn-discount-coupon-offer-387624a036aa
|
['Noah Ascerdon']
|
2019-09-30 09:59:20.743000+00:00
|
['Cybersecurity', 'YouTube', 'Privacy', 'Technology', 'VPN']
|
2,017 |
Azure: What Is It and Why Do I Need to Learn About Cloud Computing?
|
|
https://medium.com/flux-it-thoughts/azure-qu%C3%A9-es-y-por-qu%C3%A9-tengo-que-aprender-acerca-de-cloud-computing-3105c502de42
|
['Demian Sclausero']
|
2020-12-17 14:09:06.216000+00:00
|
['Kubernetes', 'Docker', 'Technology', 'Cloud Computing', 'Azure']
|
2,018 |
Create and Save PDF using Rotativa in MVC Application
|
In today's article, we will look into converting an HTML page into a PDF using Rotativa, and also into saving that file to a particular location.
Rotativa is an open-source package available in the NuGet Package Manager with which we can easily generate PDF files. Rotativa uses the WebKit engine, which the Chrome browser also uses, to render the HTML. Most HTML tags and styles are supported by this framework. The Rotativa framework provides the Rotativa namespace, which contains the following classes:
i) ActionAsPdf — Accepts the action name as a string parameter so that the action's result can be converted into a PDF; we can also pass an additional parameter containing route values.
ii) PartialViewAsPdf — Returns partial view as PDF.
iii) UrlAsPdf — It enables us to return any URL as a PDF.
iv) ViewAsPdf — Returns the view result as a PDF instead of an HTML response.
Follow the below steps to use Rotativa library in your MVC project to create PDF files,
Step 1) Create an empty ASP.NET MVC project
Go to Visual Studio -> Create new project -> Project name -> MVC template -> Create
Step 2) Install Rotativa library in MVC project
Project solution -> Right click on project solution -> Manage NuGet Packages -> Search Rotativa -> Install
Step 3) Writing method for PDF generation using ActionAsPdf
Source Code –
View:
<div style="text-align:right;padding-top:20px!important;">
    @Html.ActionLink("Print About Page", "PrintAboutPage")
</div>
<div>
    <h2>@ViewBag.Title.</h2>
    <h3>@ViewBag.Message</h3>
    <p>Use this area to provide additional information.</p>
</div>
Controller:
public ActionResult About()
{
    ViewBag.Message = "Your application description page.";
    return View();
}

public ActionResult PrintAboutPage()
{
    var report = new Rotativa.ActionAsPdf("About");
    return report;
}
Output Screen –
Step 4) Writing method for PDF generation using ActionAsPdf with parameter
Source Code –
View:
<div style="text-align:right;padding-top:20px!important;">
    @Html.ActionLink("Print Contact Page", "PrintContactPage")
</div>
<div>
    <h2>@ViewBag.Title.</h2>
    <h3>@ViewBag.Message</h3>
</div>
Controller:
public ActionResult PrintContactPage()
{
    var report = new Rotativa.ActionAsPdf("Contact", new { name = "Vaibhav" });
    return report;
}
Output –
Step 5) Writing method for PDF generation using ActionAsPdf and saving that file
Source Code –
View:
<div style="text-align:right;padding-top:20px!important;">
    @Html.ActionLink("Print About Page", "PrintAboutPage")
</div>
<div style="text-align:right;padding-top:20px!important;">
    @Html.ActionLink("Save About Page", "SaveAboutPage")
</div>
<div>
    <h2>@ViewBag.Title.</h2>
    <h3>@ViewBag.Message</h3>
    <p>Use this area to provide additional information.</p>
</div>
Controller:
public ActionResult SaveAboutPage()
{
    string fileName = "Test.pdf";
    string fullPath = @"C:\Users\Vaibhav\Documents\RotativaPdf\" + fileName;

    var report = new Rotativa.ActionAsPdf("About")
    {
        PageOrientation = Rotativa.Options.Orientation.Portrait,
        PageSize = Rotativa.Options.Size.A4,
        PageMargins = new Margins(0, 0, 0, 0),
    };

    if (!System.IO.File.Exists(fullPath))
    {
        var byteArray = report.BuildPdf(ControllerContext);
        var fileStream = new FileStream(fullPath, FileMode.Create, FileAccess.Write);
        fileStream.Write(byteArray, 0, byteArray.Length);
        fileStream.Close();
    }

    return report;
}
Output-
Thank You, See you in the next article !!
|
https://medium.com/@vaibhavbhapkarblogs/create-and-save-pdf-using-rotativa-in-mvc-application-f341febeb28c
|
['Vaibhav Bhapkar']
|
2021-03-21 16:20:37.812000+00:00
|
['Programming', 'Mvc', 'Dotnet', 'C Sharp Programming', 'Technology']
|
2,019 |
How to remove an element from an array in JavaScript
|
How to remove an element from an array in JavaScript
Two ways to quickly remove an element from an array in JavaScript
So you want to remove an item from an array in JavaScript?
Well, you’re not alone!
It's one of the most upvoted JavaScript questions on Stack Overflow, and it can feel a little unnatural considering how simple it is to array.push().
Method 1 — remove an element with Array.filter()
If you want to remove an element while leaving the original array intact (unmutated) then filter() is a good choice!
Removing a single element:
const itemToRemove = 3
const originalArray = [2, 51, 3, 44]
const newArray = originalArray.filter(item => item !== itemToRemove)

console.log(newArray) // [2, 51, 44]
console.log(originalArray) // [2, 51, 3, 44]
Breaking that down.
We define a variable newArray and set it equal to the return value of our array filter. Inside of our filter, we pass an ECMAScript 6 arrow function.
This function tests each item in the array and returns:
|
https://medium.com/javascript-in-plain-english/how-to-remove-an-element-from-an-array-in-javascript-54612785295e
|
['Kitson Broadhurst']
|
2020-06-05 05:22:54.477000+00:00
|
['JavaScript', 'Programming', 'Arrays', 'Technology', 'Software Engineering']
|
2,020 |
Skydio and Arris Revolutionize Drone Design and Manufacturing
|
Silicon Valley innovators accelerate next-gen UAVs fusing AI-powered autonomy with breakthrough composites manufacturing
San Francisco, CA, December 17, 2020 — Skydio, a leading U.S. drone manufacturer and world leader in autonomous flight technology, and Arris, a leader in advanced manufacturing of high-performance products, have redefined airframe design leveraging Additive Molding™, Arris’s breakthrough carbon fiber manufacturing technology. Starting with the new Skydio X2 drone, enterprise, public sector and defense customers will benefit from lighter, longer-range, and more robust aircraft structures at scale. The collaboration has resulted in the first-of-its-kind production use of Arris’s technology in the UAV (unmanned aerial vehicle) industry, further extending Skydio’s technology leadership and enabling game-changing advantages:
Advanced airframe design with component consolidation, allowing Skydio to replace a 17-part assembly with a single, multi-functional structure
Strength and stiffness of titanium at a fraction of the weight, enabling the Skydio X2 to increase range and speed
Optimized carbon and glass fiber layout based on the functional requirements of individual regions of the airframe
Scalable US-based manufacturing and innovation to bring peak aerospace performance at lower cost
“We are excited about the value that our partnership with Arris will bring to our customers. At Skydio, we pursue cutting edge innovation across all facets of drone technology. The unique properties of Arris’s Additive Molding carbon fiber allows us to optimize the strength, weight, and radio signal transparency of the Skydio X2 airframe to deliver a highly reliable solution that meets the needs of demanding enterprise, public safety and defense use cases” says Adam Bry, Skydio’s CEO.
Skydio X2 is Skydio’s latest autonomous drone solution for enterprise, public sector and defense. X2 pairs Skydio’s breakthrough autonomy software with a rugged, foldable airframe for easy “pack and go” transportation, and up to 35 minutes of flight time. The X2 airframe will include a newly designed core structural element manufactured with Arris’s Additive Molding™ technology. Arris’s first-of-its-kind Additive Molding leverages 3D-aligned continuous fiber composite materials for complex shapes where material composition can change within regions of a single part. As a result, Skydio has been able to use a single carbon fiber component with the structural results that would have otherwise required 17 parts.
“The evolution of aerospace design has been punctuated by breakthroughs in manufacturing and materials. Such a moment has come where manufacturing of optimized structures has converged with composite materials ideals to unlock previously impossible, high-performance aerospace designs,” says Ethan Escowitz, founder and CEO of Arris. “While we’re working with leading aerospace manufacturers to improve aircraft performance, sustainability and costs; Skydio’s culture and market have enabled an unsurpassed pace of innovation that has fast-tracked this transformation to deliver the next-generation of aerostructures. It’s simply amazing to see such a revolutionary product broadly available and flying today.”
Skydio X2 is the ultimate solution for a wide range of use cases, including situational awareness, asset inspection, and security and patrol. Designed, assembled, and supported in the USA, Skydio X2 is NDAA compliant and has been selected as a trusted UAV solution for the US Department of Defense as part of DIU's Blue sUAS program. The partnership with Arris further validates Skydio's commitment to innovation, supply chain security, and US-based manufacturing.
Resources
Skydio Announcement Blog — Skydio Adds Arris’s First of its Kind Additive Molding Composites to the X2 Platform
Arris Announcement Blog — Arris & Skydio Redefine Aerospace Design to Achieve Ultimate Lightweight Performance
About Skydio
Skydio is the leading U.S. drone manufacturer and world leader in autonomous flight. Skydio leverages breakthrough AI to create the world’s most intelligent flying machines for use by consumers, enterprises, and government customers. Founded in 2014, Skydio is made up of leading experts in AI, robotics, cameras, and electric vehicles from top companies, research labs, and universities from around the world. Skydio designs, assembles, and supports its products in the U.S. from its headquarters in Redwood City, CA, to offer the highest standards of supply chain and manufacturing security. Skydio is trusted by leading enterprises across a wide range of industry sectors and is backed by top investors and strategic partners including Andreessen Horowitz, Levitate Capital, Next47, IVP, Playground, and NVIDIA.
About Arris
Arris is a California-based technology company enabling the design and manufacture of the highest-performance products at scale. Arris’s Additive Molding™ is a high-speed composites manufacturing technology combining continuous aligned fibers and electronic components within topology-optimized structures. Arris partners with the world’s most innovative companies to help them imagine, design, and manufacture lighter, faster, stronger, and more intelligent products. Learn more at www.arriscomposites.com
Arris Media Contact:
LMGPR
Donna Loughlin Michaels
[email protected]
408.393.5575

Skydio Media Contact:
Aircover Communications
Morgan Mason
[email protected]
|
https://medium.com/@skydio/skydio-and-arris-revolutionize-drone-design-and-manufacturing-ef63d89459a5
|
[]
|
2020-12-21 19:17:58.152000+00:00
|
['Technology', 'Tech', 'Autonomous Vehicles', 'Composite Materials', 'Drones']
|
2,021 |
What is correlation?
|
What is correlation?
Not causation.
Experiments allow you to talk about cause and effect. Without them, all you have is correlation. What is correlation?
IT’S NOT CAUSATION. (!!!!!)
Sure, you’ve probably already heard us statisticians yelling that at you. But what is correlation? It’s when the variables in a dataset look like they’re moving together in some way.
Two variables X and Y are correlated if they seem to be moving together in some way.
For example, “when X is higher, Y tends to be higher” (this is called positive correlation) or “when X is higher, Y tends to be lower” (this is called negative correlation).
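(The population correlation formula being referenced is ρ(X, Y) = cov(X, Y) / (σ_X σ_Y): the covariance of the two variables scaled by both of their standard deviations.)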
Thanks, Wikipedia.
If you’re looking for the formula for (population) correlation, your friend Wikipedia has everything you need. But if you wanted that, why didn’t you go there straight away? Why are you here? Ah, you want the intuitive explanation? Cool. Here’s a hill:
On the left, height and (left-to-right) distance are positively correlated. When one goes up, so does the other. On the right, height and distance are negatively correlated.
When most people hear the word correlation, they tend to think of perfect linear correlation: taking a horizontal step (X) to the right on the hill above gets you the same change in altitude (Y) everywhere on the same slope. As long as you’re going up from left to right (positive correlation), there are no surprise jagged/curved bits.
Bear in mind that going up is positive only if you’re hiking left-to-right, same way as you read English. If you approach hills from the right, statisticians won’t know what to do with you. I suppose what statisticians are trying to tell you is never to approach a hike from the right. That will only confuse us.
But if you hike properly, then “up” is “positive.”
Imperfect linear correlation
In reality, this hill is not perfect, so the correlation magnitude between height and distance will be less than 100%. (You’ll pop a +/- sign in front depending on whether we’re going up or down, so correlation lives between -1 and 1. That’s because its formula (pasted from Wikipedia above) divides by standard deviation, thereby removing the magnitude of each variable’s dispersion. Without that denominator, you’d struggle to see that the strength of the relationship is the same regardless of whether you measure height in inches or centimetres. Whenever you see scaling/normalization in statistics, it’s usually there to help you compare apples and oranges that were measured in different units.)
Uncorrelated variables
What does a correlation of zero look like? Are you thinking of a messy cloud with no discernible patterns inside? Something like:
Sure, that works. You know how I know X and Y truly have nothing to do with one another? Because I created them that way. If you want to simulate a similar plot of two uncorrelated variables, try running this basic code snippet in R online:
X <- runif(100) # 100 regular random numbers between 0 and 1
Y <- rnorm(100) # Another 100 random numbers from bell curve
plot(X, Y, main = "X and Y have nothing to do with one another")
But there’s another way. The less linear the relationship, the closer your correlation is to zero. In fact, if you look at the hill as a whole (not just one of its slopes at a time), you’ll find a zero correlation even though there’s a clear relationship between height and distance (duh, it’s a hill).
X <- seq(-1, 1, 0.01) # Go from -1 to 1 in increments of 0.01
Y <- -X^2 # Secret formula for the ideal hill
plot(X, Y, main = "The linear correlation is zero")
print(cor(X, Y)) # Check the correlation is zero
Correlation is not causation
The presence of a linear correlation means that data move together in a somewhat linear fashion. It does not mean that X causes Y (or the other way around). They might both be moving due to something else entirely.
Want proof of this? Imagine you and I invested in the same stock. Let’s call it ZOOM, because I find it hilarious that pandemic investors intended to buy ZM (the video communications company) but accidentally bought ZOOM (the Chinese micro-cap) instead, leading to a 900% increase in the price of the wrong Zoom, while the real ZM didn’t even double. *wipes away laugh-tears* Anyways — in honor of that comedy — imagine that you and I invested a small amount in ZOOM.
Since we’re both holding ZOOM, the value of your stock portfolio ($X) is correlated with my stock portfolio value ($Y). If ZOOM goes up, we both profit. That does not mean that my portfolio’s value causes your portfolio’s value. I cannot dump all my stock in a way that punishes you — if my portfolio value suddenly becomes zero because I sell everything to buy a pile of cupcakes, that doesn’t mean that yours is now worthless.
Many decision-makers fall flat on their faces for precisely this reason. Seeing two correlated variables, they invest resources in affecting thing 1 to try to move thing 2… and the results are not what they expect. Without an experiment, they had no business assuming that thing 1 drives thing 2 in the first place.
Correlation is not causation.
The lovely term "spurious correlation" refers to the situation where there's no direct causal relationship between two correlated variables. Their correlation might be due to coincidence or due to the effect of a third (usually unseen, a.k.a. "latent") variable that influences both. Never take correlation at face value — in data, things often aren't what they seem.
For fun with spurious correlations, check out the website this prime example hails from.
To summarize, if you want to talk about causes and effects, you need a (real!) experiment. Without experiments, all you have is correlation and for many decisions — the ones based on causal reasoning — that is not helpful.
P.S. What is regression?
It’s putting lines through stuff. Think of it as, “Oh, hey! These things are correlated, so let’s use one to predict the other…”
|
https://towardsdatascience.com/what-is-correlation-975ea899aaed
|
['Cassie Kozyrkov']
|
2020-07-13 11:57:05.033000+00:00
|
['Towards Data Science', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Technology']
|
2,022 |
4 Ways of the Wordsmith: How to Be Happy and Successful Now
|
Since we will never know the inner workings of the curation system, it leaves only one choice — figure out how to write better.
For the last two weeks, I’ve made a sincere effort to read articles on how to become a better writer.
While most of the posts were filled with generalities, hollow inspirational talk, a few quotes from famous writers — and the ever present whisper to subscribe here, or enroll in this class here, or join an online seminar here — I did find something of substance.
Here's a real-world experiment which is so cool I'm going to interrupt my smart phone-friendly writing style. This is the first week of that series, written by an established professional writer who opens a secret account and starts out as an unknown, new writer on Medium:
As I followed the weekly progress, I was shocked by the matter-of-fact tone of the list of things she did, which included the bullet point, "I wrote seven posts (four of them were curated)."
In the third week, she wrote, “I wrote seven posts. (Six of them were curated!).”
At least she seemed excited about this impossibly high percentage rate of curated articles.
I had to make a comment with some skepticism. She was gracious enough to respond, so we had a brief conversation where she told me that her normal curation rate is 70–80%.
OMG, Jesus walked on water, and he only got curated 41.6% of the time! FYI, it was Matthew, Mark, Luke, John, and Paul (with Ringo supplying an awesome syncopated back beat).
Here's why I found her to be one of the few voices in Medium's online coaching world that I don't dismiss as just another shyster content marketer:
1. She takes the perspective of diving into process to break down why things work, instead of simply quoting someone famous and then babbling on in generalities. I just clicked randomly on one day in her 30-day writing challenge and found this explanation of reading like a writer to figure out why things work. It's something I've done for years, but I haven't seen any other writing expert talk about this perspective. Do a search for "reading intensely" and you'll see. Just don't go down to the bottom of the search page.
2. She creates her own publications and doesn't try to glom onto the big Medium publications to build her readership. Because of that, she doesn't have one of those "best story picked by our editors" badges. She's getting her stories curated the old fashioned way: "she earns it."
3. She doesn't give away all the secret sauce for free, but she provides a closer look at the real process of becoming successful on Medium than anyone else I've found. Check out her experiment: "I blogged every day for a month. Here's what happened." It contained some real gems, including a link to the #1 Free Headline Analyzer. This is absolutely crazy. It's not a perfect tool, but it is thought provoking and proved to be a lot of fun.
Since she is getting her stories curated every week without submitting them to Medium publications, she has to be doing something right.
|
https://medium.com/the-word-is-not-enough/ways-of-the-wordsmith-how-to-be-happy-and-successful-now-ac449788f588
|
['Lon Shapiro']
|
2020-07-16 08:37:48.183000+00:00
|
['Technology', 'Humor', 'Writing', 'Social Media', 'Médium']
|
2,023 |
Getting Started with Xperi.NZ. Welcome once again to our new blog for…
|
Welcome once again to our new blog for the day, where we’re going to talk about the very force that drives this page’s purpose — Xperi.nZ.
Xperi.nZ, an AR-driven modeling solution designed and developed by Xplorazzi Technologies, is a marketing advancement that aims to take modern sales to new heights by creating 3D models of the products our valuable clients bring to us. But we're here today to let our new and upcoming clients know how they can switch lanes to the modern era of sales development using our redefined AR solutions @ Xperi.nZ! Here are a few detailed steps on how to reach us, either from our very own website — xperi.nz — or via our social media handles such as Instagram, Twitter, Facebook, and many more. Let's get started!
The main website where we promote our services in detail, handle our usual day-to-day customer support, and accept new orders is none other than xperi.nz. Sounds familiar, doesn't it? Here, we have a few sections for you to explore as you make a move towards the new age of augmented sales, or as we call it, the "Surreal Sales Approach."
· As you enter xperi.nz, you'll find the XPERI.NZ page load up in your browser, which looks like this.
Over here, you can find a few options to check out. At the top of the page, you can find the options "Sample Models" and "Request for Demo", which largely speak for themselves. Sample Models allows you to see some of the best-in-class 3D models that we have previously developed for our valuable clients and understand how they work right away. Request for Demo, meanwhile, gives you a chance to see a 3D model of your very own product along with our top-tier services. This trial is a small token of our commitment to you that your product's sales can get a boost under the "Surreal Sales Approach" scheme.
· As you scroll down, you can find the working of our Xperi.nZ solutions in brief.
These brief notes on the underlying technical steps show how we @ Xperi.nZ work behind the scenes to get the best out of your product using our AR mechanism. Feel free to take the technological tour!
· Scrolling down further, you are entitled to claim your trial services from Xperi.nZ by simply entering your contact details. You are just a step away from entering the “Surreal Sales Approach”!
Feel free to reach out to us after you've seen our services on the website, either by using the contact details given there or by simply dropping by our social handles — Instagram, Facebook, Twitter, and more! Email us at [email protected] for sign-up requests.
“Xperi.nZ life to the fullest!”
|
https://medium.com/xperi-nz/further-into-xperi-nz-b64545980c07
|
[]
|
2020-11-30 15:27:16.875000+00:00
|
['3d', 'Information Technology', 'AR', 'Cameras', 'Technology']
|
2,024 |
SOLVED: Brother Printer Printing Blank Pages — Brother Printer UK
|
Is your printer refusing to print pages for you? This can happen to anyone, and we have heard many complaints about Brother printers printing blank pages.
In this article, we are going to cover possible solutions for you so that your Brother printer will start working like before.
Steps to Solve Brother Printer Printing Blank Pages Issue:
1. Most people forget to refill the ink in the cartridge, and that is why the printer stops printing pages. Make sure that you have enough ink to print a page.
2. Try switching off your computer and printer for 2 to 3 minutes, and then try again.
3. Check whether the problem is in your printer or in your driver. To do that, make a copy of a document directly on the printer. If the copy prints well, the printer itself is fine, so you need to reinstall your Brother printer driver.
Other important reasons for Brother Printer Printing Blank Pages Issue
Cartridge Issue: No matter whether it's a Brother printer or not, a cartridge issue is one of the most common reasons behind printing blank pages. Make sure that all the ink levels in your printer are up to the mark. If not, refill them immediately for perfect printing. If they are already filled, try reconnecting the cartridge.
Communication Issue: Whenever there is a Brother printer offline issue, we all start imagining that our printer needs replacement, but many times the issue is with communication. What you can do is reconnect your router or move it nearer to the printer.
Hardware Issue: If, after following the above steps correctly, you find that your printer is still not working, then, my friend, it's an issue with the hardware. Maybe your print head or your cartridge needs to be replaced.
Above we discussed cartridge issues; now let's learn something about the print head. After some time, print heads stop working properly as they get clogged. Almost every good printer has a feature to clean them automatically, but you can also check them manually by taking the print head out of the printer.
Hopefully, you can resolve the problem with the help of the above methods, but if your printer is still printing blank pages, then you can try one more technique.
Try to copy a page on the printer; if it can do so, then the issue is with your printer settings. In that case, you have to reset the print spooler. If you are still fighting the same problem, then reinstall your printer on your computer, and you should find that the Brother printer printing blank pages issue is resolved.
If you’re unable to fix any issue about your brother printer then don’t get worried. Just grab your phone and dial Brother Printer UK helpline number: USA/Canada: +1–888–480–0288 & UK: +44–800–041–8324.
|
https://medium.com/@mathersmarshall728/brother-printer-printing-blank-pages-brother-printer-uk-f4d61aaca23d
|
['Brother Printer Uk']
|
2020-03-17 10:03:06.026000+00:00
|
['Printers', 'Tech', 'Internet', 'Technology', 'Computers']
|
2,025 |
CLG Partners With InfinityPad
|
We are happy to announce our partnership with InfinityPad, which will support us in becoming one of the world's first blockchain-backed gaming portals, helping the eSports industry tackle major problems such as insecurity of funds, fraud, the absence of a dedicated portal, and the lack of secure financial compensation for players and teams.
InfinityPad is a revolutionary launchpad that helps raise capital across multiple blockchains on a single platform in a fully transparent and decentralized way. It is a one-stop solution for both projects launching IDOs and investors holding $INFP tokens. Taking the lead, InfinityPad is the first launchpad to introduce guaranteed time-weighted allocation: the longer you're in the InfinityPad pool and the more INFP you hold, the bigger the allocation you will get. This in turn eliminates the need for the gas wars that happen on most launchpads in the blockchain space.
Lately, CLG has secured a seed investment and is working closely with its team to accelerate the development of its eSports-only NFT marketplace and decentralized application. The team has built an extensive community of developers and partners to move forward with the project's development.
In order to make our upcoming IDO a success, team CLG is here to clear up any confusion our investors might have. For now, the IDO date and the participation process are yet to be announced. Make sure not to miss this golden opportunity, and follow all our social channels to stay updated on the happenings around CLG.
|
https://medium.com/@cryptoleaguegaming/clg-partners-with-infinitypad-6979da319aa3
|
['Clg - Crypto League Gaming']
|
2021-11-28 00:43:01.299000+00:00
|
['Smart Contracts', 'Gaming', 'Esport', 'Cryptocurrency', 'Blockchain Technology']
|
2,026 |
What I’ve Learned as a Research Scholar at UC Berkeley Blockchain Lab
|
Hong Kong
The People
From my experience, the most prominent characteristic of the US and Taiwan communities is the supportive, willing-to-share-knowledge-without-acting-superior atmosphere. Especially in the developer communities, people are very open-minded when it comes to discussing current development trends and interesting projects in the blockchain field. Also, in the US, you can easily meet people traveling from around the world just to attend a global event. "Where are you from?" thus became the usual ice-breaking question. Similarly, as people there are very used to grouping up with someone they just met, it is definitely the greatest place to meet like-minded developers at hackathons. :)
In comparison, the blockchain communities in Hong Kong didn't feel as supportive, though maybe that was because I don't speak Cantonese. What I saw were more business-oriented people at large conferences who were comparatively less active in the developer communities. It is easy to tell from the dress code: people wear suits instead of the t-shirts and slippers common in the US. They are also more interested in the business model than in the technology itself.
Us sharing the top hacks on blockchain at ABC Blockchain Community XD
Building Valuable Connections
Another interesting fact, thanks to the open-minded atmosphere in the US, is that it is relatively easy to grab a coffee and exchange ideas with someone you just found on LinkedIn. My personal feeling is that the quality of the people I met there is very high, but this is usually not the case in Hong Kong. Take my story as an example: when I first arrived at Berkeley, I added lots of Berkeley students and blockchain enthusiasts on LinkedIn. Half of them messaged me back and wanted to meet and see how we could help each other. They introduced and recommended my co-founder and me to speak at BASF, ABC Blockchain Community, and Starfish Mission, just to name a few. In exchange, we advise them on the ecosystem in East Asia, as we have been doing blockchain consulting in Hong Kong, Taiwan, Macau, and China since mid-2018.
"Most people in the US are very interested in entering the Asian market. Leveraging this, we can easily exchange for the resources we want. :)"
In Taiwan, it is a little harder to make new connections if you don't have friends in common. Cold emails and LinkedIn messages just don't work here; people more often use Facebook to connect with each other. My recommendation is to attend local meetups and add the people you meet there on Facebook. It is also a plus to share things you're proud of on FB to raise others' interest in connecting with you. :P
Feel free to add me if you need help connecting! :) https://www.facebook.com/lee.ting.ting.tina
Breaking the Rules — The Common Key to Success in These Places
Be Proactive: Reach out to people you think you can help and who can help you. Opportunities won't find you if you don't take the first step.
Ask Fearlessly: This is one of the most vital steps to really get the opportunity. For example, the key moment my co-founder and I got invited to Berkeley was when we asked the director of the Xcelerator directly: "Studying in the US is our life dream. Is there any way for us to have a chance to contribute to Berkeley and the lab?" You'd be surprised how willing people are to help if you just ask bravely!
Help and Share Whenever You Can: The principle behind resource exchange is "How helpful you are to others determines how others will help you." For example, one of my approaches is to share insights at local meetups. Though some events don't pay speakers, you get a chance to show how you can be helpful in a specific domain. Writing articles (like what I'm doing right now) is also a way. After all, it is impossible to exchange with nothing on hand. Keep this in mind. ;)
Me speaking at Stanford’s Cafe Philosophy :D
Wrap up
Now you have a brief overview of how the blockchain communities in the US, Taiwan, and Hong Kong differ from each other, including the lists of meetups and events, the cultural backgrounds, the people's characteristics, and the easiest ways to obtain resources from the different local communities.
Hope you like it and feel free to let me know if I missed anything.
Thank you for your time reading! :)
|
https://medium.com/swlh/what-ive-learned-as-a-research-scholar-at-uc-berkeley-blockchain-lab-89fa28399647
|
['李婷婷 Lee Ting Ting']
|
2020-11-29 17:22:54.685000+00:00
|
['Blockchain', 'Entrepreneurship', 'Technology', 'Life Lessons', 'Startup']
|
2,027 |
Build a PWA Using Only Vanilla JavaScript
|
Progressive Web App (PWA)
“A Progressive Web App (PWA) is a web app that uses modern web capabilities to deliver a native app-like experience to users. These apps meet certain requirements (see below), are deployed to servers, accessible through URLs, and indexed by search engines.”
A Progressive Web App (PWA) works like any normal app but with a lot of added features and a lot less hassle. PWAs are fast, reliable, and can work perfectly in an offline environment.
Why should we use it?
Progressive Web Apps (PWAs) create a very rich experience for users because they are:
Responsive
A PWA can be built to fit into a desktop browser, mobile phone, or TV screen: any product that has an internet connection and browser support.
Reliable
It uses a technology called a Service Worker, which enables users to load PWAs instantly in their environment. A PWA can provide offline support for the application, so the user won't face network-related issues.
No App Store/Play Store
Users don't need to visit an app store to download these Progressive Web Apps. They can be installed instantly and directly from the browser. There is no waiting time, as they are very quick and feel like native applications.
Engaging for developers as well as users
Developers can also add/play around with tons of features in the manifest files. One of the most well-known features is re-engaging users with push notifications enabled by the PWA.
Easy to share
Progressive Web Apps are very easy to share with your friends or colleagues. All that a user needs to share is the website/app URL. Users don’t need to share an installable apk or go through the process of verification followed by downloading tons of files. All that a user requires is a simple click.
To learn more about Progressive Web Apps visit this link by Google Developers
Creating a Basic PWA
In this tutorial, we are going to build a PWA using only vanilla JavaScript but to do that we’ll need to make a normal Web App first.
Before proceeding any further, let’s have a look at what our final UI would look like and the functionality we are trying to achieve.
Final Project UI
Final Project UI Overview
The UI will display colored boxes in the middle, and clicking on the boxes will play short music clips.
Similarly, each box produces different musical clips. The concept of this website is to mix different music and create your own while playing around with it.
GitHub Repository
All the files related to this project are present here: https://github.com/S-ayanide/MixCentro
During this tutorial, you’ll need to download certain assets which are available in the GitHub repo. If you want to create, change, or modify something, I’d suggest you do that once you’ve completed this tutorial.
Web App HTML
The HTML for this project will be very simple. We’ll be needing divisions for individual colorful pads as displayed and the audio for each one.
Let’s take a look at this code snippet:
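A minimal sketch of what this markup might look like, with illustrative class names and sound file paths (the real project's names may differ):

<body>
  <h1>MixCentro</h1>
  <div class="pads">
    <div class="pad-top">
      <div class="pad red"><audio class="sounds" src="sounds/clip1.mp3"></audio></div>
      <div class="pad yellow"><audio class="sounds" src="sounds/clip2.mp3"></audio></div>
      <div class="pad blue"><audio class="sounds" src="sounds/clip3.mp3"></audio></div>
    </div>
    <div class="pad-bottom">
      <div class="pad green"><audio class="sounds" src="sounds/clip4.mp3"></audio></div>
      <div class="pad purple"><audio class="sounds" src="sounds/clip5.mp3"></audio></div>
      <div class="pad orange"><audio class="sounds" src="sounds/clip6.mp3"></audio></div>
    </div>
  </div>
  <script src="index.js"></script>
</body>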
All we are doing here is naming our title and header as MixCentro as that is the name we’ve chosen for this website (feel free to choose your own).
You’ll be needing sounds for this project to work, go ahead and download that from the GitHub repository mentioned above which contains all the sound files.
We've created a main division, "pads", which contains "pad-top" and "pad-bottom". These do nothing but create the pads we see in the UI, divided into two rows of 3 pads each.
The top division is termed "pad-top", with three pads each holding different audio. Similarly, the bottom division is termed "pad-bottom" and consists of three pads as well.
Although style.css and index.js have been imported, we aren’t using them as of now.
Web App CSS
Now we can build a stylesheet at the root of the directory. I’ve called it style.css .
By default, we get a margin and padding at the sides of our screen; in our case we don't want that. So let's manually remove any padding and margins that have been added by default.
We've added headers in our HTML file, but the page has no background yet, so we need to make the website look attractive and subtle. To do that, let's upgrade the font styling and add a background that goes well with the theme.
In the GitHub repository mentioned above you can find an image already there for you in the path images/bg/background.jpg This is the same image as the one used in the UI preview.
Importing Google Fonts
To move ahead further we’ll need to choose a nice and subtle font for our website. To choose from a wide variety of fonts we’d be using Google Fonts.
This might take a while since everyone’s choices and tastes are different. Choose one font and click on the ‘+’ button at the top right of the selected font.
Once you click on that you’ll find a black bar appear on your screen which says “1 Family Selected”, upon clicking on that bar it expands and lets you see something similar to this.
The details might be a little different as it depends on the font you’ve selected but the rest remains the same. We’ll be using the standard way of importing the font here, so let’s go ahead and copy the whole <link href … > provided in the grey box.
To use the font in effect open your index.html file and paste this link anywhere between the <head> tags.
Adding Font
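Assuming the 'Lexend Exa' font used later in this tutorial, the copied link looks something like this:

<link href="https://fonts.googleapis.com/css?family=Lexend+Exa&display=swap" rel="stylesheet">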
More Styling
After importing our font, it’s time to reflect that on the main browser as well
CSS Styling
In my case, I've used the font-family "Lexend Exa" and also added the background image mentioned earlier. To maintain a continuous, evenly spaced layout, we're using flexbox with the contents justified using the space-between property.
We also keep a class called "pads" with a width of 60%, so that it takes up only a little more than half the screen, keeping the layout tight and gentle-looking rather than over-expanded.
The divisions inside the "pad-top" and "pad-bottom" classes are targeted so that they grow to their full width and height, and their flex property is set to 1 so that they share the row evenly.
We finally assign different hex colors for all of our pads.
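Pulling that together, a sketch of style.css consistent with the description above (sizes and hex values are illustrative, not the project's exact ones):

body {
  margin: 0;                               /* strip the default spacing */
  padding: 0;
  font-family: 'Lexend Exa', sans-serif;
  background: url('images/bg/background.jpg') no-repeat center / cover;
}

.pads {
  width: 60%;
  margin: 0 auto;
}

.pad-top,
.pad-bottom {
  display: flex;
  justify-content: space-between;          /* evenly spaced layout */
}

.pad-top div,
.pad-bottom div {
  flex: 1;                                 /* each pad grows equally */
  height: 150px;                           /* illustrative size */
}

/* illustrative hex colors, one class per pad */
.red    { background: #e74c3c; }
.yellow { background: #f1c40f; }
.blue   { background: #3498db; }
.green  { background: #2ecc71; }
.purple { background: #9b59b6; }
.orange { background: #e67e22; }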
Adding Media Queries
Lastly, we add media queries to control the responsive nature of our application.
In this simple code, we only shrink the font size when the screen width drops below 480 pixels and add some margin at the bottom to make it look better.
Building it mobile compatible
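A sketch of that media query (the exact sizes are illustrative):

@media (max-width: 480px) {
  h1 {
    font-size: 1rem;      /* shrink the heading on small screens */
    margin-bottom: 20px;  /* extra breathing room at the bottom */
  }
}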
Add Vanilla JavaScript Functionality
At this point, we have our wonderful UI set up, but the pads don’t produce any sounds upon clicking them. Why is that?
Our division at this point already has the audio present inside of it but to play that particular sound upon user click, we call a play() function. That’s where JavaScript comes in.
Our basic JavaScript code would be very simple and would just consist of 11 lines only.
Gathering Sounds and Pads
In the beginning, we gather the sounds and pads by querying the HTML classes .sounds and .pads respectively and storing the results in variables.
But we want to perform this operation once the window has loaded, so we wrap everything inside a window.addEventListener('load') callback.
Next we add a forEach which loops through all the pad divisions. It has two parameters: one is pad which initializes itself to each individual pad every time it loops, and the other is an index which is required to play the sound of that particular pad.
We utilize the sound from the above sound variable to play the file associated with the individual pads with the help of the play() function.
We also use currentTime = 0. The reason is that we reset the playback time to zero every time a pad plays, so that the same pad can be played multiple times across multiple clicks.
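Putting those pieces together, a minimal sketch of index.js (the .sounds and .pad class names are assumptions carried over from the markup sketch earlier):

// index.js — wire up the pads once everything on the page has loaded
window.addEventListener('load', () => {
  const sounds = document.querySelectorAll('.sounds'); // every <audio> element
  const pads = document.querySelectorAll('.pad');      // every clickable pad

  pads.forEach((pad, index) => {
    pad.addEventListener('click', () => {
      sounds[index].currentTime = 0; // rewind so rapid clicks replay the clip
      sounds[index].play();
    });
  });
});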
Congratulations, you've just built a Web App with vanilla JavaScript. You can play around with it or even deploy it online for others to use. But wait! We still have to convert this Web App into a Progressive one. Let's dive in.
What is a Web App Manifest?
A Web App Manifest is a simple JSON file which contains the details of your Progressive Web App and tells the browser how it should behave when it is installed in a user’s device.
A manifest can contain information like the application name (full and short name), the app icon, the URL it points to which will open once the app launches, control theme colors, etc.
Build a Manifest File
When creating a Progressive Web App, a manifest file is essential, since it controls how the app behaves once installed on the user's device.
Manifest File
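A minimal sketch of such a manifest for this project (every value here is illustrative):

{
  "name": "MixCentro",
  "short_name": "MixCentro",
  "start_url": ".",
  "display": "standalone",
  "background_color": "#000000",
  "theme_color": "#000000",
  "icons": [
    {
      "src": "images/icons/icon-512x512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}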
To create your own Web App Manifest, you can create a new file and name it as manifest.json and add further details in the JSON format. However, there is a better way of doing it by using the tools which are already provided to us online.
Generate a Manifest Online
In this era, the internet already provides us with lots of time-saving options which come in handy once again when we create a manifest.json file.
Instead of typing the whole key-value pairs in a JSON format, simply navigate to this website: https://app-manifest.firebaseapp.com/
This is a Web App Manifest Generator and it only requires you to fill certain input fields and it will automatically generate a manifest for you.
Things to consider while filling the input fields:
Give your application a full and a short name.
A theme color and background color is important as it can modify the browser version of the normal Web App and provide more life to it.
Change the Display Mode to 'standalone'.
Remove the "Application Scope" for now.
Make the value for the Start URL a '.' since we want to create the PWA at the root directory itself.
On the right-hand side, you'll see a button called 'ICON'. Choose an image to use as your app icon, then simply drag and drop/upload it here.
Note: Make sure your icon size doesn’t exceed 512x512 resolution since the icons undergo scaling for them to fit into different devices.
Once all of that is done, click on ‘GENERATE .ZIP’ and extract all the contents of the zip file into your project folder.
At this point, you've added your manifest file to your project but aren't using it yet. To actually wire it up, add a link to it in the index.html file between the <head> tags.
<head>
<link rel="manifest" href="manifest.json">
</head>
Voila! You've got your ready-made manifest file in your project now.
Now we will add new dependencies using Yarn.
What is Yarn?
Yarn is a new package manager that replaces the existing workflow for the NPM client or other package managers while remaining compatible with the NPM registry. A package manager serves the purpose of installing packages that each fulfill a particular purpose.
Advantages of using yarn
Of course, there are other package managers, but for this project, we will use Yarn for a few reasons:
Package downloading occurs only once, i.e., there is no need to download the same package a second time.
It is more secure.
Uses a lockfile format which pins dependencies, ensuring that every system works with the same packages.
Installing Yarn in your system
Installing yarn in your system is very simple. All you need to do is visit https://yarnpkg.com/lang/en/ and click on “Install Yarn”, and the download will start automatically.
Just follow the normal setup procedure and Yarn will install on your system; the path will also be added to your environment variables automatically.
An alternative is to use npm, which is easy as well. If you already have npm, simply open a terminal and type:
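npm install --global yarn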
To check whether the Yarn installation was successful on your system, open a terminal and type:
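yarn --version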
If you get a response like 1.16.0 or anything similar, that means Yarn has been successfully installed and you are good to go.
Initializing a yarn project
To create a new yarn project the first step is to navigate into your project folder in your terminal and type this command:
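yarn init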
Once you type yarn init in your terminal, you'll be asked a series of questions. Your answers are stored in a package.json file, which keeps all of this information.
First, you will be asked for a name; type the project name you want to keep. As for the other fields, you can leave them empty or type your own values if you choose.
A very important question that should not be skipped is the entry point, which decides where our project starts. Set it to index.js, which we created earlier.
Once you finish this step, you'll find new files populated in your project. If that's the case, pat yourself on the back, because you've successfully initialized your first Yarn project.
Note: yarn init should be done after navigating to the project folder so that index.js is accessible as an entry point
Installing a Package Called Serve
Since a Progressive Web App (PWA) requires a live server to run, we need at least a localhost to test our application. So, we need to install a package called serve .
serve works best when we want to test a static site using a server. We can check how it runs in localhost and then push it into a deployment later.
To install serve in your project we simply type the following in the terminal (after navigating into your project folder):
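yarn add serve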
This will add the dependency into your project and you can view it inside the package.json file.
To run your static page on a local server, we first need to start the server using this dependency. To start the server, type yarn serve in your terminal.
You will see output providing the details of your localhost and network addresses. You can open either of them to view your static website running on a local server.
What is a ServiceWorker?
“A service worker is a type of web worker. It’s essentially a JavaScript file that runs separately from the main browser thread, intercepting network requests, caching or retrieving resources from the cache, and delivering push messages.”
Basically, a service worker runs separately from the main thread and is completely independent of the application it is associated with.
A Service Worker can control network requests, can handle caching, and also provide offline resource support through the cache.
A Service Worker has three steps involved in its lifecycle:
Registration
Installation
Activation
Registering the ServiceWorker First
To install our Service Worker, we first need to register it in our main JavaScript file, which in our case is index.js. Before proceeding any further, let's create a Service Worker file and call it serviceWorker.js.
The first step in the lifecycle of a Service Worker is Registration:
Registering a Service Worker
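A minimal registration sketch (assuming serviceWorker.js sits at the project root):

// index.js — register the service worker, but only in browsers that support it
window.addEventListener('load', () => {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker
      .register('/serviceWorker.js')
      .then(registration => {
        console.log('Service Worker registered with scope:', registration.scope);
      })
      .catch(error => {
        console.log('Service Worker registration failed:', error);
      });
  }
});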
The code first checks for browser support, which is why we put it inside our window.addEventListener() callback. The service worker then registers itself with navigator.serviceWorker.register, which returns a promise that resolves when the service worker has been successfully registered.
The scope of a Service Worker is very important as it determines which files the service worker controls. In other words, from which path the service worker will intercept requests.
Thus we always prefer a Service Worker in the root directory so that it can control requests from all files at this domain.
Creating Our serviceWorker.js Using Vanilla JS
Now that we have our serviceWorker.js file created and registered in the browser, we can confirm it by opening the Developer Tools (via the inspect element option) and navigating to the Application menu.
You'll find that your Service Worker registration is successful; a message has also been written to your console log.
To make all of your assets available in the local cache, we need to list the paths of all the static assets.
Static Assets
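A sketch of that list (the exact file paths here are illustrative):

// serviceWorker.js — every file we want available from the cache
const staticAssets = [
  './',
  './index.html',
  './style.css',
  './index.js',
  './sounds/clip1.mp3' // ...plus the remaining sound and image files
];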
As a Service Worker is completely event-driven, we need to add events like install and fetch for it to perform them in the browser side.
Install
An install event fires every time the browser detects a new Service Worker. Our aim is to call the Cache API to retrieve all our static assets and save them for later.
Installing Service Worker
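A minimal sketch of that install handler:

// Cache all static assets when the browser installs this service worker
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('static-cache').then(cache => cache.addAll(staticAssets))
  );
});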
In this case, we are calling our cache by the name 'static-cache'. You can use any name you want, but for simplicity, "static-cache" is preferred.
Since the Service Worker is a low-level API, we always need to tell it what to do. If at this point you went back to the Application menu in your browser and simulated an offline environment, you'd find that nothing happens yet.
Let’s solve this problem with the help of the fetch API.
Fetch
In a Service Worker, we can decide how we want to respond to a given event. For that, we use a method called respondWith() .
In our case, we want to check whether there is something present in the cache first, and if not, we will fetch it from the network.
Static or Network Cache
To create our cache-first approach, we create a function which matches the request with the files present in the cache itself. Thus the request acts as a key.
Now, this returns either undefined if there is nothing in the cache or the cache response itself.
Cache First
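A sketch of the cache-first logic (the helper names here are illustrative):

// Answer every fetch from the cache first, falling back to the network
self.addEventListener('fetch', event => {
  event.respondWith(cacheFirst(event.request));
});

async function cacheFirst(request) {
  const cached = await caches.match(request); // undefined on a cache miss
  return cached || networkFirst(request);     // fall through to the network
}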
To create a network-first approach, we want a dynamic cache where network assets are fetched and stored when required. If the network is unavailable, it will fall back to the cache.
Network First
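And a sketch of the network-first fallback that maintains the dynamic cache:

// Fetch from the network and keep a copy in a dynamic cache for offline use
async function networkFirst(request) {
  const cache = await caches.open('dynamic-cache');
  try {
    const response = await fetch(request);
    cache.put(request, response.clone()); // clone: a response body is single-use
    return response;
  } catch (error) {
    return cache.match(request); // offline: fall back to whatever was cached
  }
}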
This fetches any request our browser needs from the network and adds the response to our dynamic cache using the put() method.
Now when we visit the Application menu, we'll find two cache storages: the static one and the dynamic one we just created.
Now if we go back to our Service Workers and simulate an offline environment by clicking on the “Offline” checkbox, we would face no internet issues and the application will run smoothly because of all the cache storages.
A huge congratulations for making it here: you now have your very own Progressive Web App, which you can use and install on all platforms.
To install your PWA, simply deploy your project on an online server (free hosting would work as well). Once your website loads completely, you’ll find a small + sign at the right side of your address bar.
Click on it to see your wonderful PWA install and work in your local system.
|
https://levelup.gitconnected.com/build-a-pwa-using-only-vanilla-javascript-bdf1eee6f37a
|
['Sayan Mondal']
|
2019-09-30 15:05:08.669000+00:00
|
['JavaScript', 'Web App Development', 'Pwa', 'Technology', 'Web Development']
|
2,028 |
Top Selling Sony Home Theater System USA 2021
|
Introduction: Top Selling Sony Home Theater System USA 2021
Nowadays it is easier to find high-quality and affordable options for a home theater. All of today’s picks are from the trusted brand Sony, so every Sony home theater system here is guaranteed to give you high-end performance.
It doesn’t matter how big your TV screen is if it is lacking in sound quality; that means you are only getting half of the experience. Trust us, your TV speaker won’t be doing much. Fortunately, you can upgrade to a home theater system regardless of budget.
We have focused today on high audio and video quality. The best home theater systems provide an excellent balance of good sound quality and easy installation, and many people prefer that.
In 2021, a Sony home theater speaker can give you a great audio experience, and for that you don’t even need a complicated setup. So read on and find the best Sony home theater for yourself.
Top Selling Sony Home Theater System USA 2021: List
The quality of the product is excellent, and it is easy to use. The compact contemporary design of this system fits anywhere in your home. The built-in power can easily fill a bedroom, kitchen, or office space with its great sound.
It features a tiny, powerful device that converts any aux speaker into a Bluetooth speaker so that you can stream your music or take phone calls. You will enjoy convenient Bluetooth connectivity with compatible Bluetooth devices.
And you will be able to stream music without wires. Near field communication (NFC) technology takes Bluetooth connectivity to the next level, allowing users to simply align their enabled devices and tap them together to pair and activate the connection. You can also use the integrated AM/FM tuner to receive local broadcast signals.
You can also play your CDs or your personally recorded CD-Rs using the integrated motorized slot CD player, and you can play MP3 files that have been recorded to CDs.
Striking the Pros of using Sony Compact Stereo Sound System for House with Bluetooth Wireless Streaming
The classic three-box design makes a statement in any room.
Allows separate placement of the speakers for a wider stereo effect.
Has a built-in CD/DVD player for your disc collection.
Striking the Cons of using Sony Compact Stereo Sound System for House with Bluetooth Wireless Streaming
The build and control are average.
Equipped with four woofers and one tweeter, the Sony SS-CS8 2-Way 3-Driver Center Channel Speaker handles up to 145 watts. The woofer uses a mica-reinforced diaphragm, the upper surface of which is fashioned to provide faithful sound.
The lower layer is designed to provide a powerful bass response. The cabinet is built of wood, which provides a natural resonance. The crossover network of the speaker is mounted directly to the cabinet so that it is vibration-isolated, and the foot of the speaker has rubber pads to avoid shelf vibration.
The crossover network in the SS-CS8 is intended to ensure minimal signal loss, for an energetic vocal response with even the most delicate nuance.
Striking the Pros of using Sony 5.1-Channel Surround Sound Multimedia Home Theater Speaker Bundle
Has a powerful bass response.
The rubber pads make it vibration isolated.
Has a powered subwoofer.
Striking the Cons of using Sony 5.1-Channel Surround Sound Multimedia Home Theater Speaker Bundle
A little bit costlier.
Enjoy wireless audio streaming with the Sony 7.2-Channel Wireless Bluetooth 4K 3D HD Blu-ray A/V Surround Sound Home Theater System. It features Bluetooth with NFC connectivity and four HDMI inputs with one HDMI output, all supporting 4K resolution.
It offers 7.2-channel surround sound and a two-channel stereo mode. Everything about it is huge: all the speakers and the receiver. Whenever you play music, the sound quality is great; the sound is crystal clear, and with so many options, music and movies are awesome.
Striking the Pros of using Sony 7.2-Channel Wireless Bluetooth 4K 3D HD Blu-ray A/V Surround Sound Home Theater System
It is very versatile.
The sound is crystal clear.
Offers dramatic and cinematic sound.
It is compatible with Blu-ray 3D movies.
The setup of the speaker is very easy.
Striking the Cons of using Sony 7.2-Channel Wireless Bluetooth 4K 3D HD Blu-ray A/V Surround Sound Home Theater System
There is a lack of sound adjustment.
Enjoy clear mid and high frequencies from the soundbar, which brings every piece of music and every movie to life in volume and clarity, with a total 320-watt power output. The contours of the soundbar fit perfectly with the design of your TV.
It is also very simple to connect. The seven sound modes enhance your entertainment experience: Cinema mode is for movies, Game Studio mode was developed by PlayStation developers, Music mode helps you hear every detail clearly, and News mode is designed for clear dialogue.
Hear sound that comes from all around. The virtual surround sound technology puts you right in the heart of movies by emulating the wide stage of cinema-style surround sound, even without the need for additional rear speakers.
Striking the Pros of using Sony HT-S350 Soundbar with Wireless Subwoofer
The sound is very powerful.
Solid Bass.
Supports Dolby digital.
Bluetooth supported.
HDMI and ARC capable.
It is easy to set up.
Quite affordable.
Striking the Cons of using Sony HT-S350 Soundbar with Wireless Subwoofer
Has no Dolby Atmos, but features S-Force PRO Front Surround instead.
There is a need for an HDMI splitter for multiple connections.
This Sony home theater system gives your favorite shows and movies the sound they deserve with a 2.1-channel soundbar. This space-saving solution is designed to match the decor of your home; the compact one-bar design with a built-in woofer fits your room perfectly.
There is no need for another box or extra cables around your room. With HDMI ARC, one cable gives an easy connection for all your TV audio. You can also connect it wirelessly to your TV via Bluetooth, and you can control the TV and soundbar with just one remote.
Virtual surround technology puts you right at the heart of movies and music. The low-profile design of the soundbar does not obstruct the view of your TV, and the voice enhancement feature strengthens the listening experience.
Striking the Pros of Sony S200F 2.1ch Soundbar with built-in Subwoofer and Bluetooth Home Theater
It is great for dialogue content.
Performs well even on high volume.
Striking the Cons of Sony S200F 2.1ch Soundbar with built-in Subwoofer and Bluetooth Home Theater
Doesn’t get too loud.
It does not support DTS.
The surround sound feature is always on.
Well, that sums up our list of Sony home theaters. I hope it helps you answer the question: which Sony home theater is best?
Acknowledging Questions
How To Setup Sony Home Theater System?
The two most common connections used to hear TV sound from the A/V receiver or from the home theater system are:
Option 1: HDMI connection using the ARC feature.
Option 2: Connection with the help of an HDMI cable, coaxial digital cable, or audio cable.
Which option you use depends on the ports of your products. If your TV and audio system both support the ARC feature, I recommend using option 1 to connect your products. Otherwise, you can use option 2.
Originally published at https://shoppingpossible.com/ on August 16, 2021
|
https://medium.com/@shoppingpossiblenow/top-selling-sony-home-theater-system-usa-2021-fb8c5980feb3
|
['Shopping Possible']
|
2021-08-16 17:53:14.239000+00:00
|
['Sony Home Theater', 'Home Theatre System', 'Technology', 'Electronic Items', 'Home Theater']
|
2,029 |
What Kind of Company Is Coinbase?
|
I read on:
Everyone is asking the question about how companies should engage in broader societal issues during these difficult times, while keeping their teams united and focused on the mission.
Huh. I have found it only natural for my team to not be “focused” on the same things they were last year. I’ve been much more worried about how little tech companies have changed course or rethought their purpose.
Our industry has already long focused on shipping as fast as we could, and scaling as big as we could, ignoring the “external” societal issues our work inflamed. We said we wanted to “change the world”, and maybe we should’ve learned more about it first.
But I’m afraid those aren’t the challenges this post was about. I read on:
Coinbase has had its own challenges here, including employee walkouts.
… huh.
I didn’t know Coinbase was having employee walkouts. What are employees challenging with these walkouts; what are they objecting to?
The day after I read the post, I came across a Twitter thread by Erica Joy that tells at least some of that story. It’s context I wish I’d had when first reading Brian Armstrong’s post about “difficult times” at Coinbase, and I’m going to reprint it here almost verbatim:
so let's start back in june. in the wake of the horrendous murder of george floyd, companies were doing their white text on black background “standing with the Black community” statements. even google, who had been mum on racism for their entire existence, said something. you know what’s coming next. coinbase did not release one of these statements. the only hint at support of Black lives, or even the Black community, was the coinbase twitter account retweeting [this tweet] from brian [where he says “Black Lives Matter”]. now you may read that tweet and the following thread and think how did the person who wrote that thread also write this blog post. i’ll tell you how: he had no choice but to post* that thread. […] why do i think he had no choice? well, something happened at coinbase that likely forced his hand: a large portion of the coinbase engineering team walked off the job just before that thread went up because brian wouldn’t say “black lives matter.” now i said “post” because i don’t think he wrote that thread. i’d bet my paycheck on it. i strongly believe he posted the thread that a (crisis?) comms person wrote for him so his engineering team would get back to work. the blog post is more reflective of his true feelings. i believe having his hand forced by that walkoff pissed him off and he’s retaliating: with the blog post, with the policy changes, and the other internal coinbase changes (slack channels for asking execs questions have been deleted, all hands questions must now be pre-vetted).
This is why I’ve been much more worried about how little change we’ve seen in tech companies, than worried our recent discussions are too “disruptive”.
Remember when we were proud of the word “disruption”? Proud of the stodgy, bloated incumbents we would sweep aside, and unleash whole tides of human creativity and ingenuity?
Why is it so hard to look in the mirror, and see who we’ve become?
|
https://lyra.medium.com/what-kind-of-company-is-coinbase-372cf1da4621
|
['Lyra Naeseth']
|
2020-10-03 07:09:27.565000+00:00
|
['Leadership', 'Fascism', 'Culture', 'Technology', 'Cryptocurrency']
|
2,030 |
Local Storage In JavaScript / HTML5 Tutorial
|
Photo by Glenn Carstens-Peters on Unsplash
SaaS providers have gotten really good at letting you use their applications despite the constant loss of Internet connectivity as you move to and fro with your mobile device. This seamless experience wouldn’t be possible without some kind of client-side storage. In the following post, I’ll attempt to give a high-level overview of the different forms of persistent storage in browsers.
Cookies
When receiving an HTTP request, a server can send a Set-Cookie header with the response. The cookie is usually stored by the browser, and then the cookie is sent with each subsequent request made to the same server. An expiration date or duration can be specified, after which the cookie is no longer sent.
Cookies are mainly used by servers to distinguish one user from the hundreds or thousands of users making requests at any given point in time. There are three main reasons why we’d want to know what user is making what kind of request.
Session management: logins, shopping carts, game scores or anything else the server should remember
Personalization: user preferences, themes or other settings
Tracking: recording and analyzing user behaviour
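As a quick illustration (not from the original post), cookies can also be read and written client-side through document.cookie; the cookie name and value below are made up:

// Set a cookie named 'theme' that expires in 7 days and is sent for all paths
document.cookie = 'theme=dark; max-age=' + 60 * 60 * 24 * 7 + '; path=/';

// Read back every cookie visible to this page as one semicolon-separated string
console.log(document.cookie); // e.g. "theme=dark"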
Cookies generally have a maximum size of around 4 KB, which is not much good for storing any kind of complex data. With HTML5, you have several choices for storing your data, depending on what you want to store. The storage limits depend on what system your browser is running on (desktop vs mobile) and what you’re using to store your data (localStorage vs indexedDB).
Session Storage
Session storage is relatively easy to work with. You can access and mutate key/value pairs using a simple javascript API. For example:
sessionStorage['name'] = 'Cory';
You can view the contents of the session storage at any time by opening up the DevTools and navigating to the application tab.
To retrieve that data from storage, you simply pass the key as an argument.
alert(sessionStorage['name']);
If we were to try closing and reopening the tab, the data in session storage would be gone.
Local Storage
LocalStorage is similar to sessionStorage, except that data stored in localStorage doesn’t get cleared when the browsing session ends (i.e. when the window or tab is closed).
The interface for localStorage is the same as sessionStorage, for example:
localStorage['cat'] = 'Rose';
To retrieve that data from storage, you simply pass the key as an argument.
alert(localStorage['cat']);
The data persists after closing and reopening the tab.
In browsers like Firefox and Chrome, localStorage is deleted when the user manually clears their browsing history or cookies and other site data.
IndexedDB
You could obviously use localStorage for simple requirements but IndexedDB has a larger storage capacity and scales better.
IndexedDB is particularly useful if you want a client that can be used without an internet connection. For example Google Docs allows users to work offline and synchronizes their changes when the network becomes available again.
Another use case is to store redundant, seldom-modified but often-accessed data in IndexedDB to avoid having to fetch it from the server. It adds too much latency to make an HTTP request every time you want to access the data. In addition, making frequent requests for data that is unlikely to have changed on the server side would place unnecessary strain on the database.
I don’t actually recommend interacting with the IndexedDB interface directly. There are numerous npm packages (e.g. localforage) that wrap the functionality in a promise-based API. For those of you who want to learn more about how it works under the hood, here’s a link to the documentation on MDN.
In the following section, we’ll walk through a basic example of how to store and retrieve data using IndexedDB. We’ll use create-react-app for our boilerplate.
npx create-react-app indexeddb-example
Once inside the repository, install localforage using either yarn or npm.
yarn add localforage
We can store and retrieve key/value pairs using the following code.
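The embedded snippet isn’t reproduced here; a minimal sketch of what it might look like with localforage, where the key and value are illustrative:

import localforage from 'localforage';

// Store a value under a key; localforage uses IndexedDB when available
localforage.setItem('user', { name: 'Cory' })
  .then(() => localforage.getItem('user'))
  .then((user) => console.log(user.name)) // 'Cory'
  .catch((err) => console.error('Storage operation failed:', err));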
It’s also possible to store and retrieve images in the form of blobs.
You can find the source code here.
|
https://medium.com/swlh/client-side-storage-an-overview-of-cookies-session-storage-local-storage-and-indexeddb-517f172c88c8
|
['Cory Maklin']
|
2019-07-22 13:25:45.025000+00:00
|
['Technology', 'JavaScript', 'Software Engineering', 'Programming', 'Tech']
|
2,031 |
Installing and Configuring Distributed File System (DFS)
|
Subjects covered in this summary note:
Installing and Configuring Distributed File System (DFS)
Managing Files
Managing File Security:
NTFS File Permissions:
Encrypting File System:
Sharing Files Protected with EFS:
Configuring EFS by Using Group Policy Settings:
BitLocker:
Sharing Folders:
Quotas:
Configuring Disk Quotas by Using the Quota Management Console:
Configuring Disk Quotas by Using Group Policy:
Folder Sharing:
Sharing Folders from Windows Explorer:
Distributed File System:
Installing DFS:
Configuring DFS:
Create DFS folders:
Managing Files
When you share some documents on your network, they must remain protected from unauthorized access. To control access, use NTFS file permissions and Encrypting File System (EFS). To provide redundancy, create a Distributed File System (DFS) namespace and use replication to copy files between multiple servers. You can use quotas to ensure that no single user consumes more than his or her share of disk space (which might prevent other users from saving files). To accomplish these, you need to learn the following skills:
Managing File Security
Sharing Folders
Backing Up and Restoring Files
Managing File Security:
Windows server provides three technologies for controlling access to files, folders, and volumes: NTFS file permissions, EFS, and BitLocker.
NTFS File Permissions:
NTFS file permissions determine which users can view or update files. The default permission for different file types are:
User files: Users have full control permissions over their own files. Administrators also have full control. Other users who are not administrators cannot read or write to a user’s files.
System files: Users can read, but not write to, the %SystemRoot% folder and subfolders. Administrators can add and update files. This allows administrators, but not users, to install updates and applications.
Program files Similar to the system files permissions, the %ProgramFiles% folder permissions are designed to allow users to run applications and allow only administrators to install applications. Users have read access, and administrators have full control.
The default file and folder permissions work well for desktop environments. File servers, however, often require you to grant permissions to groups of users to allow collaboration. Administrators can assign users or groups any of the following permissions to a file or folder:
List Folder Contents
Read
Read & Execute
Write
Modify
Full control
To protect a file or folder with NTFS, follow these steps:
1. Open Windows Explorer (for example, by clicking Start and then choosing Computer).
2. Right-click the file or folder, and then choose Properties. The Properties dialog box for the file or folder appears.
3. Click the Security tab.
4. Click the Edit button. The Permissions dialog box appears.
5. If the user you want to configure access for does not appear in the Group Or User Names list, click Add. Type the user name, and then click OK.
6. Select the user you want to configure access for. Then, select the check boxes for the desired permissions in the Permissions For Users list. Denying access always overrides allowed access.
7. Repeat steps 5 and 6 to configure access for additional users.
8. Click OK twice.
Additionally, there are more than a dozen special permissions that you can assign to a user or group. To assign special permissions, click the Advanced button on the Security tab of the file or Administrator Properties dialog box.
A user who does not have NTFS permissions to read a folder or file will not see it listed in the directory contents. This feature, known as Access-based Enumeration (ABE), was introduced with Windows Server 2003 Service Pack 1.
Encrypting File System:
NTFS provides excellent protection for files and folders as long as Windows is running. However, an attacker who has physical access to a computer can start the computer from a different operating system (or simply reinstall Windows) or remove the hard disk and connect it to a different computer. Any of these very simple techniques would completely bypass NTFS security, granting the attacker full access to files and folders.
EFS protects files and folders by encrypting them on the disk. If an attacker bypasses the operating system to open a file, the file appears to be random, meaningless bytes. Windows controls access to the decryption key and provides it only to authorized users.
To protect a file or folder with EFS, follow these steps:
1. Open Windows Explorer (for example, by clicking Start and then choosing Computer).
2. Right-click the file or folder, and then click Properties. The Properties dialog box appears.
3. On the General tab, click Advanced. The Advanced Attributes dialog box appears.
4. Select the Encrypt Contents To Secure Data check box.
5. Click OK twice.
If you encrypt a folder, Windows automatically encrypts all new files in the folder. Windows Explorer shows encrypted files in green.
The first time you encrypt a file or folder, Windows might prompt you to back up your file encryption key. Choosing to back up the key launches the Certificate Export Wizard, which prompts you to password-protect the exported key and save it to a file. Backing up the key is very important for stand-alone computers, because if the key is lost, the files are inaccessible. In Active Directory environments, you should use a data recovery agent (DRA).
Sharing Files Protected with EFS:
If you need to share EFS-protected files with other users on your local computer or across the network, you need to add their encryption certificates to the file.
To share an EFS-protected file, follow these steps:
1. Open the Properties dialog box for an encrypted file.
2. On the General tab, click Advanced. The Advanced Attributes dialog box appears.
3. Click the Details button. The User Access dialog box appears.
4. Click the Add button. The Encrypting File System dialog box appears.
5. Select the user you want to grant access to, and then click OK.
6. Click OK three more times to close all open dialog boxes.
The user you selected will now be able to open the file when logged on locally.
Configuring EFS by Using Group Policy Settings:
Users can selectively enable EFS on their own files and folders. However, most users are not aware of the need for encryption and will never enable EFS on their own. Rather than relying on users to configure their own data security, you should use Group Policy settings to ensure that domain member computers are configured to meet your organization’s security needs.
Within the Group Policy Management Editor, you can configure EFS settings by right-clicking the Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies\Encrypting File System node and then choosing Properties to open the Encrypting File System Properties dialog box.
BitLocker:
BitLocker encrypts entire volumes and helps prevent operating system files from being maliciously modified.
EFS encrypts folders and files for individual users. You cannot use EFS to encrypt system files. To encrypt entire volumes and protect system files, use BitLocker Drive Encryption.
When you enable BitLocker protection for a volume, BitLocker encrypts every byte on the volume, including system files and the paging file. When you start the computer, BitLocker loads before Windows, acquires a decryption key, verifies the integrity of the system, and then transparently decrypts files on the volume until Windows shuts down. In this way, BitLocker provides protection that can be completely transparent to end users.
In addition to helping protect data, BitLocker also helps reduce the risk of an attacker altering system files. If BitLocker detects that a system file has unexpectedly changed or that the hard disk has been moved to a different computer, BitLocker prevents Windows from starting. This can help protect users from rootkits, which are a type of malware that runs beneath the operating system and are very difficult to detect or remove.
Individual users can enable BitLocker from Control Panel, but most enterprises should use Active Directory Domain Services (AD DS) to manage keys.
To enable BitLocker from Control Panel, perform these steps:
1. Add the BitLocker feature. In Server Manager, right-click Features, and then click Add Features. The Add Features Wizard appears. On the Select Features page, select BitLocker Drive Encryption. Click Next. On the Confirm Installation Selections page, click Install. On the Installation Results page, click Close. Click Yes to restart the computer. After the computer restarts, the Resume Configuration Wizard appears. Click Close.
2. Perform a full backup of the computer. Even though BitLocker is very stable and corruption is unlikely, there is a possibility that you will be unable to access the protected volume once BitLocker is enabled.
3. Run a check of the integrity of the BitLocker volume. To check the integrity of a volume, right-click it in Explorer, and then click Properties. On the Tools tab, click Check Now. Select both check boxes, and then click Start.
4. Open Control Panel, and then click the System And Security link. Under BitLocker Drive Encryption, click the Protect Your Computer By Encrypting Data On Your Disk link.
5. On the BitLocker Drive Encryption page, click Turn On BitLocker. When prompted, click Yes to start BitLocker setup.
6. If the Turn On The TPM Security Hardware page appears, click Next, and then click Restart.
7. On the Set BitLocker Startup Preferences page, select your authentication method. The choices available to you vary depending on whether the computer has TPM hardware. Additionally, the available choices can be controlled by the Group Policy settings contained within Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption.
8. If you chose to require a startup key, the Save Your Startup Key page appears. Connect a USB flash drive, select it, and then click Save.
9. On the Save The Recovery Password page, choose the destination (a USB drive, a local or remote folder, or a printer) to save your recovery password. The recovery password is a small text file containing brief instructions, a drive label and password ID, and the 48-digit recovery password. Save the password and the recovery key on separate devices and store them in different locations. Click Next.
10. On the Encrypt The Volume page, select the Run BitLocker System Check box, and click Continue. Then, click Restart Now. After Windows restarts, BitLocker verifies that the volume is ready to be encrypted and displays a special screen confirming that the key material was loaded. BitLocker then begins encrypting the C drive after Windows starts, and BitLocker is enabled.
BitLocker encrypts the drive in the background so that you can continue using the computer. After enabling BitLocker, you can choose to turn off BitLocker from the Control Panel tool.
You have two options:
Disable BitLocker Drive Encryption
Decrypt The Volume
Sharing Folders:
One of the most common ways for users to collaborate is by storing documents in shared folders. Shared folders allow any user with access to your network and appropriate permissions to access files. Shared folders also allow documents to be centralized, where they are more easily managed than they would be if they were distributed to thousands of client computers.
For the purpose of sharing folders and managing them, Windows Server offers the File Services server role. So first install it: from Server Manager, add this role. On the Select Role Services page, select from the following role services:
File Server
Distributed File System
File Server Resources Manager
BranchCache for network files
Quotas:
When multiple users share a disk, whether locally or across the network, the disk will quickly become filled—usually because one or two users consume far more disk space than the rest of the users. Disk quotas make it easy to monitor users who consume more than a specified amount of disk space. Additionally, you can enforce quotas to prevent users from consuming more disk space (although this can cause applications to fail and is not typically recommended).
Configuring Disk Quotas by Using the Quota Management Console:
After installing the File Server Resource Manager role service, you can manage disk quotas by using the Quota Management console. In Server Manager, you can access File Server Resource Manager. The Quota Management console provides more flexible control over quotas and makes it easier to notify users or administrators that a user has exceeded a quota threshold, or to run an executable file that automatically clears up disk space.
Configuring Disk Quotas by Using Group Policy:
You can also configure simple disk quotas by using Group Policy settings. In the Group Policy Management Editor, select the Computer Configuration\Policies\Administrative Templates\System\Disk Quotas.
Folder Sharing:
You can share folders across the network to allow other computers to access them, as if the computers were connected to a local disk.
Sharing Folders from Windows Explorer:
The simplest way to share a folder is to right-click the folder in Windows Explorer, choose Share With, and then choose Specific People. The File Sharing dialog box appears and allows you to select the users who will have access to the folder. Click Share to create the shared folder, and then click Done.
Using the dialog box that appears, you can select either Read or Read/Write permissions.
Distributed File System:
Large organizations often have dozens, or even hundreds, of file servers. This can make it very difficult for users to remember which file server specific files are stored on.
DFS provides a single namespace that allows users to connect to any shared folder in your organization. With DFS, all shared folders can be accessible using a single network drive letter in Windows Explorer. For example, if your Active Directory domain is contoso.com, you could create the DFS namespace \\contoso.com\dfs. Then, you could create the folder \\contoso.com\dfs\marketing and map it to shared folders (known as targets) at both \\server1\marketing and \\server2\marketing.
Besides providing a single namespace to make it easier for users to find files, DFS can provide redundancy for shared files by using replication. Replication also allows you to host a shared folder on multiple servers and have client computers automatically connect to the closest available server.
Installing DFS:
First, log in to your Windows Server 2016 DC machine and open Server Manager.
Open Add Roles and Features Wizard and move on to Server roles.
Expand File and Storage Services.
Under File and Storage Services you can find File and iSCSI Services; expand it and select File Server, DFS Namespaces, DFS Replication, and File Server Resource Manager.
Configuring DFS:
After the DFS role has been installed, open the DFS Management console, and right-click Namespaces and choose New Namespace.
Type the name of the server that will host the namespace.
1. Click on Next.
2. Choose a name for your namespace in the following screen. This will be the name of your domain sharing path, for example forevergeeks.com\files.
3. Click on Edit Settings to edit permissions on the share. By default, everyone only has “read” permissions.
4. Click on Next on the following screen and choose the Namespace Type.
5. Choose the Domain-based namespace and click on Next.
6. Review the settings and then click on Create.
7. Test that your DFS Namespace is working by typing the network path in Explorer (e.g. forevergeeks.com\files).
It works! But nothing there yet though.
Create DFS folders:
We will add folders to the DFS namespace now. From your DFS Management console, right-click the namespace we just created, and choose New Folder.
Type the name of the folder, then click on Add.
Type the path of the Shared folder you want to add to the Namespace.
Click on OK.
Let’s go to the network path again (e.g. forevergeeks.com\files) and you should see the folder we just added.
Yeah!
Conclusion
DFS Namespaces is a great feature in Windows Server for organizing your network shares. When using DFS namespaces, it does not matter where the shared folders are located; they are all accessible from a single path. It also makes it easier to move file servers around without breaking access paths.
Note: this text is a summary of DFS implementation on Windows Server 2016 from ‘Exam Ref 70-741 Networking with Windows Server 2016’, ‘MCTS Self-Paced Training Kit Exam 70-642’, and ittutorials.net.
|
https://medium.com/@shtabesh02/installing-and-configuring-distributed-file-system-dfs-c62667cd2218
|
['Shir Hussain Tabesh']
|
2020-12-07 19:08:29.786000+00:00
|
['Windows Server 2016', 'Information Technology', 'DFS', 'Distributed File Systems', 'Networking Tips']
|
2,032 |
This experimental bionic eye could help blind people see
|
|
https://medium.com/futuro-exponencial/este-olho-bi%C3%B4nico-experimental-poder%C3%A1-ajudar-cegos-a-enxergar-570067f1816
|
['Futuro Exponencial']
|
2018-09-04 00:49:16.308000+00:00
|
['Eyes', 'Technology', 'Light']
|
2,033 |
A replacement for the surgical mask
|
Engineers from the University of Michigan may have found the next big thing in healthcare. In research recently published in the Journal of Physics D: Applied Physics, civil and environmental engineering professor Hereck Clack and members of his team describe a non-thermal plasma device that has the potential to replace a century-old device: the surgical mask.
The University of Michigan engineers have measured the virus-killing speed and effectiveness of nonthermal plasmas — the ionized, or charged, particles that form around electrical discharges such as sparks. In their research they have found that the device was able to inactivate or remove from the airstream 99.9% of a virus, with the vast majority due to inactivation.
It should be noted that airborne spread of disease is the most difficult to guard against; to this very day we have very little to protect us when we breathe. Just imagine a world where we could walk around and not worry about spreading illness to one another simply by breathing.
To gauge the effectiveness of the device, researchers pumped a model virus into flowing air as it entered the reactor, where borosilicate glass beads are packed into a cylindrical shape. The viruses in the air flow through the void spaces between the beads, and that’s where they are inactivated.
Clack and his team determined that more than 99% of the air sterilizing effect was due to inactivating the virus that was present, with the remainder of the effect due to filtering the virus from the air stream.
Although the research is still in early development, it is seen by some in the science community as a breakthrough.
According to Krista Wigginton, assistant professor of civil and environmental engineering. “There are limited technologies for air disinfection, so this is an important finding.”
The research team at the University of Michigan has recently begun testing the device on ventilation air systems in livestock farms. The team hopes that installing the device can stop the spread of airborne pathogens that can wipe out cattle relatively quickly.
For more information please visit ScienceDaily.
Questions: How do you prevent the spread of disease in your workplace? Would you be interested in more technology like this coming out?
Follow Medication Health News on social media — Facebook, Twitter, LinkedIn and Instagram.
Have a question? We can help you to answer it — Give us a call at (617) 732–2759.
|
https://medium.com/sustainable-health/a-replacement-for-the-surgical-mask-3e4059034355
|
['Matthew Plante']
|
2019-04-10 19:53:24.847000+00:00
|
['Health', 'Science', 'Design', 'Virus', 'Future Technology']
|
2,034 |
[Taklimakan Blog] Banks Do Not Believe in the Potential of the Digital Ruble
|
Large financial institutions in Russia were not ready for the appearance of the digital ruble and are waiting for more specifics from the regulator
Domestic financial institutions “do not take seriously” the specifics proposed under the digital ruble model.
On the regulator’s side, the meeting was attended by first deputy chairman Olga Skorobogatova. Among the representatives of the banking sector were Sberbank, Gazprombank, Credit Bank of Moscow (MCB), Pochta Bank, Russian Standard Bank, and VTB.
According to one bank representative, there is no clear understanding of what the digital ruble will be. Most bank representatives “do not take seriously” the digital ruble as a third form of the national currency, a source told the Kommersant newspaper.
|
https://medium.com/taklimakan-network/taklimakan-blog-banks-do-not-believe-in-the-potential-of-the-digital-ruble-d288aa641ba9
|
['Elena Jefferson']
|
2020-12-18 07:02:13.206000+00:00
|
['Blockchain', 'Cryptocurrency', 'Finance', 'Technology', 'Russia']
|
2,035 |
Internal motivation of employees. When to raise the salary in IT business?
|
An employee’s paradise can turn into the CEO’s hell in a single moment. One such moment is the day of a salary raise. How do you manage an IT company while motivating employees and keeping the P&L balanced?
An illustration was designed at TheRoom design boutique by Alexandra Marchenko
According to Julian Birkinshaw’s course for London Business School, there are two types of management: traditional and innovative. In IT business, traditional management has not been effective for the last five years. Neither bureaucracy nor hierarchy and linear alignment make work easier. But what shall we do with the fourth basis of the traditional approach: external (financial) motivation?
Internal motivation of employees
A type of motivation based on a genuine interest in the work, self-development, and skills improvement. Internal motivation can come only from the employee and can be increased by gathering new knowledge and building a name and authority in one’s sphere of work. It is related to the innovative approach to management.
External motivation
A type of motivation based on creating the employee’s interest in the work through praise from office managers and the CEO in the form of a financial prize, an additional weekend, or a vacation. It can come only from the company’s administration and is related to the traditional management approach.
A presentation by Julian Birkinshaw
There are plenty of motivation factors for designers, developers, and sales managers. All of them depend on humans and society.
That’s why good managers look for employees interested in personal growth and development, not in the salary.
Enthusiasm and a willingness to grow in the sphere you work in are a direct counterweight to the financial motivation of employees.
There’s still a risk, for example:
if a young genius, a great enthusiast and a perfect developer, joins your company and you put him on a project he is not interested in at all for a year or so, while raising his salary once every 4 months, he will lose all of his enthusiasm over time. He will still complete all of his projects properly and come to the office on time, but from a creative genius he’ll turn into just another bit of “plankton”.
However much it seems that happiness is made of gold and career stairs are made of dollar bills, for the CEO of the company these stairs can one day become a scaffold. Let’s check out some more examples of how companies got a perfect result by working with enthusiastic volunteers.
Example #1
Costa Coffee. In less than a year, Costa created a third-generation coffee machine. They simply asked a third-party company to create something for them without any payment or contract. That company was so interested in cooperating with Costa that it created an innovative product in an extremely short time.
Example #2
Topcoder. This is a company without developers. Every project becomes a lot at an auction contested between different teams of developers. The one who presents the best result gets paid. Every time, the company gets higher- and higher-quality solutions.
Presentation by Julian Birkinshaw
Money is not an engine of good work. Marx and Engels described a problem of capitalism in terms of the worker’s alienation from the product of his work: working in large enterprises, people started losing satisfaction in the product of their work. This led to the idea of the power of financial motivation, and it became the most popular motivation type for decades.
But the thing is, the traditional model became outdated back in 1975.
Right after the creation of the IT segment, new motivating factors became far more effective than the same old banknotes:
fame: working on a product, a designer or developer should get his dose of recognition. A good manager will at least announce the designer’s (developer’s) progress within the company and also devote a social media post to the achievement.
a voice: let designers and developers form teams for projects, choose projects, and vote for those they want to work on. Give more power to your employees, or create the illusion of it, and they will work better. First, “they chose the project themselves”. Second, “they formed the team themselves”. Third, all decisions were made by the team autonomously, so the measure of responsibility grows.
Is it a good idea to use only a conception of internal motivation?
Many companies have already removed job titles and work schedules, allowing employees to choose projects and work hours themselves. But does it always work? No. When working with internal motivation, it’s important to watch the psycho-emotional state of employees and the social climate in the team. Hire one person who is not motivated enough, or one in a “creative crisis”, and it will affect the whole team. Usually, companies look for a balance between internal and external motivation.
To find this balance:
make the salary of every employee a taboo topic.
discuss all the conditions: at the very beginning of the cooperation, set a deadline for the salary review. For example, once every 4 months you will discuss achievements with the employee and make a decision based on his or her success.
remove KPIs: at the end of every month, make notes about the achievements of each employee.
run internal company projects and let employees form teams for these projects, handling the whole workflow and project themselves.
be attentive to real success: an employee arriving at the workplace on time is not reason enough for a salary raise. A salary raise is an additional reward, not the main goal of your cooperation.
How much should an employee’s salary be raised?
It is okay that every employee gets different pay. People simply can’t all work at the same level, which means they should be paid according to their level of skill. Otherwise, a system of junior, middle, and senior specialists would make no sense at all.
When hiring, you offer the specialist a salary. It should correspond to your expectations of the employee’s skills. Developing those skills takes no longer than 4 months, which is why we consider this period optimal for a salary review.
However, this period shouldn’t stay fixed. To avoid developing a Pavlovian reflex, review salaries at varying intervals, sometimes earlier, sometimes later. This preserves “an element of surprise” and the willingness to earn a reward for good work. Don’t raise salaries too often; otherwise you’ll kill the employee’s internal motivation.
A good manager gives an employee a raise before the specialist even asks.
To make money a surprise, rather than the only reason to work, it’s important to get ahead of the moment when your employee asks for a raise. Notice cool achievements: if your specialist won a competition (related to the job) or brought a big client to the company, give the raise before the “deadline” comes.
How much to add to the salary is a difficult question. Some managers prefer to raise salaries by a fixed amount; that usually works with designers and developers.
For those who work with clients directly, it’s better to give raises as a percentage of the salary. To define this percentage, compare the company’s cash flow and the employee’s income with the numbers at the moment your contract was signed. Based on how those figures have changed, you can find the right percentage and raise the salary.
Which management model to choose and when to raise a salary is the personal choice of every manager. Sometimes it seems that money can make specialists more loyal. However, if there is no interest in the work from the very beginning, the money will only become another useless drain on the company’s budget.
|
https://themakingofamillionaire.com/internal-motivation-of-employees-when-to-raise-the-salary-in-it-business-7b1169a58856
|
['The Room']
|
2020-02-20 14:28:32.875000+00:00
|
['Work', 'Jobs', 'Technology', 'Personal Finance', 'It']
|
2,036 |
what i like about instagram
|
Instagram, the image-sharing app created by Mike Krieger and Kevin Systrom from Stanford University, spins a tale of success capitalized the right way. Launched back in 2010, Instagram today boasts 700 million registered users, with more than 400 million people visiting the site on a daily basis. Out of those 700 million users, around 17 million are from the UK alone! When the two founders started talking about their idea, they quickly realised that they had one goal in mind: to build the largest mobile photo-sharing app. However, before Instagram, the two had worked together on a similar platform called Burbn. For Instagram to work, Krieger and Systrom decided to strip Burbn down to the bare essentials. Burbn was quite similar to Instagram and had features which allowed users to add filters to their pictures.
The social networking site Instagram reached one billion active users in 2019. The US-based video and image-sharing app is a success story that has unfolded since its launch in October 2010 by Stanford University students Mike Krieger and Kevin Systrom. Systrom majored in management science and engineering, while Krieger studied symbolic systems, a branch of computer studies combined with psychology. When the two founders met, they started discussing their idea for a new app and realised they shared a goal: to create the world’s largest mobile photo-sharing app.
Budding entrepreneur
Fellow students recalled Systrom as being naturally gregarious and a budding entrepreneur from a young age. He briefly ran a marketplace that was similar to Craigslist for fellow Stanford students. Krieger had different skills, and one of his university projects was designing a computer interface that could gauge human emotions. Prior to Instagram, they had collaborated on a similar platform called Burbn. They decided to strip it down and use it as the basis for Instagram. Burbn had features that enabled users to add filters to their photographs, so the duo studied every popular photo app to see how they could progress further. Eventually, they decided it wasn’t working and scrapped Burbn in favour of creating an entirely new platform. Their first effort was Scotch, a predecessor to Instagram, but it wasn’t a success, as it didn’t have enough filters, had too many bugs and was slow.
Once Instagram was released for Android phones, the app was downloaded more than a million times a day. Interestingly, the online social media platform was poised to receive an investment of $500 million. Furthermore, Systrom and Zuckerberg had been in talks about a Facebook takeover. In April 2012, Facebook made an offer to purchase Instagram for about $1 billion in cash and stock, with the key provision that the company would remain independently managed. Shortly thereafter, and just prior to its initial public offering, Facebook acquired the company for the whopping sum of $1 billion in cash and stock. After the Facebook acquisition, the Instagram founders did little to change the user experience, sticking to the simplicity of the app.
The remarkable rise of Instagram’s popularity proves that people believe in real connections rather than ones based only on words. Since the acquisition, Instagram’s founders haven’t made many changes to the user experience, preferring to stick to the app’s simplicity. Its rise in popularity proves that people enjoy the way the app works and like the image-based connections it provides. One of the most important lessons of Instagram’s success is that the founders didn’t waste time trying to save their original idea, Burbn. Once they decided it wasn’t going to work, they moved on quickly and invented Instagram. Systrom said its name was based on “instant telegram”. The app was launched at just the right time, and with only 12 employees initially, the user base had grown to more than 27 million before Instagram was sold to Facebook. Today, most celebrities use it as a platform for promotions and, with one billion users, it continues to go from strength to strength.
|
https://medium.com/@5frash-abah/what-i-like-about-instagram-d1b4bd52e67d
|
['Frashy Abah']
|
2020-12-17 11:49:49.450000+00:00
|
['SEO', 'Instagram', 'Technews', 'Technology', 'Success Story']
|
2,037 |
Most Honest & Helpful Reviews for Bowers&Wilkins PX7 Wireless Headphones — Curated by Rosi
|
How do I even start to share how awesome is the music coming out from my #bowerswilkins #px7? Definitely a legend on its own. My travel and audio editing buddy.
By @danieltaiwh, sourced from Instagram
It’s lighter than the PX and P7 thanks to the new carbon fiber design in the headband arms.
Battery life of the PX7 is up there with the best in this class. You can get up to 30 hours of playback with a 15-minute quick charge giving you 5 hours of playback. Also, the method of charging is via USB-C.
The PX7 has excellent noise cancelling, about as good as the Bose NC 700’s, though not quite at the level of the Sony XM3.
They have excellent detail, but there was something lacking in the mids (I suppose) where vocals seemed a bit washed out. Also, the sound quality took a dive when the ANC was turned on.
The ear cups fold in, but they do not sit flat relative to the headband, so if the headband is flat on a surface the ear cups are raised quite a bit. Carrying these on your commute will, IMO, be a burden, which is odd given that the key use case for ANC headphones is surely “on the move”.
|
https://medium.com/@rosi.reviews/most-honest-helpful-reviews-for-bowers-wilkins-px7-wireless-headphones-curated-by-rosi-e2c38509667f
|
['Rosi Reviews']
|
2021-09-09 23:51:52.142000+00:00
|
['Startup', 'Review', 'Headphones', 'Technology', 'Mobile Apps']
|
2,038 |
Why Efficiency Isn’t Always the Holy Grail in Customer Service
|
A recent market study by the CCW, entitled ‘The Future of the Contact Center in 2019’, brings up some interesting observations about the pursuit of efficiency in the customer experience and customer journey. Conventional wisdom dictates the importance of driving efficiency to boost Customer Satisfaction Scores (CSAT). Most experts agree on the need for initiatives to reduce customer effort. However, it shouldn’t be at the expense of the quality of the customer journey or how customer-friendly the business is perceived to be.
The need of the hour, as Adrian Swinsoe points out in his article, is to balance the human element and the efficiency-enhancing technology in the customer experience. Imbalance in either direction creates deficiencies. Here are some points to consider to ensure that the focus on efficiency doesn’t backfire or drive away customers in your business:
Over-reliance on Technology
An over-reliance on technology can significantly impact the way customers view your business. We’ve all encountered the frustration of having to deal with a business that doesn’t seem to have an actual human that we can talk to. Or situations where it took many days to get a response to a support ticket. When tech is not balanced with human interaction, multiple issues can crop up for consumers:
Dissatisfaction with a lack of human connection.
Anxiety at not having the solutions and the conversations they want to have to quickly resolve an issue.
The perception of not being important enough to the business.
Frustrations about having to deal with other forms of support that may not be suited to them, such as technical documents, which they may not understand.
While the customer support process may look efficient from the outside, the customers may not necessarily be having the type of experience they are looking for as part of the overall journey.
The Importance of the Human Touch
While businesses understand the need to integrate technologies to improve the efficiency of the customer interaction, there is consensus that the human touch has to be an equally important partner and co-facilitator. Therefore, businesses continue to invest in live chat and phone support, which lend themselves to human connection and the building of more rapport with the company. Ultimately, customers are looking for comfort and the freedom to choose their experiences.
According to the CCW study, ‘the idea of reducing agent effort (40%) is more popular than the idea of emphasizing customer effort (25%).’ When agents have to spend less time navigating tools, they have more time to focus on the customer. Having the human touch is paramount to customer-centricity.
The Power of Choice
Last but not the least, consumers must be given the power of choice. Right-channeling, or driving consumers into a pre-designed conversion process, isn’t always the best path, especially when it isn’t employed judiciously or with sensitivity.
At the end of the day, there are some important questions to ask when it comes to weighing efficiency against the larger interest of keeping the consumer happy and loyal:
Is there an adequate amount of personalization in the customer journey?
How easily can consumers source and manage information on your business?
How does your current level of efficiency impact your brand likeability and trust?
Do you have feedback systems in place where consumers can detail their experiences on what they liked and what they did not like?
How open are you to adapting to changes based on on-going feedback and engagement with consumers?
How is your current level of efficiency perceived by your consumers?
CH Consulting Group provides unparalleled expertise for today’s omni-channel contact center, driving exponential growth, astute change management, and enhanced profitability. Our team of Customer Experience (CX) Consultants, veterans in the field, understand the need for creating innovative and sustainable solutions that really work. For a comprehensive CX assessment and strategic plan customized for your unique business needs, connect with us here today.
|
https://medium.com/@chcg/why-efficiency-isnt-always-the-holy-grail-in-customer-service-eab6051632b6
|
['Ch Consulting Group']
|
2019-06-06 14:39:32.453000+00:00
|
['Call Center', 'Technology', 'Customer Service', 'Efficiency', 'Customer Experience']
|
2,039 |
Optimizing the Hiring Process for Hiring Remote Employees
|
Source : Manage HR Magazine
There are a plethora of tools available in the market to assist HR professionals in hiring the right candidate, and HR professionals must be willing to take a leap of faith if the Return on Investment (ROI) is good. The digital era is upon us, and hiring the most qualified and motivated professionals isn’t possible without the use of digital recruiting tools.
Fremont, CA: The cost of living across major tech hubs like San Francisco and New York is skyrocketing, and the market is becoming increasingly competitive. This has led to a rise in the demand for remote employees. Remote labor brings along a surplus of benefits, but at the same time can be risky if the right hiring practices aren’t in place. Organizations need to work on setting up the right system to take advantage of this contemporary labor dynamic. Having a significant remote labor pool helps reduce the overall office footprint and overhead costs. At the same time, this system also allows employers to hire some of the most talented minds in the country.
There are a plethora of tools available in the market to assist HR professionals in hiring the right candidate, and HR professionals must be willing to take a leap of faith if the Return on Investment (ROI) is good. The digital era is upon us, and hiring the most qualified and motivated professionals isn’t possible without the use of digital recruiting tools. Searching for candidates can be a time-consuming process. This is where most organizations fall short in the hiring process. Dedicating employees to the task of sorting and filtering through candidates can be a waste of human resources, as it may not be as productive as employing them for other essential duties. This is where organizations need to leverage the services provided by recruiting agencies. By outsourcing these tasks to a third party, organizations can continue to focus on their core business operations.
Regardless of how the candidate reached the company’s doorstep, it’s crucial to have a robust interview process in place. This is even more applicable in the case of remote employees. Organizations should be able to find candidates with the right qualifications for the job, especially in a labor-intensive market. Software like Google Hire can be used by employers to make the task easier. Google Hire allows interviewers to track the candidate at every stage of the process, schedule interviewer-candidate time, place job posts on multiple platforms, and make the overall process of reporting look like a cakewalk. Organizing all hiring operations under one software tool allows interviewers to make the most of their time with candidates, resulting in better quality interviews, and hence better hires. Most organizations practice interview structures where the candidate may go through three to as many as eight interview stages. This makes it necessary to have all the information associated with a candidate stored in one place, as multiple interviewers would need to access this information at various stages. The same becomes even more important when both the interviewer and interviewee are operating from remote locations.
Organizations need to build a strong hiring team that can sniff out the candidate’s skillset, demeanor, personality, communication skills, and cultural fit. This helps to filter candidates for the final stage of the hiring process. HR professionals should try to get the candidates excited about being a part of a high-pressure environment. Having an intense final round helps assess the candidate’s performance under pressure.
The interviewer is as essential as the candidate. Organizations should focus on having quality interviewers in place who can understand the dynamics of remote work. Let team leaders and managers conduct interviews. Having interviewers who will be responsible for the new hire helps to build a relationship even before the candidate is hired. Besides, the candidate would better understand the role and nature of the work offered when the interviewer is the future boss. At the same time, interviewers shouldn’t be afraid of asking bold and hard questions. They should be receptive to feedback and have a proven understanding of how to achieve success. This drives aspirations and ambitions among candidates, which often becomes the driving factor in the long run.
News Source : Optimizing the Hiring Process for Hiring Remote Employees
Check Out : Twitter | LinkedIn
|
https://medium.com/@emmabaker8018/optimizing-the-hiring-process-for-hiring-remote-employees-861854334dde
|
[]
|
2020-03-03 10:31:18.775000+00:00
|
['Employee', 'Hiring', 'Hrtech', 'Technology', 'HR']
|
2,040 |
ZERO Interest Club: Who and Why is Not Really Into Crypto?
|
When we hear people saying “ICO is a scam”, it does not even make news anymore. With the fraudulent coin offerings of the last year, everyone has sort of gotten used to such talk. However, there is a category of people whose opinion about cryptocurrencies actually matters. When the founder of Wikipedia goes to the media saying he has zero interest in ICOs, it catches attention.
Who else among the big cats doesn’t like ICOs, and why?
Jimmy Wales is not the only tech community leader to publicly display his mistrust of ICOs. In fact, it seems to be a thing now: every so often the CEO of a big company stands up and criticizes ICOs, blockchain, and cryptocurrencies. Who else doesn’t like ICOs and blockchain much, and why?
Google
The question that bugs the entire blockchain community is: why is Google not doing much of anything with blockchain? Having the status of one of the most innovative companies (if not the most innovative), it is at the very least strange that Google does not bother with taking blockchain to the next level.
The real wave of talk about Google’s attitude towards blockchain and ICOs started when the company followed Facebook’s example and banned cryptocurrency ads and ICO promotion. The web was tormented by a question: does Google not like ICOs very much?
There was no official proof of Google executives saying that they don’t particularly trust ICOs; however, it could be that actions speak louder than words. When a company like Google remains indifferent towards a big innovation and takes a step towards banning ICO promotion, it gives some nice food for thought.
Why could it be that Google doesn’t particularly like ICO?
Sergey Brin says, “We probably already failed to be on the bleeding edge, I’ll be honest.” He does not explain why it happened, but we are free to try and guess. Perhaps Google gave the technology time to prove itself, and while they were waiting, others were “moving fast and breaking things”.
That said, a CNBC publication indicated that Google still might join some very promising blockchain product in the future. Some say Google could seriously revisit its ICO ban as well. We’ll see.
Facebook
When Facebook banned crypto ads from the platform, there were all kinds of guesses about the company’s attitude towards the technology. Some said it was because Facebook doesn’t like ICOs very much. Others thought it indicated the company’s plans to start its very own cryptocurrency.
This ban became news right away — and rightfully so. (Image source: Cointelegraph)
Official Facebook’s explanation of the matter was brief:
“We want people to continue to discover and learn about new products and services through Facebook ads without fear of scams or deception. That said, there are many companies who are advertising binary options, ICOs and cryptocurrencies that are not currently operating in good faith.”
This, however, is not necessarily proof that Facebook doesn’t trust blockchain and initial coin offerings. The truth is, there is definitely a lot of scamming going on, so the decision is hardly questionable.
Now, of course, as the ban is gradually being reversed on the platform, we have even fewer reasons to conclude that Facebook doesn’t like cryptocurrencies.
Rumor has it, the company is even launching something big with the technology. The motives are quite transparent though: Facebook has a fear of missing out (but bear in mind, it’s just speculation).
The risk of missing out — just in case the crypto evangelists are correct and blockchain technology turns out to be bigger than the internet revolution — is too great to ignore.
WIRED’s expert Erin Griffith
J.P. Morgan
Jamie Dimon, CEO of J.P. Morgan, is known to hate Bitcoin but love blockchain (perhaps it would be wiser to include him not in the zero-interest group but rather in the love-hate club). Mr. Dimon has repeatedly said that Bitcoin is a fraud that will definitely be crushed by the government, but he also added that blockchain, as a technology, is very promising.
Moreover, J.P. Morgan’s tech team is even working on a huge blockchain solution. As the Financial Times reported, it could be an improved version of Quorum, a blockchain platform for cross-border payments and the settlement of derivatives.
The club is getting smaller
With the growth of blockchain and the development of the ICO market, it’s difficult for companies to maintain that zero-interest attitude. Inevitably they find themselves in a situation where it’s crucial to invest in blockchain whether they like it or not. Just in case.
In fact, Deloitte research found that, on average, 3 out of 4 companies consider blockchain a compelling tech solution for their business.
Deloitte surveyed 1,000 respondents representing worldwide companies with annual sales of $500 million and more. 34 percent indicated that they already have a blockchain solution developed and launched, and 41 percent said one would be launched in less than a year.
The amount of investment made in the technology, according to the Deloitte research
What do we make of these results?
Three things are apparent here.
1. We will see a lot of blockchain innovation this year or the next. Should be fun.
2. The competition is getting tough, and companies that wish to stay at the edge of technology have to innovate, and innovate now.
3. Blockchain is not going anywhere, even if several big companies and their CEOs remain skeptical.
This, however, is not entirely good news
2017 and 2018 have proven to be the years of blockchain madness, and as we saw from the statistics, the situation is likely to continue. While this development will definitely bring exciting improvements, we’ll also see a lot of empty promises and hear sales pitches with no meaning behind them.
Long Blockchain is just another proof that events did not develop the way anyone expected.
In January 2018, SEC Chairman Jay Clayton warned companies that from then on they would have to be far more careful about pinning the word “blockchain” onto everything left and right. This started plenty of discussions in the blockchain and ICO community, but after the beverage company Long Island Iced Tea renamed itself Long Blockchain and saw a stock increase of 500% in just one day, everyone knew something was not right.
All this token thing is unclear
A few years ago, when Triggmine was only at the beginning of its path, tokens resembled a kind of exotic fruit for most of us. And now, there are …
[Bet you can hardly imagine the number of different tokens circulating in today’s digital universe]
1771. That’s the number. At least, Coinmarketcap claims so. It turns out that the usage of these tokens is at least doubtful. “Buy our tokens,” they say. “Be a digital winner.” “Look to the future.” “Don’t miss your chance.” We all know how it works.
The major drawback of blockchain projects is that they have difficulty providing the mathematical and economic grounds to make their tokens attractive.
A final thought
To maintain a positive reputation for ICOs and blockchain, it’s logical to oversee whether business owners’ intentions are legitimate or not. One thing is apparent though: sooner or later the zero-interest club will disappear. Blockchain is not to be ignored anymore.
|
https://medium.com/hackernoon/zero-interest-club-who-and-why-is-not-really-into-crypto-cd53129cf64e
|
[]
|
2018-08-17 10:30:58.525000+00:00
|
['Technology', 'Tech', 'Blockchain', 'Cryptocurrency', 'Bitcoin']
|
2,041 |
Do we Need Math? Imagine our Life Without It
|
Do you remember your math classes in school? I definitely do, and on my mind there was always:
“Why do I need this formula? Where will I use that theorem?”
You have heard, or even asked, the same questions. After that, the teacher used to explain that we can use an integral to find the area and a derivative to get acceleration from speed.
“And what?” — the question always appeared but was never asked. Most people will never try to find the minimum or maximum of a function in daily life.
However, all of us need math; it’s essential in your career, your decision-making and any field of life.
The reason is abstractions
Do you remember that day when the teacher started writing letters instead of numbers? Was this easy for you?
Photo by Roman Mager on Unsplash
I was always pretty good at math, but it took me a lot of effort to stop fearing letters in formulas.
Why do I write fancy letters in formulas instead of numbers?
A familiar feeling? But the idea is that everything in math is an abstraction.
You understand what an apple is, but what is a number? Right, it’s an abstraction that helps us count apples. The same goes for any other idea in math, whether it’s multiplication or the Newton–Raphson root-finding algorithm.
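Since the Newton–Raphson algorithm is named above as one of those abstractions, here is a minimal Python sketch of it (my own toy illustration, not something from those school classes), finding the square root of 2 as the root of x^2 - 2:

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    # Repeatedly slide down the tangent line: x -> x - f(x) / f'(x).
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:
            break
    return x

# The square root of 2 is the positive root of f(x) = x^2 - 2.
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))

The same dozen lines find a root of any differentiable function you pass in, which is exactly the power of treating “a function” as an abstraction.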
Photo by Ben White on Unsplash
Math is abstract because numbers are not real entities. They are purely imaginary concepts. We cannot experience numbers. We can make up stories about them, such as “1+1=2”. But, we can never experience such operation since there is no such thing as ONE of anything in our experience. If there is no such thing as ONE of anything in our experience, there can be no such thing as TWO of anything in our experience, etc. When we do math, we are playing a game in a world of imagination. Like fairy tales, numbers are the characters in that imaginary world, while the operations are the activities that those fairy tale characters perform. And just like fairy tales, math can be beneficial, even though neither numbers nor fairy tale characters really exist.
Berj Manoushagian, Philosopher
Why are abstractions crucial?
Because it’s an amazingly powerful tool. Even more powerful than mathematics itself.
Photo by Hunter Harritt on Unsplash
Let me back up a little bit. Forget mathematics. When was the last time you made pasta? You know how to make pasta, right? Boil water, add pasta, wait a little bit, then take out the pasta. If you’re feeling fancy, add some other stuff.
Already, you’re taking advantage of abstractions. How do you know how long to boil that pasta? Isn’t it conceivable that one noodle of pasta differs in some way from another noodle of pasta, even in the same box? But you avoid those questions by abstraction. You don’t worry about the specific noodles of pasta you have, but you treat it like this abstract collection of stuff that you only need to boil for, say, 10 minutes.
Photo by Christine Sandu on Unsplash
As you go forward with your pasta making, you learn that there are yet other shapes. Maybe you boil those shapes differently, but you realise that they’re pretty interchangeable from a culinary point of view. In other words, if you have a recipe for bucatini all’amatriciana, and you have all the ingredients for the dish except you have spaghetti instead of bucatini, by the power of abstraction you can still make an excellent dish: spaghetti all’amatriciana.
It goes on: what if your all’amatriciana recipe calls for guanciale and you have none? You know that pancetta is pretty close, while tofu is not close at all. And on, and on. When you study cooking, you don’t even understand a recipe in terms of ingredients, you understand it in terms of roles. You identify the balance of sweet, salty, acid, fat, spiciness, umami, or other flavour categories. In other words, you understand a recipe abstractly. The abstract approach lets you identify solutions quickly.
|
https://medium.com/swlh/do-we-need-math-imagine-our-life-without-it-c458fda152b3
|
['Andrew Zhuravchak']
|
2020-01-19 18:08:02.315000+00:00
|
['Technology', 'Mathematics', 'Tech', 'Learning', 'Science']
|
2,042 |
The Future of Blockchain Technology
|
When Bitcoin and blockchain technology emerged nearly ten years ago, a new era began. The idea of organizing data in a decentralized manner means nothing less than giving power back to the people. Since then, many blockchain networks have been created, all with different properties. Although the technology itself is great, it faces some issues that will determine its future. In this article I would like to discuss those issues and what the future of blockchain might look like, seeking a way to combine the three future key technologies — Distributed Ledgers, Artificial Intelligence and Quantum Computers.
Energy efficiency
As many of you have surely heard, in a decentralized network it is mandatory to somehow achieve consensus between the participants (often also called nodes). Therefore, many algorithms have been developed to achieve that very goal. The most used one is Proof of Work. Miners have the task of solving a cryptographic puzzle in order to be able to create a block and append it to the blockchain. But to be honest, solving that puzzle serves no real purpose, except to force miners to, in a sense, waste their computing power. Wasting computing power means wasting energy, which does not seem too smart in a time like the one we live in now, where energy is a scarce resource. So other solutions had to pop up — and they did. For example, the Proof of Stake algorithm. But the problem with this one is that it gives power to the rich and makes them even richer. What other possibilities might exist to reach consensus in a decentralized network without wasting energy or making the rich wealthier than they already are?
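To make that puzzle concrete, here is a minimal Python sketch of a Proof-of-Work-style search (the difficulty value and block contents are illustrative, not those of any real network):

import hashlib

def mine(block_data, difficulty=4):
    # Try nonces until the SHA-256 hash of (data + nonce) starts
    # with `difficulty` zero hex digits: the "cryptographic puzzle".
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256((block_data + str(nonce)).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with some transactions")
print(nonce, digest)

Each extra zero of difficulty multiplies the expected number of hash attempts by 16, and that brute-force search is precisely where the wasted energy goes.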
Nowadays, artificial intelligence is used in many areas where a lot of computation needs to be done. So one option is to use a sort of Proof of Work algorithm in which miners do not waste their computing power, but provide it to an artificial intelligence solving important tasks. There is a consensus model based on that idea, called Proof of Cognitive Work. Using it might lead to a combination of two of the three future key technologies — blockchain and AI.
Scalability
Another issue that appears when dealing with blockchain is the problem of scalability. The blockchain, in simplified terms, is a list of blocks. Each block contains data, and data require storage. But the storage each block has is finite, so only a finite amount of data can be stored in one single block. In a cryptocurrency, those data are transactions. For transactions to be confirmed, they have to be part of a block. With an increasing number of network participants, the number of transactions grows. Thus, the time it takes for a transaction to be fully confirmed increases as well. This is nothing that any creator of a blockchain would desire. How do we solve that problem?
There are solutions for that! One of them has been proposed by the IOTA Foundation, which introduced the Tangle instead of a blockchain. In the Tangle system, each transaction stands on its own, and in order to be confirmed, it first has to confirm at least two other transactions. This means that transactions get confirmed more quickly the more participants there are in the network. For a more detailed introduction to IOTA’s Tangle, take a look at one of my previous articles, “How IOTA solves Blockchain’s scalability problem”.
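As a toy sketch of that approval rule (purely illustrative; real IOTA tip selection is weighted and considerably more involved):

import random

# Transaction id -> the two transactions it approves.
tangle = {0: [], 1: []}  # two genesis transactions approve nothing

def attach(new_id):
    # Every newcomer must approve two existing transactions,
    # so verification capacity grows with the number of participants.
    tangle[new_id] = random.sample(list(tangle), 2)

for tx in range(2, 10):
    attach(tx)
print(tangle)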
Another possible solution to the scalability problem is the so-called block lattice structure, which is used by the cryptocurrency NANO. It can be seen as a hybrid between a blockchain and a directed acyclic graph (which is the underlying structure of the Tangle). I will dedicate an article to NANO’s way of solving the scalability issue soon.
Quantum Computers
Let’s consider a future even further away than just a few years. A future in which technology has reached a standard so advanced that quantum computers no longer exist only in our minds. The most obvious difference between a quantum computer and a classical one is the fact that it can complete computational tasks faster than today’s computers. A lot faster. It does so by using the laws of quantum mechanics. In this model of the subatomic world, particles no longer behave in a deterministic way. A subatomic particle can, for example, be in many different states at one time — which is an extremely simplified description. Using this fact, computational operations can be done a lot faster. Now what is the issue concerning blockchain technology?
The blockchain is a cryptographically secured database. The cryptography used is based on the math of elliptic curves, to be more specific, on the discrete logarithm problem. This problem states that it is infeasible for classical computers to calculate the private key of a user knowing only his or her public key. But when it comes to quantum computers, the world looks a lot different.
The so-called Shor algorithm was designed to factor large integers into a product of primes, the hard problem on which the RSA encryption method rests. The Shor algorithm has also been slightly modified to be applicable to the discrete logarithm problem. To really run that algorithm, one needs a quantum computer of a kind that has not been developed yet. But the time will come. So how can we adapt to that change?
This is the problem that Post-Quantum cryptography researchers deal with. They work on cryptographic methods that may remain secure even when attacked with the immense power of a quantum computer. One of those methods involves using so-called Winternitz signatures (to which I may also dedicate an article). You may not believe it, but there already exists a cryptocurrency using that kind of Post-Quantum cryptography system — IOTA.
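Winternitz signatures belong to the family of hash-based one-time signatures. The following Lamport-style sketch (a simplified relative, not IOTA’s actual scheme) shows the core idea, namely that security rests on hash preimage resistance rather than on the discrete logarithm problem:

import hashlib, os

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    # Secret key: two random values per message bit; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message, sk):
    # Reveal one of the two secrets per bit, selected by the message digest.
    m = int.from_bytes(H(message), "big")
    return [sk[i][(m >> i) & 1] for i in range(len(sk))]

def verify(message, signature, pk):
    m = int.from_bytes(H(message), "big")
    return all(H(s) == pk[i][(m >> i) & 1] for i, s in enumerate(signature))

sk, pk = keygen()
sig = sign(b"post-quantum hello", sk)
print(verify(b"post-quantum hello", sig, pk))  # True

Note that each key pair may safely sign only a single message; reusing it reveals more of the secret values, which is why such schemes are called one-time signatures.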
|
https://medium.com/@schaetzcornelius/the-future-of-blockchain-technology-572461e76277
|
['Cornelius Schätz']
|
2019-02-12 11:33:56.820000+00:00
|
['Scalability', 'Distributed Ledgers', 'Future Technology', 'Blockchain', 'Quantum Computing']
|
2,043 |
Personal branding for Tech CEOs
|
Photo by davisuko on Unsplash
There has been a dominant Tech PR trend in the last couple of years — Personal branding and PR of Tech Executives. With this post, I would like to discover what makes personal branding for executives important now and how to do it the right way.
Where does the trend come from?
When I started working in Tech, and especially in the mobile industry in 2005, it was much easier for startups to differentiate. There were not that many products, technology wasn’t that advanced, and only the early adopters really knew the difference between Windows Mobile First Edition and Second Edition. Then companies discovered visual design and UX as their core differentiators.
Today though, in a world where generations grow up with technology, breathing and living it, where UX has become common knowledge and beautiful visual design “a must”, it becomes harder and harder to stand out.
Simultaneously, the adoption of social networks and the recent COVID-shift into Online life changed the way we get to know each other. Online-first or even online-only.
Employees, clients and partners, both potential and existing, are looking at what you’ve built, who built it, and how. And if your story is not coherent enough? They will know. Going into a first session with an executive client, I would first watch YouTube videos, read articles and Twitter posts. And your employees, partners, and clients do the same.
Tech, as an industry, grew up and became a real part of the creative industry. It used to be the prerogative only of fashion companies, music labels, visual artists and the like to build a brand around their founders (see Tory Burch, Alexander McQueen, Jay-Z, etc.). It is not a surprise that the CEO’s persona influences culture the most. So Tech needs to act as a part of the creative industry. Not only in terms of “sex, drugs, and rock’n’roll.”
Natural Resistance
Some of the Tech founders never thought they would end up in a situation where their personality becomes a unique differentiator for their company.
The majority of them started their companies because they wanted freedom, challenge, or were not really employable within standard hierarchies. They wanted to play with the new ideas in their own way. The real makers and geeks rarely understand the “unmeasurable” (in their eyes) nature of a personal brand and are very resistant to self-exposure.
What value branding brings to products and services is clear; PR for fundraising, clear. But how do I measure the value of my personal brand as the CEO, and why does it make sense to take valuable time away from other activities such as product, technology, hiring and finances? Why bother?
Somewhere down the line, they find their answer in hiring and employee retention, and as a result, the speed and quality with which the company can grow and innovate. It becomes important.
And not knowing how to approach the topic, they rely on their communication department and consultants.
Traps
There are two extremes that I see when it comes to the personal branding of the tech CEO:
thinking they have nothing special about themselves and, as a result, nothing to share;
oversharing: showing all sides of them, everywhere all the time.
I don’t think that we need to discuss those extremes; they are not where we want to land.
Another trap is to fully trust that their internal communication team and consultants will define their personal brand, aka “tell you who you need to be to present the company in the right way.” Advice helps, and it can inform the decision. You can’t outsource it fully, though. It is like hiring a nanny to fully take care of your kids and then wondering why the kids don’t build a trusting relationship with you personally.
In both cases (the extremes we find ourselves in and the responsibility we hand over to our comms people), we are missing a personal, deep inner discovery — aka getting to know yourself and understanding what you have to say, the way you want to say it, and whether it is valuable for the market and community you are in. It is not a workshop. It is a process.
Self-exploration at the core of the brand
Getting to know yourself might be quite a challenge for someone who has never done this work before. Personal branding is a skill; it is a muscle you build over time. It is a mix of self-awareness, empathy, and self-regulation.
A strong personal brand of the founder and the executive holds a balance between personal boundaries and authenticity. It is an art of knowing and showing who you are with a purpose. Understanding how you influence the system (company, market, any relationships really) and the system influences you.
The “HOW” or What to do next?
We spoke about “why” and “what.” Let us talk about “How”.
In order to find that balance, you need to embark on an interesting journey - a personal discovery. This discovery should look into who you are on the surface and the inside and how all parts of your personality intertwine.
Here coaching, and especially vertical leadership development, comes into play by expanding your meaning-making system, aka “how you see the world and make decisions”. (I won’t go deep into this topic; here is a good article for you.) It also informs what you might need to learn in order to formulate and present a coherent personal brand that also influences your company culture.
As a coach, I work and partner with several companies and Tech communication agencies that want to support executives and started looking into coaching as a valuable competence to help a client discover their own personal brand. It helps to unpack whatever the clients hold with them and select what the clients are ready to present to the world.
So when to start? As soon as the Series A hits your bank account. It will take time to define and grow into your personal brand; it will take time to iterate, get comfortable with the “stage” and grow into a new action-logic.
How to start? Find a coach that will enable this self-discovery or find an agency that partners with coaches or trains their own people to guide you through the journey, not drive the journey.
Last words
I want to come back to what I wrote earlier — building a personal brand is a process, not a single decision or a “result,” and needs to be treated like one.
We all have a personal brand, whether we want it or not. A weak personal brand informs the outside world about your company as much as a strong one does. There is no way to hide things anymore.
The strong personal brand of a founder or CEO influences and feeds the whole company, and it needs to be looked at as a strategic investment of your time and funds. And with the same rigor with which you build your product, you need to build your personal brand.
|
https://medium.com/@olgaskipper/personal-branding-for-tech-ceos-3c656d0e42d2
|
['Olga Skipper']
|
2020-12-01 10:30:17.978000+00:00
|
['Branding', 'CEO', 'Personal Branding', 'Personal Development', 'Technology']
|
2,044 |
Politicians Are Making Things Easy for COVID-19 Scammers
|
No one knows what tomorrow may bring, but I can say with absolute confidence that we’re going to be inundated with COVID-19 scams, and that the fear, uncertainty and doubt being spread by politicians is making those scams harder to detect.
By Max Eddy
The Coronavirus outbreak has affected millions of Americans. Most of us are stuck at home, apparently watching porn and not walking very much. Others are on the front lines pushing back against the outbreak, or at least lending their unused computer power. It’s a time of enormous uncertainty, and it’s only helping the bad guys that our politicians are putting out contradictory, confusing, and sometimes misleading information.
Fear, Uncertainty, and Doubt
In security circles, the concept of using “fear, uncertainty, and doubt,” or FUD, is often leveled against unscrupulous companies. A company is spreading FUD when it makes urgent claims there are boogeymen around every corner, and its product is the only thing that can save you. The boogeymen are often exaggerated and sometimes completely imaginary. FUD doesn’t even need to be particularly convincing; it just needs to create doubt in otherwise evident truths. In that context, FUD distorts reality to sell you a product, but we’ve all experienced some intense FUD reality distortion as of late.
My subjective reality about the severity of the pandemic situation shifted many times. I vividly remember hearing about the coronavirus for the first time in my kitchen, listening to the NPR Up First podcast. Since then, I (perhaps foolishly) attended the international RSA security conference in San Francisco, which has since resulted in several confirmed COVID-19 infections, and I continued to commute in Manhattan by crowded subway until March 11th. Since then, I have been working from my couch, kitchen table, and “standing desk” made out of a box of wine. I haven’t been more than 17 blocks from my home in more than a month.
That’s the kind of reality-bending that’s the background noise of any major natural disaster. Unfortunately, my own shifting experience has been mirrored by FUD from elected officials in the form of uncertain, contradictory messaging. On February 10th, Donald Trump said the virus would “miraculously” go away, and he has since veered wildly between somber warnings and impatient pronouncements. The Surgeon General first advised us to “STOP BUYING MASKS!” on the last day of February; then recently (although it seems like several months ago), Governor Andrew Cuomo of New York ordered that masks be worn in all public spaces when social distancing is not possible. The back and forth between New York City’s mayor and the state’s governor over whether or not to close schools, issue stay-at-home orders, and so on is a head-spinning timeline on its own. I could offer any number of examples since then, on everything from miracle drugs to bleach, but by the time you read this, those examples will be out of date.
The warped reality brought on by this absence of coherent advice and basic facts has created the perfect breeding ground for online criminals. When people think of cybercrime, an image of someone in a hoodie with fast-moving fingers and green text scrolling on a black screen probably comes to mind. But most cybercrime is more akin to a traditional con: It’s just easier to convince someone to give up their private information willingly than to try and steal the information from a server (where it should be encrypted anyway).
Cybercriminals thrive on emotion because it short-circuits our better judgment. Sporting events and holidays are popular targeting times for scammers, because people want to watch their favorite games and get free or cheap stuff. And rampant FUD makes scams harder to spot. You don’t need a well-made phishing site when a population is left confused and frightened from a natural disaster. We’ve already seen quite a few COVID-19-related scams pop up. With the dearth of testing in the US, some scammers claim to offer coronavirus testing. Google reports that it’s blocking 18 million COVID-19-related scam emails per day.
In one of his rare sensible moves, Attorney General William Barr sent a letter to US attorney offices encouraging the investigation and prosecution of coronavirus-related scams. In the letter, Barr called out ransomware posing as a COVID-19 tracker for Android devices, phony cures, and bogus calls for aid. Barr wrote, “The pandemic is dangerous enough without wrongdoers seeking to profit from public panic and this sort of conduct cannot be tolerated.”
That’s good, but reality is still dangerously warped.
And Now There’s Money
Tax season is another time that’s great for scammers. It’s a time when people are doing things they almost never do, like sending tons of personal information to organizations they almost never interact with, with the hope of a big financial reward in the form of a tax return. Money and fear are the key motivators for tax scams, along with a healthy dollop of confusion that comes anytime you have to deal with a faceless bureaucracy like the IRS.
Now that Congress has freed up money for just about every American, which is being disseminated by the IRS, the FUD of coronavirus is colliding with the annual fears and desires of tax season.
Unlike the rest of our political apparatus, the IRS was quick off the mark to identify potential scams. The $2 trillion aid package, signed into law on March 27th, included provisions for payments to individual Americans. On April 2nd, the IRS had an exhaustive post warning of various scams that could arise.
The post has lots of information and is worth reading if only to gain insight into the mind of a scammer. For instance, the IRS says to avoid any messages that, “emphasize the words ‘stimulus check’ or ‘stimulus payment.’ The official term is economic impact payment.” Also, as always, the IRS tends to communicate by USPS and not by email, phone, text, or social media.
But while the IRS works to put out good, clear information, it’s hampered in its efforts. An official IRS site to help you find your payment is frequently unavailable. Payments have been delayed or sometimes made in incorrect amounts. I am certain that scammers are watching and adapting their tactics.
The Best Disinfectant
Sunshine, unfortunately, may not kill off the coronavirus. Clear, authoritative messaging and accurate information given to the public, however, can greatly combat online scams by cutting through the FUD reality distortion. That information needs to come from all levels of elected government, but especially the White House, which has a unique position of authority in American life. Weeks into this disaster, however, the highest office in the land seems more interested in finding scapegoats, fomenting confusion over the stay-at-home orders that appear to be saving lives, and denigrating the media organizations that can deliver information on the scale required to protect us from scams and illness.
The end result is that it’s difficult to know what is real and who is telling the truth. The lack of reliable information from elected officials, particularly the White House, has created an enormous opportunity for criminals to prey on us. I’ve often written advice on how to spot these scams, but that kind of cool-headed analysis isn’t possible when going to the grocery store for food needs to be weighed against the risks of getting sick. It’s hard to check the sources when the information changes hourly, and we’re all desperate for protection.
Whether this unimaginable suffering could have been prevented is a debate better left to historians. But America’s haphazard, uneven response to COVID-19 is visible everywhere. There aren’t enough tests. Hospitals are overwhelmed. My home of New York City has seen more than 12,000 deaths (as of this writing) since the outbreak began. Illness and death have already touched my family, and by the end of this, most people will say the same. The virus is with us now and is likely to stay. Beating back the coronavirus will likely require years of work, but if elected officials can agree on the facts for just a short time and lay off the FUD, it could protect the most vulnerable of our population from opportunistic criminals.
Until that day comes, give yourself the time to think critically about the claims you read and hear and the messages you receive. We’re in this for the long haul.
|
https://medium.com/pcmag-access/politicians-are-making-things-easy-for-covid-19-scammers-df5f3de2c8b1
|
[]
|
2020-05-01 19:01:00.989000+00:00
|
['Technology', 'Covid 19', 'Cybersecurity', 'Scam']
|
2,045 |
Crypto & Us : Bite 2 (Blockchain)
|
Think of Blockchain as a public ledger where all the cryptocurrency transactions that you do are recorded. These are recorded on blocks, and these blocks are linked or ‘chained’ together, forming a string of blocks. Now every transaction needs to be verified, like they are in banks, except that here they are verified by a network of P2P computers spread globally, also known as ‘Miners’. Think of them as auditors who keep a check on the legitimacy of these transactions and in return are rewarded for the same in crypto. In this way, the whole process is ratified without any involvement of banks or financial institutions. Pretty cool, right?
How safe is blockchain, you may ask? Since the transactions on a block are verified by multiple networks or ‘nodes’, even if a hacker tries to alter one transaction, he would need to hack at least half of the distributed nodes, and even if he still managed to do that, since each block is linked to the ones before and after it, he would not be able to get past a single block. So the net result is a publicly secured network with almost no room for error.
Still need a reason to invest? The most effective and perhaps life-changing use case for blockchain technology is in countries where banking facilities do not reach all citizens or where inflation rates have soared to such an extent that their government currencies are worthless. Zimbabwe, Argentina and Lebanon are just a few examples from the long list of countries forced to switch to crypto and blockchain to survive. Transactions on blockchain are superior to the current system we have seen throughout our lives. Be it in terms of security, privacy or efficiency, blockchain technology is a tremendous alternative for people who want a more secure platform and control over their money and privacy.
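To see why an altered transaction cannot get past a single block, here is a toy Python sketch of hash-linked blocks (illustrative only, not any real network’s block format):

import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: each block stores the hash of the previous one.
chain = [{"tx": "genesis", "prev": "0" * 64}]
for tx in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append({"tx": tx, "prev": block_hash(chain[-1])})

# Tampering with an old block changes its hash, so the next block's
# stored "prev" no longer matches and the fraud is evident.
chain[0]["tx"] = "Mallory pays Mallory 500"
print(block_hash(chain[0]) == chain[1]["prev"])  # False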
PS: El Salvador, a small country in Central America, took the historic decision to make Bitcoin legal tender, which resulted in the whole country getting instant access to financial services and dramatically reduced remittance fees and transaction times compared to earlier wire transfers. If that was not enough, citizens will not pay any tax on capital gains; that means, as and when the value of Bitcoin rises, citizens benefit from the gains just by holding Bitcoin in their wallets.
|
https://medium.com/@kohliaditya/crypto-us-bite-2-1b9030651524
|
['Aditya Kohli']
|
2021-07-12 12:03:46.170000+00:00
|
['Decentralized Finance', 'Financial Freedom', 'Bitcoin', 'Cryptocurrency', 'Blockchain Technology']
|
2,046 |
Flying into the future: How drones are getting the job done during the pandemic
|
From contactless medicine delivery to helping kids learn STEAM principles that will guide their future careers, drones are a sustainable tool in the fight against the novel coronavirus.
Drones have been whirring into the civilian space at a breakneck pace.
Both recreational and industrial applications of drones have the exciting potential to make certain jobs easier, particularly those that would have previously required a costly helicopter trip — like aerial photography.
While these applications have been around the corner for several years, one new trend is the advantage gained by using drones in the middle of a contagious health pandemic. Because they can be operated remotely, drones can make certain trips obsolete and aid with social distancing, all while nonetheless enabling functional collaboration. Best of all, as low-impact monitoring devices, drones are actively conserving energy and contributing to more sustainable practices.
Fighting the virus while preparing for the future
From archaeology to construction, potential drone applications run the gamut. One use case that skyrocketed during the pandemic is the airborne delivery of groceries. A number of pilot programs are currently being run, such as Walmart’s in the United States and Tesco’s in Ireland. Drone delivery relieves people of the need to shop in-store, which is especially useful for protecting people who might be quarantining due to their status in high-risk groups. We’ve also seen successful medicine deliveries to remote locations.
Though COVID-19 accelerated development in this area, the delivery of essential items by drones will continue to evolve long after this is all over. The technology is a practical way for those with impaired mobility to get food and other products, and I’m forecasting that drone-based deliveries will be the driving factor in scaling up online grocery retail. It is only through the use of drones, once mature, that unmanned delivery of groceries can be achieved at scale.
With grocery delivery, there are challenges that go beyond the drone’s software, management and steering capabilities. Some residents have objected, citing noise complaints. Others are worried about safety issues or feel that they are being spied on by unmanned aerial devices. For commercial delivery by drone to really take off, improvements both to the hardware and to informing the public will be necessary.
On the hardware front, the technology must be completely safe — regardless of weather, the weight of the load and potential obstacles in the airspace. Further, noise emissions from air flowing over the blades need to be addressed. Commercial drones have taken major strides toward mitigating this issue, as their higher-quality aerodynamic blades help reduce the flow of air through the propellers and minimize vibration and sound.
These technological advancements aside, the widespread commercial application of drones such as for the delivery of groceries will only succeed if the public is informed and supportive. Residents need to feel confident that the technology is safe and silent — and that their privacy isn’t being compromised during delivery.
How kids are using drones to unlock career paths
Although they’re stuck inside, kids are still hungry for real-world experiences that keep them engaged. There is no better time than the present to empower youngsters with the skills they need to comply with the rules of the sky by encouraging responsible play and learning with purpose.
There are a number of educational kits on the market that teach kids to build, code and configure their own drone. Most of them follow a STEAM approach to learning, which brings science, technology, engineering, arts and mathematics into the real world. Such kits help build curiosity, dialogue and critical thinking, ultimately teaching children to take thoughtful risks, engage in experiential learning, persist in problem solving, embrace collaboration and work through the creative process. All of these skills set kids up to succeed in various areas of their future lives — and hopefully to solve some of the most pressing issues of our time.
As drone laws evolve, however, it is increasingly important to know when it’s OK to capture that cool beach shot while on vacation or what the rules are when flying over crowds at that travel baseball tournament or local festival. The fear of being watched can deter people from public participation and regulations often vary by jurisdiction.
Affordable, speedy and convenient surveying
Even uses that are not directly affected by the pandemic profit from drones’ inherent feature of being controlled remotely. They contribute to fewer trips having to be made, and enable collaborative work despite social distancing.
One of drone technology’s major benefits is the ability to survey huge swaths of land in very little time, which was previously only possible at a prohibitive cost. This feature is used in many different industries, from agriculture and construction to the protection of the environment.
City planners are using drones to evaluate potential building sites and monitor the use of urban spaces, while forestry and agricultural applications range from soil analyses to the detection of plant diseases to aerial spraying and seeding.
For these purposes, excellent flying capabilities and sufficient image quality alone are not enough; depending on the exact use case, drones will only fulfill their potential if the hardware is combined with other technologies such as GPS and artificial intelligence. While GPS is already being used to steer the devices and to match data with precise locations, there is still immense opportunity for artificial intelligence.
By leveraging AI in analyzing image and video data from the drones, users can detect anomalies such as leaks in water pipes or faulty power supplies. These kinds of AI-powered visual analytics can prevent shortages and outages and significantly reduce repair time. Images and videos can be live-streamed to multiple stakeholders’ devices, enabling smooth collaboration between individuals working from different places — a major asset in times of social distancing.
There’s also a lot of potential for automated image analysis in agricultural and environmental industries. By layering images taken at different times users can detect changes in soil quality, vegetation or large land features much earlier. Farmers can rely on these insights to take action on plant diseases, while environmentalists can map developments in natural habitats.
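As a hedged illustration of that layering idea, here is a minimal NumPy sketch that flags pixels which changed between two co-registered aerial images (the threshold and toy data are assumptions for illustration, not part of any particular product):

import numpy as np

def change_mask(before, after, threshold=0.2):
    # Flag pixels whose average channel difference exceeds the threshold.
    # Assumes both images are co-registered, same shape, values in [0, 1].
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff.mean(axis=-1) > threshold

before = np.zeros((4, 4, 3))   # toy 4x4 RGB scene from the first flight
after = before.copy()
after[1, 2] = [0.9, 0.1, 0.1]  # one patch changed before the second flight
print(change_mask(before, after))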
Innovators have long recognized the immense potential that drones offer, particularly when combined with additional technologies. But recent months have inspired and accelerated the development of even more use cases. The COVID-19 pandemic has shown that the delivery of groceries, medicine and items of daily use by drones is more than a matter of convenience — it can ensure the safety of individuals and entire societies.
|
https://medium.com/@thomas-falk/flying-into-the-future-how-drones-are-getting-the-job-done-during-the-pandemic-de562a5fa9cc
|
['Thomas Falk']
|
2020-11-18 16:49:28.416000+00:00
|
['Investing', 'Drones', 'Drones Technology', 'Sustainability']
|
2,047 |
GMOs- Can they really help the future, or propel us further back than the present?
|
Illustration by Heidi Wong
“Any politician or scientist who tells you these products are safe is either very stupid or lying” — David Suzuki
David Suzuki is an environmental activist, rocking 83 years, who earned his Ph.D. in Zoology from the University of Chicago in 1961. As a science broadcaster, he ridiculed the world’s governments for not calling the shots to stop global warming and many other problems. After some time, he declared the world was naturally stupid and started his own organization, which was to take his name.
You know people mean business when they put their own name for a whole organization. That’s the big stuff.
He continued his journey working as the head of one of the biggest Genetic labs in Canada, fiddling with the world's coding which makes us exist, the whole shebang.
Until doomsday happened.
David Suzuki, the man of science, turned his back on changing the world forever. Imagine how the conversation would’ve gone:
“Boy!”
“Yes sir?”
“BOY!”
“Yes, sir I’m right here-”
“BOYYYY!”
“Sir what is it?”
“Ah, there you are! So, uh, Imma head out after years and years of research and effort put into this. Good luck without a lead scientist, Byeeeeeeeeeee!”
Okay, fair enough. It wasn’t entirely out of a sudden.
The main reason Mr.Suzuki abandoned his work and studies in the early 1980s was that he believed the outcomes were too risky to even try. The dangers outweighed the benefits.
He was convinced it would ruin the world.
But then, why do we still use it?
It helps the future.
Illustration by Kurzgesagt- In a nutshell
GMO is a simpler way of saying “Genetically Modified Organism”. GMOs play a very important part in the field of Biotechnology. They are one form of artificial selection, where humans meddle with the internal DNA of species to create new “better” versions of them. Better at surviving, adapting, and serving purposes more effectively.
To explain it in simpler words:
If you were put into a scenario where you had the option to change everything about you for the better; increase height, strength, brainpower, how would you react?
Would you say yes, or no?
This is the dilemma with Genetic engineering.
Just… on a relatively bigger issue than your looks and grades in school: Food.
Genetic engineering, although practiced for years, is still a relatively new concept. In fact, for the foreseeable future, it might just be the solution that helps farmers around the world in the long run.
No scientists are ready to turn humans into creatures with twenty limbs, webbed skin to fly and gills to breathe underwater, able to survive even outer space. All that sci-fi you’ve read from time to time is still a very, very, very distant fantasy.
So, we stick to the simple things.
In 1994, tomatoes became the first fruit to be genetically modified for a longer shelf life, by suppressing an enzyme responsible for rotting. Tomatoes started living longer, got to marry, have kids, and enjoy life until plucked from their plants.
This discovery of increasing yield might not have a fancy scientist name behind it, or be the newest Newton law, but it had opened up many opportunities for all kinds of services and countries around the world.
Information by Backyard Farmers
In the USA, 50% of the soybean crop and 30% of the maize harvest consist of genetically modified plants that are resistant to herbicides and insect pests, reducing the need for chemicals (which improves our health) and cutting the costs previously spent on those chemicals.
If I had the choice between fruit kept fresh naturally in my salad and fruit injected with chemicals that could go full war on me, I would be inclined to choose the GMO.
Sadly, biology as a whole is weird.
It might seem ‘safer’ to avoid all man-made fertilizers and rely only on food makeovers, but those foods might make other foods jealous and could create really annoying products we really don’t need in our lives. Like superweeds. Now leveled up, they’ll just KO all the other plants in your garden.
This is also why we can’t just “poof” all our food to make it better. We can only practice our mixing talents on specific processes and plants, to eliminate as many possibilities as we can of superweeds taking over the world. That would be a new low for Earth after 2020.
However, GMOs can still help us. Another country leading with its crop army is Brazil. Its GMO fields make up almost 26 percent of the world’s total, trailing just behind the States. Being the largest exporter of soybeans and the second-largest of corn, Brazil keeps using its biotech crops for produce.
Apparently, according to ISAAA (International Service for the Acquisition of Agri-biotech Applications) reports from the past twenty years, Brazilians see GMOs as tech which can get rid of pests and increase production.
Believe it or not, this helped modernize Brazil into its current economy. As long as it took, farmers were eventually able to develop their secondary industries after having a built-in cheat code for their nutrients. Which country will rise to fame next?
illustration by Camtec Electrical Services
If we refer to the table of advantages and disadvantages above, it seems like GMOs are a double-edged sword. They might remove diseases, but start new ones.
It’s kind of like taking one step ahead but risking falling two steps behind.
We have countries like the USA and Brazil using science to its most usable potential, and some actively branding their food with a shiny gold sticker saying “No GMO crops were used in the making of this”, as if it were an achievement instead of a mark of embarrassment at not doing the same.
If the whole world can’t come to a decision of whether GMO crops are on Santa's good or bad list, where do we go from here?
It’s up to us to decide. Take a break from your work, sit down, and search the net for more information on what could possibly be the savior or death of humanity. Get your parents or kids with you, cuddle up in the living room, and watch some TED talks from experts in this.
Learn about the problems and benefits of genetically modified organisms.
Educate yourself more.
Our crew of writers has barely scraped the surface of this whole controversy. There are millions and millions of articles that give in-depth research and data for you to read about. Reports, case studies, interviews, you name it.
After all, GMOs, whether you like them or not, are as relevant as the election of a new president, or the mysterious death of a famous actor.
Aren’t you curious whether David Suzuki actually became a menace to science after all his earlier contributions, or was correct all along? Or whether countries are taking the effort to follow science for the betterment of the future, or had the wrong idea all along?
And most importantly, the main question we all have today: does everyone like a happy tomato in a sandwich more than a sad one?
Or not?
|
https://medium.com/@backyardfarmers-co/gmos-can-they-really-help-the-future-or-propel-us-further-back-than-the-present-f9820a369611
|
['Backyard Farmers']
|
2020-12-05 09:47:08.494000+00:00
|
['GMO', 'Biotechnology', 'Biotech', 'Genetics', 'Agriculture Technology']
|
2,048 |
Can collaboration deliver purpose and profit
|
Written by Caroline Hyde, CEO, Allia Future Business Centre
I heard Paul Polman, Unilever’s former CEO, speak at an event recently. Against a fevered backdrop of global frustration over climate degradation, increasing social inequality, political uncertainty and the rise of nationalism, Polman talked about bringing humanity back into business. “We’re in such a rat race to satisfy [short-term business demands] we’ve forgotten to do the things that are the foundations of society.”
But is it really up to business to provide the solutions to these global and local challenges?
In his annual letter to the CEOs of the firms in which they invest, Blackrock CEO Larry Fink followed up his 2018 call for companies to articulate and pursue their social and environmental purpose, by noting “society is increasingly looking to companies, both public and private, to address pressing social and economic issues”.
When we opened the first Allia Future Business Centre in Cambridge in late 2013, cleantech was an emerging cluster and we were one of the first incubators to champion social innovation. Much has changed over the past 5 years and it’s hugely gratifying to see the growth in social and tech for good ventures. In this time we’ve supported over 1,600 ventures to start and grow their businesses and deliver greater impact — addressing issues from food poverty to energy storage solutions, female education to small farm agritech. And while rubbing shoulders with these purpose-driven founders and teams continues to inspire me daily, it’s too simple to say that innovation and start-ups are the sole solution.
For me, the answer has to be rooted in collaboration — between governments, private corporates, civil society and start-ups — joining the missing dots and building lasting sustainable solutions. The world needs strong multi-sector partnerships to achieve the type of change envisioned by the UN Sustainable Development Goals. Achieving the Global Goals is unquestionably a moral imperative but also presents a significant commercial opportunity estimated at $12 trillion a year in revenue and cost savings and 380 million new jobs by 2030.
In difficult times the entrepreneurial spirit comes into its own. Founders are born problem solvers: time and again I see entrepreneurs fuelled by their own personal experiences to solve issues they have encountered. Necessity is the mother of invention after all, so it makes perfect sense that in times where there are many problems, consummate problem-solvers will come into their own.
But how do we ensure that social innovation and technology with purpose is fully supported? What do they need in order to easily start, confidently grow and successfully scale their impact and their financial sustainability? Collaboration. We need government to champion and support this level of innovation prioritising impact alongside GDP; we need big business to listen, adopt, innovate alongside, financially support and invest in new solutions; and we need civil society to articulate the needs and demands of those we need to find solutions for.
Collaboration is key. Allia has worked for over 20 years bringing together business, government, civil society and other organisations to accelerate solutions for the change we need — from pioneering community bonds in 1999, to launching the Retail Charity Bond platform enabling charities to access funding on the London Stock Exchange, to creating the first centre dedicated to supporting social and environmental innovation. As we launch a new programme next month — Future 20, designed to identify and scale 20 of the UK’s most promising tech for good ventures — a key aim has been connecting them with a network of multi-sector partnerships, all of whom are equally passionate about addressing the UN Sustainable Development Goals. Through such collaboration our individual ability to accelerate the impact of new solutions grows exponentially. And hopefully inspires the next generation to find further solutions.
As Paul Polman so eloquently said, “I believe in purpose and doing purpose very well will ultimately lead to profit. Businesses’ prime objective is to serve society, we need to make it better. On an environmental level that means regenerative; on a human level that means inclusion. “
www.futurebusinesscentre.co.uk | [email protected] | @ftrbusiness
|
https://medium.com/digitalagenda/can-collaboration-deliver-purpose-and-profit-57a22dea82cc
|
[]
|
2019-05-21 14:12:16.959000+00:00
|
['Tech For Good', 'Technology', 'Collaboration', 'Sustainable Development', 'Innovation']
|
2,049 |
Count Items in Python With the Help of Counter Objects
|
Count Items in Python With the Help of Counter Objects
The easy way to count objects in a data container
Photo by Djim Loic on Unsplash
The Premise
When we deal with data containers, such as tuples and lists, in Python we often need to count particular elements. One common way to do this is to use the count() function — you specify the element you want to count and the function returns the count.
Let’s take a look at some code for its use:
The count() function
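The embedded snippet isn’t preserved in this text; a minimal stand-in using a hypothetical list of scores:

scores = [95, 85, 95, 90, 85, 95]

# count() returns the number of occurrences of the given element
print(scores.count(95))   # 3
print(scores.count(85))   # 2
print(scores.count(100))  # 0 — an element not in the list counts as zero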
As you can see above, we used the count() function with a list of scores.
One thing to note: When the elements specified in the function aren’t included in the list, we’ll get a count of zero, as expected. If we want to count the occurrences of all the elements, we’ll have to iterate them, as shown in the following code snippet:
Count All Elements
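Again, the original snippet is missing from this extract; a minimal sketch of the idea (the variable names are my own):

scores = [95, 85, 95, 90, 85, 95]

# set() removes duplicates, so each distinct element is counted once
for score in set(scores):
    print(score, scores.count(score))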
Several things are worth highlighting in the above:
To avoid counting elements of the same value, we use the set() constructor to convert these iterables to set objects. This means duplicate elements are removed — the for loop will only go over distinct elements to get their correct cumulative counts.
The count() function doesn’t only work with list objects — it also works with tuples and strings. More generally, the count() function works with sequence data in Python, including strings, lists, tuples, and bytes.
As shown above, we have to use a for loop to iterate the elements to retrieve the counts for each individual element. It’s a bit tedious.
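As the article’s title suggests, collections.Counter from the standard library removes that tedium by counting everything in one pass. A minimal sketch:

from collections import Counter

scores = [95, 85, 95, 90, 85, 95]
counts = Counter(scores)      # counts every distinct element in one pass
print(counts)                 # Counter({95: 3, 85: 2, 90: 1})
print(counts[95])             # 3
print(counts.most_common(1))  # [(95, 3)]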
|
https://medium.com/better-programming/count-items-in-python-with-the-help-of-counter-objects-c08d8d486e45
|
['Yong Cui']
|
2020-08-07 15:43:04.087000+00:00
|
['Python', 'Software Development', 'Technology', 'Data Science', 'Programming']
|
2,050 |
Faster Algorithm for FRTB SBM Risk Aggregation
|
Faster Algorithm for FRTB SBM Risk Aggregation
Building regulatory risk application with python and atoti
In this post, I want to discuss a faster approach to computing the variance-covariance formulae prescribed by regulatory capital models — FRTB SBM and the CVA Risk Framework — as well as ISDA SIMM and internal sensitivity-based VaR-type models, and illustrate this approach with a sample implementation of SBM Equity Delta aggregation in atoti. I would like to thank Robert Mouat for sharing his ideas on multi-threading in the FRTB Accelerator and on the matrix formula optimization.
All of the above-mentioned methodologies use a series of nested variance-covariance formulae — see an example below — to compute a VaR-like risk measure. For the justification of the nested variance-covariance formulae please refer to “From Principles to Model Specification” document by ISDA SIMM, March 3, 2016.
One of the roll-up steps involves a high-cardinality operation — aggregating risk factors into buckets. Since there might be thousands of risk factors in certain risk classes — for instance, credit spreads and equities — brute-force application of the formula is expensive. And since we want to be able to recompute SBM dynamically, explore portfolios and apply simulations, the efficiency of the calculation is critical.
The naïve approach to the bucket-level aggregation has O(N²) time complexity:
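The formula image from the original post is not preserved in this extract; the bucket-level formula being referenced is, per [MAR21.4] (with WS_k the weighted sensitivities and ρ_kl the risk factor correlations):

$$K_b = \sqrt{\max\left(0,\ \sum_k WS_k^2 + \sum_k \sum_{l \neq k} \rho_{kl}\, WS_k\, WS_l\right)}$$

The double sum runs over all pairs of risk factors in the bucket, hence the O(N²) cost.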
The trick is to leverage the fact that many of the risk factor pairs use the same value of the correlation 𝞺kl — so that it can be taken out of the double sum and the formulae can be rearranged.
Equity Delta Bucket-Level Rollup Optimisation
We’ll group the pairs of equity delta risk factors sharing the same correlation value. The Equity Delta risk factor correlations are set in paragraph [MAR21.78] of the Consolidated Basel Framework and allow us to break the pairs as follows:
Group 1: same name, different type (spot/repo): constant value 0.999 per [MAR21.78]
Group 2: different name, same type: a single value depending on the bucket, for example, 0.15
Group 3: different name, different type: value depending on the bucket and multiplied by 0.999, for example, 0.15 x 0.999
Now let’s look at how the formulae can be rearranged for groups of risk factor pairs having the same risk factor correlation.
Let’s start with the pairs where both risk factors are of the same type — both spot or both repo. Since for any 𝑘 and 𝑙 the correlation 𝜌𝑘𝑙 will be equal to the correlation defined per bucket 𝜌𝑛𝑎𝑚𝑒𝑠, their contribution can be rewritten — the “reduced formula”:
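The “reduced formula” image is also not preserved here; the standard rearrangement for a block of risk factors sharing a single correlation ρ_names is:

$$\sum_k WS_k^2 + \rho_{names} \sum_{k \neq l} WS_k\, WS_l = (1-\rho_{names}) \sum_k WS_k^2 + \rho_{names} \Big(\sum_k WS_k\Big)^2$$

Only the sum and the sum of squares of the weighted sensitivities are needed, which is why the next line can claim linear complexity.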
This calculation has O(N) time complexity.
Python implementation
I used python and atoti to implement this formula as follows:
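The author’s atoti measure definitions live in the linked notebook rather than in this text; as a stand-in, here is a minimal plain-Python/NumPy sketch of the reduced formula (the function and variable names are my own):

import numpy as np

def reduced_contribution(ws: np.ndarray, rho_names: float) -> float:
    # O(N): only the sum and the sum of squares are needed
    total = ws.sum()
    total_sq = np.square(ws).sum()
    return (1.0 - rho_names) * total_sq + rho_names * total ** 2

# e.g. weighted sensitivities of the spot risk factors in one bucket
ws_spot = np.array([100.0, -40.0, 25.0])
print(reduced_contribution(ws_spot, rho_names=0.15))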
The contribution of the risk factors where one risk factor belongs to “SPOT” and the other belongs to “REPO” — case 3 above — can be rearranged as follows and computed with O(N) time complexity:
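The rearranged formula itself is an image in the original post and is not preserved here. A reconstruction consistent with the description below — writing s and r for the vectors of spot and repo weighted sensitivities aligned by equity name, and 0.999 for the spot/repo correlation — would be:

$$2 \times 0.999 \left[\rho_{names}\, s^{\top} J\, r + (1-\rho_{names})\, s^{\top} r\right]$$

Here s^T J r = (Σ_k s_k)(Σ_l r_l), so both terms are computable in O(N).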
where:
- J — is a matrix of ones,
- first term in the above formula performs aggregation of all sensitivities as if they all are correlated at 𝜌𝑛𝑎𝑚𝑒𝑠,
- the second term is to correct the first term and to account for the fact that risk factors, where spot and repo risk factors have the same equity name, must be correlated at 0.999.
This is my code snippet implementing the contribution of pairs having spot and repo risk factors — called “cross repo spot contribution”:
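As above, the actual atoti snippet is in the notebook; continuing the NumPy stand-in, a sketch of the reconstructed cross contribution (the assumption that the two arrays are aligned by equity name is mine):

def cross_repo_spot_contribution(ws_spot: np.ndarray, ws_repo: np.ndarray,
                                 rho_names: float, rho_spot_repo: float = 0.999) -> float:
    # First term: aggregate as if all spot/repo pairs were correlated at rho_names
    all_pairs = ws_spot.sum() * ws_repo.sum()
    # Second term: correct the same-name pairs up to the full 0.999 correlation
    same_name = float(np.dot(ws_spot, ws_repo))
    return 2.0 * rho_spot_repo * (rho_names * all_pairs + (1.0 - rho_names) * same_name)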
Combining the spot, repo and cross spot/repo pairs into the Kb measure and checking whether the bucket is 11 or not (see the Kb formula for bucket 11 in [MAR21.79]):
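Sketching that combination in the same stand-in style, building on the two helper functions above. My reading of [MAR21.79] is that for bucket 11 the Kb is the simple sum of absolute weighted sensitivities; that branch is an assumption worth checking against the Basel text:

import math

def kb(ws_spot: np.ndarray, ws_repo: np.ndarray, rho_names: float, bucket: int) -> float:
    if bucket == 11:
        # Bucket 11 ("other sector"): assumed simple sum of absolute weighted sensitivities
        return float(np.abs(ws_spot).sum() + np.abs(ws_repo).sum())
    variance = (reduced_contribution(ws_spot, rho_names)
                + reduced_contribution(ws_repo, rho_names)
                + cross_repo_spot_contribution(ws_spot, ws_repo, rho_names))
    return math.sqrt(max(variance, 0.0))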
Dynamic aggregation in atoti
I used the above rearranged formulae to implement measures for the on-the-fly aggregation with python and atoti, you can download my example here: Notebook example for Equity Delta SBM.
My sample data has about 1000 different names, and most of them sit in bucket #2,
and the whole SBM chain from sensitivities to weighted sensitivities to bucket level charges and equity delta risk charge is re-aggregated interactively:
Let’s look at how the chain of SBM measures can be implemented.
I’m using the atoti “where” function to check whether I should use the basic or the alternative formula for the Delta margin as prescribed in [MAR21.4(5)(b)(ii)]:
When Sb is equal to the net weighted sensitivity — as per the basic formula in MAR21.4(5)(a) — I’m using square_sum and sum aggregation functions in atoti:
To implement the alternative formula from [MAR21.4(5)(b)], I’m first computing the Sb and Sc and then aggregating them.
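For reference — the formula itself is an image in the original post — the across-bucket aggregation from [MAR21.4] that these measures implement is, in standard notation:

$$\text{Delta} = \sqrt{\sum_b K_b^2 + \sum_b \sum_{c \neq b} \gamma_{bc}\, S_b\, S_c}$$

with S_b = Σ_k WS_k under the basic formula, and S_b capped to the interval [−K_b, K_b] under the alternative in [MAR21.4(5)(b)].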
We’ve discussed the bucket-level calculation — “Kb” — in detail in the previous section, so the only measure that we haven’t discussed so far is the weighted sensitivity calculation, which is trivial:
Please refer to the notebook example if you wish to learn more about input data and model that I used.
Conclusion
In this post, we discussed how to rearrange the variance-covariance formula so that it can be computed more efficiently. Two additional notes on that:
|
https://medium.com/atoti/faster-algorithm-for-frtb-sbm-risk-aggregation-4d77f0895562
|
['Anastasia V Polyakova']
|
2020-12-15 02:52:10.131000+00:00
|
['Fintech', 'Technology', 'Regulation', 'Analytics', 'Risk Management']
|
2,051 |
Beginners Guide to Cloud Computing
|
Imagine you would like to train a deep learning model where you have thousands of images, but your system does not have a GPU. It would be hard to train large models without a GPU, so you will generally use Google Colab to train your model using Google’s GPUs.
Suppose your system’s storage is full, and you have important documents and videos that need to be stored securely. Google Drive can be one solution: it stores all your files, including documents, images, and videos, up to 15GB, and offers security and back-up.
The above-mentioned scenarios are some of the applications of cloud computing. One of the advantages of using cloud computing is that you only pay for what you use.
What is Cloud?
Cloud computing refers to renting resources like storage space, computation power, and virtual machines. You only pay for what you use. The company that provides these services is known as a cloud provider. Some examples of cloud providers are Microsoft Azure, Amazon Web Services, and Google Cloud Platform.
The cloud’s goal is to provide a smooth process and efficiency for the business, start-ups, and large enterprises. The cloud provides a wide range of services based on the needs of the enterprises.
Types of Cloud Computing
IaaS — Infrastructure as a Service
Here, cloud providers supply the user with system capabilities like storage, servers, bandwidth, load balancers, IP addresses, and the hardware required to develop or host their applications. They provide us with virtual machines where we can work. Instead of buying hardware, with IaaS, you rent it.
Examples of IaaS include DigitalOcean, Amazon EC2, and Google Compute Engine.
SaaS — Software as a Service
Most people use this daily. We get access to the application software itself, and we do not need to worry about setting up the environment or installation issues — the provider takes care of all of that.
Examples of SaaS include Google Apps and Netflix.
PaaS — Platform as a Service
It provides services spanning operating systems, programming environments, databases, testing, deployment, management, and updates, all in one place. It generally covers the full life cycle of the application.
Examples of PaaS include Windows Azure, AWS Elastic Beanstalk, and Heroku.
Credits: Microsoft
Benefits
Most enterprises are moving to the cloud to save money on infrastructure and administration costs, and most new companies are starting out in the cloud.
Image by Tumisu from Pixabay
cost-effective
If we are using cloud infrastructure, we do not need to invest in purchasing hardware, servers, computers, buildings, and security. We do not even need to employ data engineers to manage the flow. Everything is taken care of by the cloud.
scalable
One of the best things about the cloud is the ability to decrease or increase the workload depending on the incoming traffic to our webpages. If traffic is high, we can increase capacity by adding servers. If traffic suddenly starts declining, it is just as easy to release the added servers.
We have two options for providing flexible services depending on need: vertical scaling and horizontal scaling.
In vertical scaling, we add resources to increase a server’s performance, such as extra memory and processors.
In Horizontal Scaling, we add servers to provide a smooth process for sites when we have more incoming traffic.
reliable
In case of a disaster or grid failure, the cloud ensures that your data is safe and will not be lost. Redundancy is also built into the cloud, so there is an identical component ready to run the same task if one component fails.
global
The cloud has data centers all around the world. If you want to provide your services to a region far from your own, the cloud lets you do it with no downtime and lower response times.
Thank you for reading my article. I will be happy to hear your opinions. Follow me on Medium to get updated on my latest articles. You can also connect with me on Linkedin and Twitter. Check out my blogs on Machine Learning and Deep Learning.
|
https://medium.com/towards-artificial-intelligence/beginners-guide-to-cloud-computing-af2e240f0461
|
['Muktha Sai Ajay']
|
2020-10-18 00:03:07.434000+00:00
|
['Future', 'Technology', 'Cloud Computing', 'Information Technology', 'Computer Science']
|
2,052 |
Here Are The Most Controversial AI Moments of 2020
|
Here Are The Most Controversial AI Moments of 2020
List of Artificial Intelligence Controversies
Artificial intelligence has been the buzzword in 2020, and while the benefits of this technology are evident around us, AI has had its own share of controversies. From algorithms¹ unfairly discriminating against women in hiring to students complaining about unrealistic grades, there is no doubt that AI has evolved in 2020, and as 2021 beckons, it is time to take stock of what the year has been. With GPT-3, deepfakes, and facial recognition making headlines in 2020, there are many arguments surrounding privacy and regulations.
Photo by Bernard Hermant on Unsplash
In this article, I will explore the following controversial AI incidents in 2020 and explore the future prospects of artificial intelligence² and how 2021 is shaping up:
- Facial recognition
- Deepfakes
- AI-based grading system
- NeurIPS Reviews
- GPT-3
Facial Recognition
Clearview AI provides organizations, predominantly law enforcement agencies, with a database that is able to match images of faces with over three billion other facial pictures scraped from social media sites.
The company has recently been hit with a series of reprisals from social media platforms, which have taken a hostile stance in response to Clearview AI’s operations. In January, Twitter sent a cease-and-desist letter and requested the deletion of all data Clearview AI³ had harvested from its platform. YouTube and Facebook followed up with similar actions in February.
Clearview AI claims that they have a First Amendment right to public information, and defends its practice on the basis of assisting law enforcement agencies in the fight against crime. Law enforcement agencies themselves are exempt from the EU’s #GDPR.
Clearview has received multiple cease-and-desist orders from Facebook, YouTube, Twitter, and other companies over its practices, but it is not clear if the company has deleted any of the photos it’s used to build its database as directed by those cease-and-desist orders. In addition to the lawsuit in Illinois, Clearview is also facing legal action from California, New York, and Vermont.
Deepfakes
Deepfakes superimpose people’s faces onto existing bodies. While many look near-genuine, the technology still hasn’t reached its full potential. Still, experts have noted its misuse in pornography and politics.
The start of 2020 came with a clear shift in response to deepfake technology⁴, when Facebook announced a ban on manipulated videos and images on their platforms. Facebook said it would remove AI-edited content likely to mislead people, but added the ban does not include parody. Lawmakers, however, are skeptical as to whether the ban goes far enough to address the root problem: the ongoing spread of disinformation.
The speed and ease with which #deepfakes can be made and deployed, have many worried about misuse in the near future, especially with an election on the horizon for the U.S. Many in America, including military leaders, have also weighed in with worries about the speed and ease with which the tech can be used. These concerns are heightened by the knowledge that deepfake technology is improving and becoming more accessible.
Microsoft announced the release of technologies to combat online disinformation on its official blog. One of these was the Microsoft Video Authenticator, which analyzes a photo or video to provide a confidence score as to whether the media is fake. It has performed well on examples from the Deepfake Detection Challenge dataset.
AI-Based Grading System
The UK exam regulation department chose to start using an AI grading system in place of the A-level examination for university entrance, which was canceled. The U.K. has since dropped it after parents and students complained that it was unethical and biased against disadvantaged students.
Thousands of A-level students were given a grade that was lower than their teacher predicted, though, sparking a nation-wide backlash and protests on the streets of London. Now, the government has buckled and announced that it’s abandoning the formula and giving everyone their predicted grades instead.
Photo by Philippe Bout on Unsplash
The backlash to Ofqual’s algorithm⁵ was only matched by its complexity. The non-ministerial government department started with a historical grade distribution. Then, Ofqual looked at how results shift between the qualification in question and students’ previous achievements.
The number of downgrades wasn’t the only problem, though. The reliance on historical data meant that students were partly shackled by the grades awarded to previous year groups. They were also at a disadvantage if they went to a larger school, because their teacher’s predicted grade carried less weight.
At a time when society is examining how technology is reinforcing its race and class issues, many realized that the system, regardless of Ofqual’s intentions, had a systemic bias that would reward learners who went to private institutions and penalize poorer students who attended larger schools and colleges across the UK.
NeurIPS Reviews
This year, the thirty-fourth annual conference on Neural Information Processing Systems, NeurIPS 2020⁶, is being held virtually from 6th to 12th December. Paper submissions this year were up 38% from last year, and 1,903 papers were accepted, compared to 1,428 in 2019.
The review period began in July, and in August the popular #artificialintelligence conference sent out this year’s paper reviews. This has once again placed the popular machine-learning event amid controversy, as many claimed the reviews were terrible — unclear, or containing incomplete sentences from reviewers, among other problems.
This is not the first time that controversies have scarred the reputation of the conference. In other words, it can be said that controversies are not a new thing for this popular #machinelearning conference. In 2018, the organizers of the Neural Information Systems Processing conference had changed the event’s name from NIPS to NeurIPS after heading into a controversy about whether “NIPS” is an offensive name or not.
GPT 3
OpenAI released its latest language model in June, surpassing its predecessor GPT-2 with 175 billion parameters. It has raised many concerns about poor generalization, unrealistic expectations, and the ability to write human-like texts for nefarious purposes. Elon Musk, an OpenAI founder, also criticized OpenAI’s decision to give exclusive access to Microsoft.
Many advanced Transformer-based models have evolved to achieve human-level performance on a number of natural language tasks. Authors say the Transformer architecture-based approach behind many language model advances in recent years is limited by a need for task-specific data sets and fine-tuning. Instead, GPT-3⁷ is an autoregressive model trained with unsupervised machine learning and focuses on few-shot learning, which supplies a demonstration of a task at inference runtime.
Scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.
On NLP tasks, #GPT3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting is sometimes competitive with or even occasionally surpasses state-of-the-art.
Future Prospects
The important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. Designing smarter AI systems⁸ is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind.
By inventing revolutionary new technologies, such a #superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes super intelligent.
Works Cited
¹Algorithms, ²Artificial Intelligence, ³Clearview AI, ⁴Deepfake Technology, ⁵Ofqual’s Algorithm, ⁶NeurIPS 2020, ⁷GPT-3, ⁸Smarter AI Systems
More from David Yakobovitch:
Listen to the HumAIn Podcast | Subscribe to my newsletter
|
https://medium.com/towards-artificial-intelligence/here-are-the-most-controversial-ai-moments-of-2020-df795d1e6248
|
['David Yakobovitch']
|
2020-11-30 19:01:05.559000+00:00
|
['Technology', 'News', 'Artificial Intelligence']
|
2,053 |
The Story of how Natural Language Processing is changing Financial Services in 2020
|
The Story of how Natural Language Processing is changing Financial Services in 2020
NLP Applications in Financial Services
Natural language processing is transforming the financial services industry with banks using NLP for evaluating performance drivers and forecasting the market.
From market analysis, content reviews, and risk management, NLP is accelerating changes in the financial industry¹. The traction towards NLP in financial services is increasing with demand for BERT NLP growing among financial institutions.
NLP can be utilized to assess a wide range of speech and text data from different contexts. Additionally, NLP enables banks to automate and optimize tasks including amassing customer information and searching documents.
Credit: Xenon Stack
Banks can expect NLP solutions from AI vendors to extract data from both structured and unstructured documents with a reasonable level of accuracy. Accordingly, financial institutions need to be aware that data collected in the past from transactions and loan documents might not be useful for training #machinelearning models unless it is cleaned.
Overview of Natural Language Processing
The Bank of America is using natural language processing by leveraging this technology to become competitive in the market. Other banks including HSBC are following suit by using natural language processing to streamline operations and gain market insights.
According to Yahoo Finance, the natural language processing market will expand in 2020 with a growth rate of 19%, totaling $14B. The Alchemy Data tool from IBM² is changing the financial services experience by converting large information sets into insights used for decision-making.
Companies such as Green Key Technologies¹¹ have developed NLP solutions for the financial industry with their latest innovation around trading desks. Financial institutions use their tools in voice information and analysis of trading processes.
Why #naturallanguageprocessing in the financial services industry?
The answer is simple. Retrieving information from unstructured resources that financial institutions have problems accessing.
Banks need accurate information about their operations and NLP tools are changing the landscape by helping them make decisions based on customer and market trends.
1. Customer Management and Predictions
Financial institutions must deliver quality services to their clients and this means going the extra mile to understand customer information.
NLP is reviewing customer data³ including social interactions and cultures which helps them to customize services. For instance, NLP filters through social media information and detects conversations that may help them offer better services.
Credit: Analytics Insight
Stripe¹² is using NLP to explore customer information to identify interest areas that influence customers positively. Predicting customer needs is critical in the financial industry and Stripe is deploying #artificialintelligence and natural language processing to deliver better services.
2. Market Evaluation and Monitoring
One challenge facing banks is the lack of tools for reporting market conditions such as company news posted online or mentioned in business news. NLP is bridging this gap by supporting real-time dissemination of information about their services from customers and business partners.
A company with a bad reputation performs poorly in the market and NLP assists to anticipate these problems and address them.
The Alchemy Language tool enables financial institutions to track information about their operations in the market and make decisions. Developed under IBM’s Watson umbrella, Alchemy Language assists banks in exploring market trends⁴ and interactions around their services, which further supports the management process.
Unlike the past when banks took long to get the whole market view, NLP is streamlining the process through #data extraction tools.
3. Compiling Financial Reports
The financial services sector consists of volumes of information that pose challenges when reviewing transactions. Natural language processing⁵ is making the process easy through information filtering that helps financial analysts to access the right information.
JP Morgan adopted NLP with much success after the company faced problems in identifying key areas of their market operations.
Client communication in financial services is critical and NLP tools offer vital information to banks as they engage with customers. NLP systems predict and identify problem areas facing customers and this helps banks to develop policies around these challenges and serve them.
Banks make decisions based on NLP tools, which further accelerates the preparation of financial reports⁶.
4. Automatic Updates on Company Operations
Enterprises in financial services experience market changes when new hires arrive or key people exit the company, and NLP helps manage these events by informing banks of the market ramifications.
The stock market fluctuates or rises depending on company departures and NLP tools⁷ relay information to management for further action.
Banks look at the effects of staff reorganization on their share price and use NLP to facilitate the internal evaluation of operations to align with market expectations.
5. Risk Management
The success of companies in the financial industry depends on risk management procedures adopted and NLP is supporting in this area.
Fraud management is the first advantage of using NLP in financial services where banks monitor suspicious financial transactions⁸ and develop tools for addressing this problem.
NLP systems point to the risk areas and support communication across the financial organization about the impending risk. This further reduces the chances of incurring losses.
Chime¹³ is one banking institution with success in using NLP for fraud detection where the bank utilizes these tools in all transactions. According to the CEO of Chime, natural language processing is transforming financial services by reducing customer risks and offering value to investors.
Cases of fraud in the financial industry rose by 60% in 2019 alone according to a Pew Research poll and Chime is taking advantage of NLP tools.
Insider trading⁹ in financial services remains a major risk with banks losing revenues because of financial misconduct. Natural language processing offers an ideal platform for the management of trading activities by relaying updates based on company operations.
NLP pinpoints instances of insider trading before losses occur and safeguards the image of the business.
6. Stock Market Forecasting and Management
The stock market matters in the financial services and NLP tools are offering information about the behavior of stocks. For example, a bank can understand the current stock performance, forecast risks, and respond to market forces.
The Alchemy Data from IBM develops responses that enable banks to determine the performance of their stock.
A company needs to figure out ways of improving stock performance, and through NLP¹⁰ this becomes easier thanks to access to accurate information about the market.
Trading in the stock market fluctuates and responding to the problem requires technology solutions such as natural language processing which interpret data.
Natural language processing automation is helping banks and other financial institutions explore effective ways of managing their stocks with HSBC implementing NLP across all its operations.
By using NLP for market forecasting, HSBC explores stock market performance and offers recommendations based on prevailing market conditions.
7. Sentiment Analysis
Banks need information about their operations to remain competitive and reduce losses. Natural language processing reviews complex information within the financial services and offers accurate information including inconsistent data.
Unstructured information within a bank poses challenges when it comes to extracting insights and this is where NLP comes in. Equity performance is one area where banks need attention and NLP tools provide a clear analysis of operations.
The categorization of financial data by NLP is what makes this technology vital for banks in the current digital age. Overall, banks use NLP to measure and understand their operations based on variables such as customer demand and stock market performance.
8. Financial Variable Relationships
The #financialsector is adopting natural language processing because of determining relationships including revenues, stock earnings, value, and competition.
Graphical representations of these variables become easier by using NLP as banks can monitor and compare with the previous financial performance.
Regression analysis on financial graphs is one area benefiting from NLP as companies use the technology to determine the success rate in the market and detect financial misconduct as well.
By using NLP, banks establish connections between variables and use them to make strategic decisions. The entity modeling system from NLP has made relationships between variables convenient as banks can determine major areas affecting their operations.
The Future of Financial Services is Natural Language Processing
Advancements in natural language processing, such as voice solutions, are streamlining operations in the financial industry as banks use NLP tools to capture and convert voice and text information.
The same applies to the customer service department where financial institutions rely on NLP to track and understand customer insights. The ability to search through loads of financial information within a short time and with high accuracy makes NLP an important tool for the banking world.
#Textanalytics and voice recognition solutions powered by NLP have created new opportunities for banks to improve their services and offer value to the market. Before, banks incurred heavy costs for mining data because of the tedious task of searching through large data sets.
In this era of COVID-19, financial institutions are using information generated from NLP systems to evaluate the market and estimate risks. Natural language processing systems are assisting bank managers to measure the implications of the pandemic to their operations and support decision-making.
Where humans misinterpret information, natural language processing improves matters by scanning large volumes of data and interpreting them accurately. Unlike humans, NLP technology can scan large information sets within a short time, increasing efficiency for players in the financial industry.
Do you think NLP adoption in financial services is accelerating? Share your comments below to contribute to the discussion on The Story of how Natural Language Processing is changing Financial Services in 2020.
Works Cited
¹Financial Industry, ²Alchemy Data Tool from IBM, ³Customer Data, ⁴Market Trends, ⁵Natural Language Processing, ⁶Financial Reports, ⁷NLP Tools, ⁸Financial Transactions, ⁹Insider Trading, ¹⁰News API
Companies Cited
¹¹Green Key Technologies, ¹²Stripe, ¹³Chime
More from David Yakobovitch:
Listen to the HumAIn Podcast | Subscribe to my newsletter Online
|
https://medium.com/towards-artificial-intelligence/the-story-of-how-natural-language-processing-is-changing-financial-services-in-2020-8709cca3a100
|
['David Yakobovitch']
|
2020-12-19 13:04:56.855000+00:00
|
['Naturallanguageprocessing', 'NLP', 'Finance', 'Future', 'Technology']
|
2,054 |
Dealing With 5 Practical Issues in Machine Learning and Their Business Implications
|
Dealing With 5 Practical Issues in Machine Learning and Their Business Implications Appier Follow Jul 9, 2020 · 6 min read
Businesses today deal with more data and it’s arriving faster than ever before. At the same time, the competitive landscape is changing rapidly so the ability to make fast decisions is critical.
As Jason Jennings and Laurence Haughton put it “It’s not the big that eat the small… It’s the fast that eat the slow”.
Business success comes from making fast decisions using the best possible information.
Machine learning (ML) is powering that evolution. Whether a business is trying to make recommendations to customers, hone their manufacturing processes or anticipate changes to a market, ML can assist by processing large volumes of data to better support companies as they seek a competitive advantage.
However, while machine learning offers great opportunities, there are some challenges. ML systems rely on lots of data and the ability to execute complex computations. External factors, such as shifting customer expectations or unexpected market fluctuations, mean ML models need to be monitored and maintained.
In addition, there are a number of practical issues in machine learning to be solved. Here we will take a close look at five of the key practical issues and their business implications.
1. Data Quality
Machine learning systems rely on data. That data can be broadly classified into two groups: features and labels.
Features are the inputs to the ML model. For example, this could be data from sensors, customer questionnaires, website cookies or historical information.
The quality of these features can be variable. For example, customers may not fill questionnaires correctly or omit responses. Sensors can malfunction and deliver erroneous data, and website cookies may give incomplete information about a user’s precise actions on a website. The quality of datasets is important so that models can be correctly trained.
Data can also be noisy, filled with unwanted information that can mislead a machine learning model to make incorrect predictions.
The outputs of an ML model are labels. The sparsity of labels — where we know the inputs to a system but are unsure of what outputs have occurred — is also an issue. In such cases, it can be extremely challenging to detect the relationships between the features and the labels of a model. In many cases, this can be labor intensive, as it requires human intervention to associate labels with inputs.
Without accurate mapping of inputs to outputs, the model might not be able to learn the correct relationship between the inputs and outputs.
Machine learning relies on the relationships between input and output data in order to create generalizations that can be used to make predictions and provide recommendations for future actions. When the input data is noisy, incomplete or erroneous, it can be extremely difficult to understand why a particular output, or label, occurred.
2. The Complexity and Quality Trade-Off
Building robust machine learning models requires substantial computational resources to process the features and labels. Coding a complex model requires significant effort from data scientists and software engineers. Complex models can require substantial computing power to execute and can take longer to derive a usable result.
This represents a trade-off for businesses. They can choose a faster response but a potentially less accurate outcome. Or they can accept a slower response but receive a more accurate result from the model. But these compromises aren’t all bad news. The decision of whether to go for a higher cost and more accurate model over a faster response comes down to the use case.
For example, making recommendations to shoppers on a retail shopping site requires real-time responses, but can accept some unpredictability in the result. On the other hand, a stock trading system requires a more robust result. So, a model that uses more data and performs more computations is likely to deliver a better outcome when a real-time result is not needed.
As Machine Learning as a Service (MLaaS) offerings enter the market, the complexity and quality trade-off will come to greater attention. Researchers from the University of Chicago looked at the effectiveness of MLaaS and found that “they can achieve results comparable to standalone classifiers if they have sufficient insight into key decisions like classifiers and feature selection”.
3. Sampling Bias in Data
Many companies use machine learning algorithms to assist them in recruitment. For example, Amazon discovered that the algorithm they used to assist with selecting candidates to work in the business was biased. And researchers from Princeton found that European names were favored by other systems, mimicking some human biases.
The problem here isn’t the model specifically. The problem is that the data used to train the model comes with its own biases. However, when we know the data is biased, there are ways to debias or to reduce the weighting given to that data.
The first challenge is in determining if there is inherent bias in the data. That means conducting some pre-processing. And while it may not be possible to remove all bias from the data, its impact can be minimized by injecting human knowledge.
In some cases, it may also be necessary to limit the number of features in the data. For example, omitting traits such as race or skin color can help limit the impact of biased data on the results from a model.
4. Changing Expectations and Concept Drift
Machine learning models operate within specific contexts. For example, ML models that power recommendation engines for retailers operate at a specific time when customers are looking at certain products. However, customer needs change over time, and that means the ML model can drift away from what it’s designed to deliver.
Models can decay for a number of reasons. Drift can occur when new data is introduced to the model. This is called data drift. Or it can occur when our interpretation of the data changes. This is concept drift.
In order to accommodate this drift, you need a model that continuously updates and improves itself using data that comes in. That means you need to keep checking the model.
That requires collecting features and labels and reacting to changes so the model can be updated and retrained. While some aspects of the retraining can be conducted automatically, some human intervention is needed. It’s critical to recognize that the deployment of a machine learning tool is not a one-off activity.
Machine learning tools require regular review and update in order to remain relevant and continue to deliver value.
5. Monitoring and Maintenance
Creating a model is easy. Building a model can be automatic. However, maintaining and updating the models requires a plan and resources.
Machine learning models are part of a longer pipeline that starts with the features that are used to train the model. Then there is the model itself, which is a piece of software that can require modification and updates. That model requires labels so that the results of an input can be recognized and used by the model. And there may be a disconnect between the model and the final signal in a system.
In many cases when an unexpected outcome is delivered, it’s not the machine learning that has broken down but some other part of the chain. For example, a recommendation engine may have offered a product to a customer, but sometimes the connection between the sales system and the recommendation could be broken, and it takes time to find the bug. In this case, it would be hard to tell the model if the recommendation was successful. Troubleshooting issues like this can be quite labor intensive.
Machine learning offers significant benefits to businesses. The ability to predict future outcomes in order to anticipate and influence customer behavior and to support business operations are substantial. However, ML introduces a number of challenges to businesses. By recognizing these challenges and developing strategies to address them, companies can ensure they are prepared and equipped to handle them.
* The author of this article is Dr. Shou-De Lin, Chief Machine Learning Scientist, Appier
|
https://medium.com/appier-blog/dealing-with-5-practical-issues-in-machine-learning-and-their-business-implications-cc8e75a8ef8d
|
[]
|
2020-07-09 16:37:36.788000+00:00
|
['Artificial Intelligence', 'Technology', 'Machine Learning', 'Data Science', 'Data Analysis']
|
2,055 |
Notes on Deep Learning — Data Loader
|
Deep learning is a success because of big data.
When it comes to machine learning or deep learning, 80% or more of the time is spent wrangling data.
The effort a machine learning engineer or data scientist spends just getting the data into shape — preparing it and cleaning it of abnormalities — is immense. The wrangling part is non-trivial and among the most important steps, so it can’t be skipped. Wrangling, in fact, is given special attention as it affects performance the most. This chapter is often revisited a couple of times to be revised.
Deep learning loves noise and learns better with it, but it cannot be denied that one still has to prepare the data. Loading data is one of the inevitable tasks, and PyTorch helps us load and preprocess our non-trivial datasets.
It also makes code look nice^^ so why not do it better :)
Until this point in the series, we skillfully iterated over our data with a nice hand-written while loop. This was simple and efficient, but we could do a lot more than a simple iteration over our data. In particular, we could:
Create batches
Shuffle the data
Load data in parallel using multiprocessing
Access easy-to-use functions to iterate so we don’t have to worry
Use state-of-the-art and common standards across all the programs we develop
All these features are easily accessible if we use torch.utils.data.DataLoader
A dataset
A dataset is your data. In PyTorch it is represented by an abstract class with the functions:
__len__, so that len(dataset) returns the size of the dataset.
__getitem__, to support indexing such that dataset[i] can be used to get the ith sample.
If we need to write our own custom dataset, we need to override the above methods, as in the sketch below.
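A minimal sketch of such a custom dataset wired into a DataLoader — the tensors here are made-up toy data:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Wraps in-memory tensors of features and labels."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        # len(dataset) returns the size of the dataset
        return len(self.features)

    def __getitem__(self, i):
        # dataset[i] returns the ith sample
        return self.features[i], self.labels[i]

dataset = ToyDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
# batching, shuffling and parallel loading come for free
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

for batch_features, batch_labels in loader:
    pass  # batches arrive shuffled, assembled and loaded in parallel for us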
|
https://medium.com/datadriveninvestor/notes-on-deep-learning-data-loader-d93ab79a631a
|
['Venali Sonone']
|
2019-07-30 03:05:55.240000+00:00
|
['Machine Learning', 'Technology', 'Deep Learning', 'Data Science']
|
2,056 |
Takeaways from Blockchain Conference in Business 2020
|
The Blockchain Conference in Business was held last Wednesday 2/12 as a result of a collaboration between the Hellenic Blockchain Hub and Boussias Communications.
The feedback we received, as exhibited in a high Net Promoter Score for an online event (59), was very positive.
We had a stellar line-up of 23 guest speakers that maintained the interest of the audience for more than 7 hours.
More than 150 executives from various organizations of the private and public sector attended the event.
HBH will continue to activate the Greek ecosystem around this new technology as there is significant progress in the digital maturity levels in all sectors of the economy and European efforts intensify.
Some takeaways from my notes during the day:
· The cross-combination of industry and finance domains create totally new business models
· DLT is a GPT (General Purpose Technology)
· EMT (E-money Token) will be key for the future of crypto asset investing
· Main reason for having a digital Euro is the advent of the Machine Economy based on IoT
· Four major areas of DLT business models: smart contracts, securities, record keeping, digital currency & fraud detection
· What ERP is for one company, blockchain is for a network of enterprises
· We are still at the beginning of the curve towards mainstream and scalable use
· Scalability is still challenging since we need better automation, data governance and interoperability
· We need to take some gradual and solid steps to move forward
· Current roles of the energy sectors cannot be disrupted when transitioning into a new technology in the operating model
· Tokenization-as-a-service is here combining legal and tech aspects
· Smart city applications are numerous from identity and law enforcement to e-voting, land registry and mobility and mostly based on the token economy model
· More than 15 blockchain H2020 projects are currently in progress in Greece, e.g. in energy, diplomas, e-voting etc
· Information silos need to be broken down in the digital age
· Supply chain offers one of the best fits for applications
· DeFi (Decentralized Finance) and CBDC (Central Bank Digital Currencies) are two of the most evolving areas
Photo taken from the closing Panel “Deep Dive: Public and private sectors leveraging DLT and what’s next by Hellenic Blockchain Hub” with our Board members Aggeliki Dedopoulou, George Panou and Marinos Xynarianos and the Head of Software Development Department at ASEP Panagiotis Zarafidis, who brought an interesting perspective of how the public sector experiments with this new technology.
On behalf of the Hellenic Blockchain Hub I would like to thank everyone that made this possible, especially Angeliki Korre from Boussias Communications and our key board member George Panou, who made some top speaker references and connections.
See you in our next large-scale event!
|
https://medium.com/@kostaskalogerakis/takeaways-from-blockchain-conference-in-business-wed-2-12-c9e11ae56ba3
|
['Kostas Kalogerakis']
|
2020-12-06 11:00:37.823000+00:00
|
['Blockchain', 'Blockchain Development', 'Blockchain Technology', 'Hellenic Blockchain Hub']
|
2,057 |
Amazon Proves That a Competitive Culture Beats an Anti-Competitive Policy, Every Time
|
Once more, titans of industry have fallen under censure for perceived monopolization and the abuse of their considerable power. But this time, their names aren’t Carnegie, Rockefeller, or Vanderbilt, but Bezos, Zuckerberg, Pichai, and Cook.
In recent weeks, all four have faced hard questions about perceived corporate misbehavior. The concerns directed towards each corporate icon may differ according to the specifics of their company’s actions, but all ask the same essential question: Can massive tech companies keep themselves from intimidating or using the small businesses that increasingly rely on their platforms to survive?
In late July, the House Judiciary Committee convened a hearing to address the matter. The event marked the culmination of an extensive antitrust investigation that encompassed over a million corporate documents and hundreds of hours of personnel interviews. One reporter for the Verge characterized the hearing as “one of the biggest tech oversight moments in recent years.” Representative David Cicilline, the Commercial and Administrative Law Subcommittee Chair, made the subcommittee’s belief in the importance of the hearing clear at its outset.
“Because these companies are so central to our modern life, their business practices and decisions have an outsized effect on our economy and our democracy,” Cicilline said. “Any single action by any one of these companies can affect hundreds of millions of us in profound and lasting ways.”
Cicilline further argued that each of the four tech companies under investigation — Amazon, Facebook, Google, and Apple — comprised a crucial channel for distribution, such as an app store or ad venue, and uses monopolizing methods to purchase or otherwise block potential competitors. He also noted that the companies all either show preference to their branded products or create pricing schemes that undermine third-party brands’ abilities to compete.
As you might have already guessed, each case has a wealth of associated information and considerations. Recapping them, let alone providing commentary, would be challenging at best. So, instead, I want to consider the question of whether or not a business can be both a market ecosystem and fair competitor through the context of one business: Amazon.
Amazon fell under fire earlier this year, when the Wall Street Journal released a stunning report that the e-retailer had used data from its third-party sellers — data that was believed to be proprietary — to inform the development and sale of competing, private-label products.
This revelation sent shockwaves through the business community, despite the fact that it wasn’t entirely unanticipated; according to reporting from the Verge, the European Union’s main antitrust body claimed that it was “investigating whether Amazon is abusing its dual role as a seller of its own products and a marketplace operator and whether the company is gaining a competitive advantage from data it gathers on third-party sellers” in 2019.
Amazon has pushed back on these concerns, claiming that it has policies that forbid private-label personnel from obtaining specific seller data. However, the Wall Street Journal’s interviews of former and current employees found that the rule was inconsistently enforced and overlooked so often that the use of third-party, proprietary data was openly discussed in product development meetings.
“We knew we shouldn’t,” one former employee said while recounting a pattern of using seller data to launch and bolster Amazon products. “But at the same time, we are making Amazon branded products, and we want them to sell.”
And therein lies the core of the problem. Amazon is a company that maintains a laser focus on success — even to the point that its employees are willing to circumvent policy for its sake. But we can’t blame the employees, not entirely. The tech industry has long been known for its move-fast-and-break-things attitude, and Amazon more than most; the e-retailer’s obsession with achievement is near-legendary.
In 2015, New York Times reporters Jodi Kantor and David Streitfeld published an exposé that painted Amazon’s culture as one specifically designed for intense, high-output, and unforgiving efficiency.
“Every aspect of the Amazon system amplifies the others to motivate and discipline the company’s marketers, engineers and finance specialists: the leadership principles; rigorous, continuing feedback on performance; and the competition among peers who fear missing a potential problem or improvement and race to answer an email before anyone else,” Kantor and Streitfeld described.
“The culture stoked their willingness to erode work-life boundaries, castigate themselves for shortcomings (being ‘vocally self-critical’ is included in the description of the leadership principles) and try to impress a company that can often feel like an insatiable taskmaster.”
The article even noted that Amazon holds yearly firing sessions (dubbed “cullings” in the exposé) to shed those who don’t perform up to its notoriously high standards. Illness, parenthood, and even family loss — none were considered excuses for lapses in performance.
Given the stressful environment and achievement-at-all-costs mentality, is it any surprise that employees would sneak around a barely-enforced policy to obtain data that will help their projects succeed? I would say no.
In a culture that positions cutthroat competitiveness as a professional survival mechanism, an anticompetitive policy is little more than flimsy caution tape: readily seen, easily circumvented, and meant more to provide plausible deniability than to prevent anyone from breaking the rules.
And, of course, we have to acknowledge the point that a company that periodically culls its staff for the sake of efficiency wouldn’t mind pushing blame onto a worker who happens to get caught. Bezos already did so in his hearing. He testified, “What I can tell you is we have a policy against using seller-specific data to aid our private label business but I can’t guarantee that policy has never been violated.”
Another hearing exchange between Cicilline and Bezos is particularly telling.
Cicilline asks, “Isn’t it an inherent conflict of interest for Amazon to produce and sell products that compete directly with third party sellers, particularly when you, Amazon, set the rules of the game?”
Bezos responds: “The consumer is the one making the decisions.”
But how is that an appropriate response, when the data Amazon collects allows the e-retailer an unfair advantage to design and market products designed to outstrip the competition? It remains to be seen whether legislators will ultimately choose to spin off Amazon marketplace from its Basics line, but Amazon has proven beyond a doubt that it is naive to believe that a company that was built with a crush-the-competition mentality should be trusted with safeguarding smaller, vulnerable competitors’ proprietary data.
Company culture beats policy, every time.
|
https://medium.com/digital-diplomacy/amazon-proves-that-a-competitive-culture-beats-an-anti-competitive-policy-every-time-b3158bedd0e4
|
['Bennat Berger']
|
2020-10-16 19:30:59.193000+00:00
|
['Technology', 'Work', 'Culture', 'Amazon', 'Business']
|
2,058 |
Design & the military: a love story
|
Collage by Vittoria Casanova.
By Vittoria Casanova
We usually don’t ask ourselves many questions about the objects surrounding our lives. Aside from simple function and aesthetics, we don’t think about an object’s history, or why the products and services we use every day have been designed in the way we know them. When you think about design, you wouldn’t initially associate it with war. But, looking back at the history of design and invention, it seems that war has been the main and most important catalyst for the research, discovery, and implementation of many new solutions and technologies.
The reason might be found in the large amount of funding that governments allocate to military and defense departments. Just to give you an idea, DARPA (the Defense Advanced Research Projects Agency), responsible for developing emerging technologies for military use, has an average annual budget of three billion USD. Yes, three billion per year!
Here are a few intriguing stories about common products and services that have been catalysed by war.
The grandmother of the Internet was called ARPA, short for Advanced Research Projects Agency. Its initial purpose was to enable researchers to communicate and share knowledge and resources between university computers over telephone lines.
ARPA was born during the Cold War, when the US was worried about the Soviet Union destroying its long-distance communications network. The US urgently needed a computer communications system without a central core, one that could be used wirelessly and remotely and would therefore be much more difficult for enemies to attack and destroy.
ARPA then started to design a computer network called ARPANET, which would be accessible anywhere in the world using computing power and data. “Internetworking”, as scientists called it, presented enormous challenges: getting networks to ‘talk to each other’ and move data was like speaking Chinese to someone who can only understand Turkish. The Internet’s designers needed to develop a common digital language to enable data sharing, but it had to be a language flexible enough to accommodate all kinds of data, even types that hadn’t been invented yet.
The Internet seemed like an extremely far-fetched idea, near impossible to design. But, in the spring of 1976, they found a way. The Internet went from being an obscure research idea to a technology that’s now used by over 4.2 billion people. And, it took less than forty years.
The Global Positioning System, commonly known as the GPS, also has its origins in the Sputnik era.
The idea for the GPS emerged in 1957, when American scientists were tracking the launch of the first satellite to orbit Earth, a Russian spacecraft called Sputnik. They noticed the frequency of the radio signal from Sputnik got gradually higher as the satellite approached, and lower as it moved away. This was caused by the Doppler Effect, the same effect that makes the pitch of an ambulance siren rise or fall as it moves towards or away from an observer. This provided great inspiration: satellites could be tracked from the ground by measuring the frequency of the radio signals they emitted, and, in turn, the locations of receivers on the ground could be determined from their distances to the satellites.
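To make the physics concrete, here is a minimal sketch (my addition, not from the original article) of the classical Doppler relation those scientists exploited. The function name and sample values are illustrative assumptions: Sputnik broadcast near 20 MHz, and 7.8 km/s stands in for the radial velocity of a satellite in low Earth orbit.
// Observed frequency of a moving radio source (classical Doppler).
// f0: emitted frequency in Hz; v: radial velocity in m/s,
// positive when the source moves away from the observer.
const SPEED_OF_LIGHT = 299792458; // m/s
function observedFrequency(f0, v) {
  return f0 * (SPEED_OF_LIGHT / (SPEED_OF_LIGHT + v));
}
console.log(observedFrequency(20e6, -7800)); // approaching: ~20,000,520 Hz
console.log(observedFrequency(20e6, 7800)); // receding: ~19,999,480 Hz
Measuring how this shift changes over a pass is what let the scientists infer the satellite’s position, and, run in reverse, a receiver’s position on the ground.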
Drones, also known as unmanned aerial vehicles, are another great example. These are aircraft with no onboard crew or passengers, which can be either automated or remotely piloted. The initial idea first came to light in 1849, when Austria attacked Venice with balloons loaded with explosives. While a few balloons reached their intended targets, most were caught in shifting winds and blown back over Austrian lines. From there, it was clear that better aerial technology, which could be controlled remotely, was desperately needed.
Last, but not least, a simple item that we use very often: tape. Duct tape was originally invented by Johnson & Johnson’s pharmaceutical division during WWII for the military. The soldiers specifically needed a waterproof tape that could be used to keep moisture and humidity out of ammunition cases. This is why the original duct tape only came in army green.
Many more examples can be found in various other mundane products: microwaves, digital cameras, superglue, canned food, and penicillin, just to name a few.
It’s also interesting to see that these military-born technologies can even be found in three of our INDEX: Award 2017 winners: Ethereum — a decentralised digital network, commonly referred to as Internet 2.0; what3words — a new GPS system using three-word addresses; and Zipline — a medical supply delivery chain using drones. But let’s hope that in the future we won’t need to rely on war for more great solutions to emerge.
|
https://designtoimprovelife.medium.com/design-the-military-a-love-story-99dd58b8b40f
|
['The Index Project']
|
2018-11-28 08:56:35.429000+00:00
|
['War', 'Technology', 'Design']
|
2,059 |
Fun Side Projects That You Can Build Today
|
Fun Side Projects That You Can Build Today
From building something in 3D to a Bitcoin tracker and more
Photo by Christopher Gower on Unsplash.
Working on side projects can expand your skillset dramatically as a developer and it will prepare you for further complex challenges. It’s probably the fastest way to improve since you can choose what project you want to work on — in contrast to your day job.
There are no shortcuts when it comes to becoming a better developer. Spending time behind the keyboard is a must. So why not do it while working on a fun side project?
However, most developers struggle with what they should build. They tend to overthink it, which leads to building nothing at all. I’ll save you the hassle of coming up with the next great killer app. Just start out small and simple.
That’s why I’ve listed seven projects in this article that are both challenging and fun.
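One of those ideas, the Bitcoin tracker mentioned in the subtitle, can start as a single script. Here is a minimal sketch (my addition, assuming Node 18+ for the built-in fetch and CoinGecko’s public /simple/price endpoint):
// Minimal Bitcoin price tracker (illustrative sketch).
const API_URL =
  'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd';
async function trackBitcoin() {
  const res = await fetch(API_URL);
  const data = await res.json();
  console.log(`BTC: $${data.bitcoin.usd}`);
}
// Print the price now, then once a minute.
trackBitcoin();
setInterval(trackBitcoin, 60 * 1000);
From there, you can grow it in any direction: persist the prices, chart them, or send alerts when the price crosses a threshold.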
|
https://medium.com/better-programming/fun-side-projects-that-you-can-build-today-553158597363
|
[]
|
2020-03-11 17:22:15.082000+00:00
|
['Web Development', 'Inspiration', 'Technology', 'Programming', 'JavaScript']
|
2,060 |
The Unexpected Analogy of Technical Data and Fictional Novels
|
I had written and self-published several fictional books by the time I participated at the S1000D User Forum in Seville at the end of September 2016.
So when I got challenged to explain what a data module actually was, I suddenly had an idea. I exclaimed, “Data modules are so much like scenes in a novel!”
Right after that, I couldn’t stop the flow of realizations.
“Just as scenes,” I said, “data modules have a start (for a procedural data module, it is the preliminary requirements), a middle (the procedure itself), an end (the close-up requirements), and one setting (a unit or assembly, where the operational or maintenance procedure takes place). And in a way, just like the scenes in a novel, they tell a concise and self-contained story, or rather a short story of the future: how the given procedure is to take place.”
This epiphany might also have been facilitated by a statement from the presentation I gave at the same User Forum in Seville, before the discussion above took place. In that presentation, I claimed that technical manuals are books, whatever format they are in or however interactive they might be.
The presentation and the discussions afterward only strengthened this flow of parallels between novels, books in general, and S1000D conformant technical publications.
After the User Forum, I jotted down a list of topics I could think of when relating S1000D and its constructs to storytelling, and I came up with at least seven of them immediately. Today, I guess that you can find an equivalent in a technical manual to any tool or construct the fictional writers use in their novels so artfully.
|
https://medium.com/technology-hits/the-unexpected-analogy-of-technical-data-and-fictional-novels-6745269b3cee
|
['Victoria Ichizli-Bartels']
|
2020-12-17 11:14:22.002000+00:00
|
['Technical Publications', 'Technology', 'Serendipity', 'Analogy', 'Fiction']
|
2,061 |
Seven Challenges of Adopting Artificial Intelligence (AI) Solutions
|
By Adnan Kharuf
Artificial Intelligence has begun providing real value to organizations in various industries. This will become more apparent as AI solutions grow more accessible and easier to implement. However, even with the high levels of interest in leveraging AI and ML solutions, implementation and deployment in many organizations are still low. Much of this is because enterprises don’t incorporate certain realities about AI projects into their thinking.
Let’s explore seven challenges that enterprises need to overcome to successfully leverage AI solutions.
Business Case
Organizations need to understand and identify the clear problems that utilizing AI and ML solutions will potentially solve. They will need to think about how they can add AI capabilities to existing and future processes, products and services. Instead of simply painting a broad vision, they need to also be clear on what impact it will have on the business over the short and long term.
Very often predicting and measuring the returns on investment in AI is difficult, especially when the results are not apparent right away. But in the long run, the built-in continuous learning that comes with AI will help organizations adapt to changing business conditions in a more flexible manner.
Skills
Organizations need to hire specialists who have deep knowledge of current AI technologies and their limitations. Additionally, it is necessary to supplement these AI specialists with subject matter experts that can provide context and clarity to business problems. The experts also need a complete understanding of the organization’s business goals and technology needs to help implement an AI strategy. Organizations with a limited budget may not be in the position to hire the right talent that is required for an AI project. It will take a lot of time and work for organizations to find well-trained professionals with the right skill sets who can build an enterprise’s AI solutions from the ground up.
Cost
AI/ML experts, business analysts, data scientists, and subject matter experts in today’s market are hard to find and expensive. Developing, deploying, and maintaining an AI solution from scratch requires data engineers, software engineers, product/project managers, ML/AI experts, and the right infrastructure. The total expense required for a typical AI project is often quite large, and one-off projects are especially expensive.
Tools
According to Gartner, a team building data science algorithms and solutions uses seven tools on average. Many decide to build their own AI solution by combining multiple data processing tools (Spark, Hive, etc.) and AI/ML tools (Spark ML, PyTorch, TensorFlow, etc.). Many of these tools are rapidly evolving open source applications that are inadequately integrated across end-to-end data workflows. Not only does this limitation slow down innovation, it also creates large security and process vulnerabilities.
Data
AI solutions are built and driven by data. Organizations must have base data as well as a constant source of data to keep it up and running. But AI is also dependent on the right kind of data, not just any data. It remains challenging for organizations to integrate data since information is usually spread across multiple applications in various formats such as text, image, video, and audio. Experts know success with AI will depend on quality data to build models and provide accurate learning and results. For example, in certain industries such as healthcare, it is difficult to predict outcomes of breast cancer if there is a lack of patient data sets to ingest. Organizations need to consider the problems AI will need to solve and take the time to prepare the data.
Infrastructure
Data handling, storage, compute, scaling, extensibility and security are all critical components necessary for enterprises to deploy an AI solution. An organization’s ultimate success with AI always starts with how suitable its infrastructure environment is to support powerful AI applications and workloads. This includes:
The right mix of processing capabilities and high-speed storage to support state-of-the-art machine learning and deep learning models.
The right software that is tuned and optimized to fit the underlying hardware.
A single interface that can manage most moving parts and components.
A flexible infrastructure that can be deployed in the cloud, or in an on-premise data center to optimize performance.
Organizations need to think about infrastructure more broadly to successfully enable AI today and in the future.
Integrations
Ultimately, the success of AI implementation within an organization will depend on how well the solution integrates with existing infrastructure and business functions. An organization needs to be flexible enough to adapt new business models, new team models, and new workflows across all departments and teams. Incorporating AI into the business is as much a people and process problem as it is a technology one.
Conclusion
Once an enterprise can overcome these challenges, it will finally be able to utilize AI to revolutionize its business, improve processes, and increase employee productivity. The key will be to minimize the challenges and maximize the benefits of adopting the core capabilities of AI. Organizations will need to look for the right approach to enable AI-driven solutions without the need to build everything from scratch. Reusability of components such as data, models, and processing techniques is critical for expanding the use of AI.
|
https://medium.com/@petuum/seven-challenges-of-adopting-artificial-intelligence-ai-solutions-a3a79e65c53b
|
['Petuum']
|
2019-04-02 17:17:32.644000+00:00
|
['Machine Learning', 'Enterprise Technology', 'Scalability', 'Infrastructure', 'Artificial Intelligence']
|
2,062 |
Top 7 Digital Healthcare Companies Success Stories
|
Information technology is all about empowering industries with whatever tools and techniques they require to, in turn, elevate people’s lives and livelihoods. Over the years, technology has evolved to such an extent that what was considered impossible until a few years ago is possible today.
Today, information technology influences every industry, sector, and market segment it touches. And one industry that seems to have evolved for the better is healthcare. For those of you who didn’t know, ‘healthtech’ is now an actual industry term that describes the application of technology in healthcare.
From understanding diseases and precise diagnostics to treatments, electronic records and fighting counterfeit medicines, healthcare is dealing with them all through tech incorporation and implementation.
Important Healthtech Statistics
The reach of technology in healthcare has been massive with companies and giants deploying the latest medical technology to overcome hurdles and deliver better services and products. The US healthtech market is all set to become a $390.7 bn industry by 2024.
Companies are consistently looking for newer ways to implement technologies like machine learning, artificial intelligence, data analytics, robotics, and more into their healthcare software solutions to further add value to their services. In fact, spending by healthcare companies on artificial intelligence and its allied concepts is expected to reach about $40.2 bn by 2026.
Read More : 10 Common Applications of Artificial Intelligence in Healthcare
The wearable tech market is also on the rise as it too aids in fighting, preventing and tackling several diseases at personal levels. This market is forecasted to reach $56.8 bn by the end of 2025.
Read More : How Wearable Tech Helps to Runners and Cyclists?
Apart from these, chatbots, the Internet of Things, telehealth, and more are making the lives of healthcare professionals and patients better with their significant contributions. With so much happening in this sector, we felt the need to shed more light on developments in healthtech software and healthtech companies.
That’s why we have handpicked a list of some of the most prominent healthtech companies that are making the best use of technology to approach healthcare in unimaginable ways.
Top 7 Digital Healthtech Companies you should know
1. Outcome Health
Based out of Chicago, Outcome Health is a $5bn company that intends to put people’s screen time to good use. Outcome Health is all about letting patients know everything they need to understand about their healthcare depending on their stage of treatment. From consultation and diagnostics to treatment, the company provides adequate educational information about the disease or disorder to patients, caregivers, and healthcare professionals via technology.
This includes presenting relevant information to patients on waiting room television/board, exam room tablets/boards, waiting room Wi-Fi and more.
2. Oscar Health
A New York-based venture, Oscar Health is a health insurance company that prioritizes user experience and offers unlimited teleconsultations and generic medicine. It offers smart insurance policies to customers and diverse healthcare facilities like virtual doctor calls, medication deliveries, and more. The company is worth $3.2bn.
3. GRAIL
One of the most pressing concerns in the world is cancer. In the US alone, it is estimated that there will be over 1.8mn new cancer diagnoses and over 600,000 deaths from the disease. However, experience shows that cancer is far more treatable when detected at an early stage. That’s exactly what GRAIL has set out to address. It has built a blood test that helps detect cancer in its early stages and paves the way for proper treatment and recovery. The company operates out of California.
4. Tempus Labs
While cancer is a malignant tumor, there are also benign tumors that are non-cancerous but cause physical complications. Both require their own healthcare approaches and treatment procedures to help patients lead a normal life. Doing that extensively is the mission of Tempus Labs, a company that incorporates genome sequencing and machine learning technologies to develop custom treatment procedures and plans to fight tumors. Tempus Labs is also based out of Chicago.
5. Butterfly Network
Butterfly Network is redefining accessible healthcare with its Butterfly iQ. In simple words, Butterfly iQ is a portable ultrasound imaging system developed by the company that connects to a user’s smartphone. With it, users can perform analysis and diagnosis of specific systems, such as blood vessel measurements and the musculoskeletal, abdominal, and cardiac systems. All the assessments are done via imaging.
6. 23andMe
Diseases are not just the result of our lifestyle and hygiene practices. Some diseases are inherited. For those of you who have been told that you are at risk of developing certain hereditary conditions, such as diabetes, cancer, or stroke, 23andMe offers an ideal solution. The biotech company lets you send in a DNA sample for assessment, through which you can find out about your ancestry, genetic predispositions, genes, and more. It’s headquartered in California.
7. HeartFlow
The treatment and diagnosis of coronary artery disease have traditionally been invasive, and that’s what HeartFlow intends to change. Based out of California, HeartFlow is finding non-invasive ways to assess coronary artery disease through CT scans. These scans accurately analyze blood flow in a patient’s heart, giving physicians better ways to diagnose and treat the disease.
Wrapping Up
So, these were the top healthtech companies that are rewriting history one patient at a time. Many of these applications were new and intriguing to us. Healthcare is one industry that should benefit the most from technology. And if you’re someone who wants to contribute to this sector, we recommend you get started on your idea today.
If you already have an idea, you should get in touch with a custom healthcare software development company like ours to turn your idea into a product. We work with the best healthcare mobile app developers who would complement your visions. So, make use of our healthcare app development services and make the world a better place for tomorrow.
|
https://medium.com/techtic-solutions/top-7-digital-healthcare-companies-success-stories-e60ae329ddd7
|
['Techtic Solutions']
|
2020-09-15 14:35:46.488000+00:00
|
['Healthcare', 'Startup', 'Healthtech Startup', 'Digital Health Technology', 'Healthtech']
|
2,063 |
Noises of a modern city
|
Remembering the azan in Singapore
View of Masjid Sultan (Sultan Mosque) from Jalan Pinang, Singapore.
I was walking along Rocher Canal a few evenings ago and heard the azan from one of the mosques nearby, probably either Masjid Malabar or Masjid Sultan. I’ve always loved the music of that call, the intensity of a single soaring voice.
Which is why I think it’s such a pity that the azan is now rarely heard in public in Singapore. I’m not Muslim; for me, it is mainly a loss of music. But I also feel a loss that Muslims in Singapore aren’t able to hear this beautiful call loud and proud, as a regular public expression of their faith.
Why do I miss something I don’t fully understand?
Science has come to Singapore
Excerpt from “NOTES Of The DAY”, The Straits Times, 24 November 1937
“Science has come to Singapore,” declared The Straits Times in 1937 — “the mosques have been wired for sound.”
Excerpt from “Loudspeakers In Singapore Mosque”, The Straits Times, 29 December 1936
This referred to the 1936 installation of a powerful sound system able to broadcast the azan from Masjid Sultan, “audible more than a mile away”. This technology was so new, and residents so up-to-date, that Singapore was “the first city to try the experiment”.
“In future it will be possible to address congregations of between 4,000 to 5,000 people with a good margin of power still in hand,” the writer added in bold, describing the microphones connected by 600 feet of cable to directional-type speakers inside the mosque and external speakers on two of the four minarets.
This sound system was a gift from a “well-known member of the Mohammedan community in Singapore” — unnamed in the article but no doubt famous within the community — and it was such big news that even the person who planned and supervised the installation was mentioned: Mr F. Grainger-Brown, Singapore manager of the radio and valve department of the General Electric Co. Ltd. of England.
The 1936 Straits Times article mentions some nay-sayers, who believed that this electric amplifying system was “incongruous with the romantic conception of the holy cities of the East”. But, much like Singapore today, the residents in the 1930s were won over by this new technology in town. The article concludes:
“The majority believe that the noises of modern city demand an accompanying increase in the power of the muezzin’s voice.”
Five years later, the loudspeakers continued to be a booming success. In 1941, an article in The Singapore Free Press and Mercantile Advertiser describes the evening scene at Masjid Sultan during Ramadan:
“Then suddenly the crescent and star atop the dome over the entrance flash into light. The gun goes off with a tremendous reverberating boom, and an instant later, the crescent and star on the dome are also lit up. The sound of the muezzin is heard from one of the minarets, calling the faithful to prayer, his chanting voice amplified a hundred times by loudspeakers fitted five years ago to each tower. The call, insistent, strong, can be heard more than a mile away by means of the loudspeakers, though sometimes their atmospheric crackling spoils its beauty.”
Reducing noise levels
The English-language press* does not mention the azan loudspeakers again till the 1970s — this time, after a bureaucratic clampdown on noise.
In 1974, The Straits Times reported that Social Affairs Minister Mr Othman Wok criticised two groups (the Singapore Muslim Action Front and the Singapore Muslim Assembly) for “exploiting religious issues to create unrest” and conducting a “smear campaign against the Government”.
The groups had submitted petitions to the government and distributed copies of the petition to the public and foreign delegates at the Islamic Foreign Ministers’ Conference in Kuala Lumpur. Among their criticisms was the claim that “Muslims were the only group affected by the policy on noise abatement”. (Note: The petition was not published in the papers, and I have not read it, so I can only say this was Mr Othman Wok’s characterisation of their petition.)
This was not true, insisted Mr Othman. “Any measures proposed will affect not only mosques but Chinese wayangs and temples, sing-song shows, Indian temples, Sikh gurdawaras, churches and all sources of noise in the community.” Acknowledging that the use of loudspeakers for the azan was “a relatively debatable issue”, he added:
“The Mufti recently held discussions with the Management Committees of 68 mosques to consult them on how best to co-operate with the Government to reduce noise from loudspeakers. … Some suggested that loudspeakers which are presently installed outside of the mosque premises be re-installed within the premises; others felt that the loudspeakers should be left as they are, but the volume be reduced. You can therefore see how some unscrupulous people can distort the facts by making mischievous statements to the effect that the Government is banning the Azan.”
Were the groups genuinely worried that the government was going to ban the azan? Or were they really just “bent on exploiting religious issues to create unrest”? How widespread were these fears?
In the end, it was true that the government did not ban the azan — though the tensions continued. Four years later, in a 1978 article titled “Noise levels and the tensions of urban living”, The Straits Times reports an exchange between then-Acting Social Affairs Minister Dr Ahmad Mattar and his fellow Muslim Parliamentarian Haji Sha’ari Bin Tadin, then-MP for Bedok.
Mr Haji Sha’ari Tadin had asked Dr Mattar to explain the recent action taken by Majlis Ugama Islam Singapura (MUIS), in fixing sound attenuators to amplifiers used in mosques.
Echoing Mr Othman, Dr Mattar said: “The government’s policy on sound amplification affected not only mosques, but also Chinese wayangs, Hindu temples, churches and all public gatherings where sound amplification systems were used.”
He appealed for understanding, pointing to scientific research, the need for harmony, and efforts by MUIS and his ministry.
“Amplifiers used at such places were to be fitted with sound attenuators by the Singapore Institute of Standards and Industrial Research. … Research had shown that there was a direct relationship between growing tension and noise level of urban living. … Officials from the MUIS and [my] ministry had been testing different levels of sound amplification for the Azan (the call to prayer) and decided that an acceptable sound level was 60 dBA measured from a distance of 10 metres from the sound sources. Tests of this sound level could be heard clearly from a distance of 100 metres from the mosque. Since Aug 15 last year, [my] ministry had arranged for the Azan to be broadcast by Radio Singapore five times a day. This had meant re-starting up the station about an hour earlier than its normal broadcasting time in order to be on the air before the first Azan.”
The full Parliamentary record of 27 February 1978 gives a little more detail. In his answer, Dr Mattar had also referred to this move being a part of the government’s Community Noise Abatement programme aimed at controlling community noise. He added:
“MUIS officials are visiting mosques to explain to Mosque Management Committees the reasons for fixing sound attenuators to control the decibel levels of external loudspeakers of mosques. Almost all mosques have accepted the reasons and have been cooperative. To-date, 61 sound attenuators have been fixed. Sound attenuators are not required for 21 mosques which do not have loudspeakers mounted externally. The exercise has been completed.”
Almost all mosques accepted the reasons and were cooperative. What happened to those who weren’t? And what did Mr Sha’ari Tadin think? There was no reply recorded. And it has never been mentioned in Parliament again.
Removing the azan from public space
Today, the azan remains — broadcast quietly in the mosques, and on radio for those who tune in. It is as good as silent in the public soundscape.
Speaking at the opening of the inaugural Muis International Conference on Muslims in Multicultural Societies in 2010, then-Senior Minister Mr Goh Chok Tong acknowledged another reason for the government’s azan policy.
Describing the “critical contribution made by our Muslim minority” in Singapore, he said:
“Singapore, being a city state, is one of the world’s most densely populated countries. With people living in high-rise apartments and in close proximity, the call of prayer or azan amplified through loudspeakers at mosques during the early dawn or in the evening had to be modified. If not, it would have been an issue with the majority non-Muslims and would make it difficult for them to accept the building of new mosques in their vicinity.”
So it seems it was not just because the urban environment was getting increasingly noisy, but because of a numbers game — because “it would have been an issue with the majority non-Muslims”, because it would have been “difficult for the non-Muslims to accept the building of new mosques in their vicinity”.
This is an unsurprising narrative, given what we know of Singapore’s race relations, Chinese dominance and cultural politics today. But were the non-Muslim Singaporeans then really so intolerant of the azan? Was the azan that loud and disruptive?
In 2002, then-Minister of State Dr Yaacob Ibrahim painted a slightly more nuanced picture. In a speech to the NTU Muslim Society, he said:
“At a certain place in Singapore, the Chinese community and the mosque committee have reached such a level of understanding and appreciation that when the mosque changed a particular practice it attracted the attention of the Chinese community. This practice was related to that of calling for prayer. The mosque had to lower the volume and turn the speakers inwards. The Chinese there who were used to the previous practice were surprised. More telling was the different reactions between the members of the long standing Chinese community there and new Chinese homeowners in that area. The former was more tolerant of the mosque’s presence than the latter. But over time with continuous interaction and mingling the mosque and the latter group reached a better level of understanding and appreciation.” (my emphasis)
So it seems the long-standing Chinese community was actually fine with the public azan. And while the new Chinese homeowners were initially less tolerant, over time they reached a better understanding and appreciation. Even so, the azan was removed from public space.
In his speech in 2010, then-Senior Minister Mr Goh Chok Tong described this gradual erasure:
“The changes were made incrementally. First, the loudspeakers were tilted inwards and away from nearby houses, and limits were set on their volume levels. Later, a radio frequency was allocated to allow the call to prayer to be broadcast over the radio. In this way, all Muslims who wished to receive the call to prayer could just tune in to their radio. Over time, the mosques did away with loudspeakers. This showed the pragmatism of our Muslims and their sensitivity to the feelings of non-Muslims.”
If Singapore had taken a different path, we might have had a noisier, more chaotic urban environment where the azan, the temple gongs, and the church bells would intermingle loud and clear.
I have never known that Singapore. My Singapore is pragmatic and sensitive, one of enforced invisibility, reduced risk, minimum noise.
***
*Disclaimer: I wasn’t able to read any Malay-language coverage on this issue, so I’m sure there are huge gaps in my knowledge. I welcome more information and thoughts on this subject.
|
https://medium.com/kampung-seaport/noises-of-a-modern-city-1537d7c780e8
|
['Lisa L']
|
2016-12-13 14:04:18.707000+00:00
|
['Noise', 'Technology', 'History', 'Singapore', 'Islam']
|
2,064 |
DevOps: A culture beyond Departments
|
This is an age witnessing a dramatic increase in companies’ dependency on cloud infrastructure to keep pace with modern demands for both products and services. Consequently, it has become essential for organizations to maintain consistency in the performance and processes that define their place in a competitive market. To achieve such goals, businesses had to find a solution that would withstand the test of time.
A steady combination of cultural philosophies, practices, and tools, DevOps can enhance an organization’s development and delivery processes, and with additional speed. As the name suggests, DevOps came into being not just by merging the words ‘development’ and ‘operations’ but by embodying an actual cultural shift to bridge the development and operations teams. Simply put, DevOps is a key that unlocks complex, manual, error-prone processes and simplifies them into testable, measurable, and scalable approaches.
It solves human problems with automated solutions.
In fact, it breaks the taboo of sticking to traditional development and management processes and encourages the implementation of fast-paced solutions. To further understand its cultural significance, let us peek at the various pros that DevOps brings to the table.
Accelerated workflow with improved collaboration
All For One and One for All
Under a DevOps model, development, quality assurance, security, and operations teams are no longer isolated in separate silos. Instead, they are closely integrated, making it easier to communicate across the application lifecycle. Some models merge development and operations into a single unit where programmers work across the entire lifecycle and hone skills across various functional areas. That, of course, depends largely on the project at hand.
When teams start to utilize automated processes, work gets done with greater momentum. DevOps stimulates the use of tools and technology stacks to operate as well as modify applications for quality results in a shorter time. For instance, when deploying code or operating infrastructure, team members can fulfill their tasks independently without having to rely on other teams.
Creation of reliable & scalable solutions
An infinite loop better automated
Implementing DevOps allows organizations to scale up their infrastructure and development processes for speedy delivery without compromising on quality, which ensures a positive experience for end-users. While implementing changes, practices like continuous integration and continuous delivery help maintain functionality and security across the board.
Continuous integration lets programmers share and merge code in a central location for seamless collaboration while with continuous delivery, software changes are automatically delivered and implemented as soon as they’re made. Such practices can be monitored and logged for real-time analysis and creating consistency in the workflow. And since improvements are executed swiftly, the development team can move on to focus on other problems.
Security in the digital environment
Implementing DevSecOps in more ways than one
DevOps models can be adopted without renouncing security. And that is crucial in a world driven by data and the Internet. But how? By integrating automated compliance policies, fine-grained controls, and configuration management techniques within the cloud infrastructure.
When the infrastructure is defined by code, it is easy to monitor at scale and reconfigure when necessary. It also becomes less of a hassle when companies wish to make changes to resources, since non-compliant resources can be automatically flagged or brought into compliance.
Conclusion
Be it any industry, software has truly changed the world and become a fundamental part of businesses across the globe. Brands extend their online services to consumers through software. Brands also use software to create new products, enhance operation value chains, and drive logistics. The upward spiraling population curve along with its dynamic consumption patterns is directing businesses to find more sense in process automation — to revamp how they design, curate, and deliver solutions, be it a product or a service. Enter DevOps!
Originally published on Coditas Blog
|
https://medium.com/@coditas/devops-a-culture-beyond-departments-4f737996602e
|
[]
|
2020-11-19 08:55:29.235000+00:00
|
['DevOps', 'Solutions', 'Technology', 'Software Development', 'Innovation']
|
2,065 |
Powering Digital Economies With A Blockchain Consensus Operating System
|
Powering Digital Economies With A Blockchain Consensus Operating System
When we look ahead and try to envisage what the future global economy will look like, there’s no doubt that we can learn a lot from the innovation on blockchain and digital economies.
Decentralized Finance, or DeFi, is a good example. This increasingly important trend within the blockchain community is demonstrating how decentralized applications, or DApps, can automate interactions and value transfers that could be replicated across entire economies. For this to occur, blockchain technology needs to be adopted en masse. This seems a long way off right now but I believe Central Bank Digital Currencies or CBDC will be the catalyst for change.
With these digital currencies in place, governments can drive the mass adoption of blockchain technology for the benefit of society as a whole. However, for this to happen, we must first recognize that existing blockchains were not designed for governments and realize that an entirely new blockchain consensus operating system is needed to power digital economies.
Blockchain’s progress shows what is possible
Over a decade ago, Bitcoin introduced us to the possibility of exchanging value in a totally decentralized and peer to peer manner. This technological innovation not only solved the double-spend problem that had plagued the exchange of digital assets but also opened up the potential disintermediation of middlemen in all manner of transactions.
This progress was then built upon by Ethereum and other second-generation networks, which wanted to extend the power of blockchain beyond Bitcoin into a whole range of programmable applications. These dApps would utilize smart contracts to enable peer to peer interactions between individuals across all manner of transactions. The world envisioned by advocates of this second wave is one where any entity could transact on an open, public blockchain with any other entity and neither party would need to know who the other one was, as the code would act as the law governing their relationship.
Digital Economies Blockchain: Growing DeFi
To some extent, this vision has been realized, and the most obvious example of how is in the growing DeFi movement. Now, developers have built a whole host of dApps, mainly on the Ethereum network, to replicate traditional, centralized financial services. As a result, DeFi enthusiasts are combining innovative new saving, lending, and trading services to generate returns on their income via ‘yield farming’.
This is certainly an interesting trend to follow, and it may be true that some of the hottest DeFi services right now will establish themselves as household names in the future. But before I go too far with the hyperbole, it’s worth asking: are you a yield farmer? Are your friends and family?
What I’m getting at is the question of whether these innovations are a widespread social phenomenon or just the fancy of a few financial and technology enthusiasts. It seems clear to me that the latter is true and that the only realistic assessment of the current situation is that blockchain has failed to achieve mass adoption.
Governments and businesses already use blockchain
Before going into detail about what must occur to really spark mass adoption, though, it would be remiss of me to ignore that blockchain is already in use by governments and businesses.
In fact, while CBDC is a relatively new phenomenon, some governments have deployed blockchain technology in production since as far back as 2012. The standout example is Estonia, a small Baltic state whose digital state is secured by the blockchain. The Estonian government suffered a string of cyber attacks in 2007 and has since become a leading light for digital government, with 99% of government services available as e-services. In terms of blockchain, the government uses the technology to power healthcare, business, property, court, and other types of registries.
Digital Economies Blockchain Business Usage
As well as governments, many businesses have also chosen to utilize blockchain technology. A recent report from Deloitte stated that nearly 40% of respondents have blockchain in production, while 55% said they see it as a top strategic priority. These adoption levels are backed up by a handful of standout examples too: JP Morgan has developed the Quorum platform in conjunction with Microsoft, and shipping giant Maersk has collaborated with IBM on the TradeLens platform.
As with the earlier DeFi examples though, government and business involvement in blockchain still has a long way to go. Furthermore, you only have to ask around your friends and family to see that blockchain technology more generally hasn’t really entered our daily lives.
The reality is that people don’t understand the power of this technology or the benefits it can bring to society as a whole. I believe that part of the reason is that the existing systems do not reflect the societal structures we all recognize. For mass adoption to occur, this will need to change.
Blockchain will power CBDC in digital economies
One of the most important things to understand about the CBDC trend is that it will be driven by governments and central bankers, who want CBDCs to be used within regulated digital economies.
For this to happen, though, blockchain technology will need to adapt to allow governments more control. The L3COS system does exactly this. It utilizes a unique triple-layer consensus mechanism that allows governments to operate supernodes at the top layer, while businesses and individuals interact within the second and third layers in a decentralized manner.
Many proponents of existing public blockchains will reject this out of hand because it goes against the permissionless and anonymous ideals they uphold above all else. However, a blockchain consensus operating system that is regulated by governments is actually what most people will recognize as reflecting societal norms. After all, we elect our governments to protect us and help us prosper. Why shouldn’t we have a single blockchain consensus operating system that allows them to do the same for a digital economy?
|
https://medium.com/visualmodo/powering-digital-economies-with-a-blockchain-consensus-operating-system-3d471d78f0b7
|
[]
|
2020-12-17 02:35:27.443000+00:00
|
['Economies', 'Blockchain Development', 'Digital', 'Blockchain Technology', 'Os']
|
2,066 |
New Underdog in the Game-A kid & Minus Zero
|
This company, as its founder puts it, is devoted to “building the impossible”. Minus Zero is an emerging company being built from the ground up in the field of self-driving, or, as they call them these days, automated vehicles. That doesn’t surprise much at first, since we already have the globally loved Tesla, the newly emerging Lucid, and many other small manufacturers. What gives this new venture a kick is that it is based in India: home to the 2nd largest population on the planet, the worst traffic, and, the cherry on top, the “worst drivers”.
The interesting part about this company is that the founder is a college kid. At 20 years old, Gagandeep Reehal has taken a step towards painting his dream with his ambition. He is the first 2nd-year college student from Thapar Institute of Engineering to be featured among Thapar’s notable alumni on Google. He has racked up so many achievements at this age (and, for the record, is still counting) that any regular 20-year-old would stop reading this article out of an inferiority complex; I know I would if I were reading it. But the purpose of this article is to inspire, not to demotivate. Gagandeep is an AI engineer; he has been a participant, team leader, and even a judge at many national and international hackathons across the nation and around the world, and he has spoken as a keynote speaker on many occasions. He has published 2 books at this young age (that gives goosebumps, trust me, I know). He has many more versatile skills and innumerable achievements, but all this shows is that this kid ain’t a “regular 20-year-old”.
When I asked Gagan about Minus Zero, “Why are you doing this, mate? What’s the reason behind it?” (that was my first reflex), he told me in a very unromanticized and practical way: “Nothing unique. Just want to build something of my own and in my own country.” And all I could say was: fair enough. So he makes it very clear, and wants to send out this message pretty clearly, that he isn’t another person trying to shoot in the dark; he knows what he’s doing and what he aims at.
I have talked a lot about the founder, so let’s talk about the creation under construction. Minus Zero is a company built on the idea of creating “impossible” technology, presently focusing on automated vehicles. The company was established in May 2020, when Gagandeep Reehal came up with the idea. Currently, the whole team working on the project is made up of college kids and a few graduates. They promise to get their first fully functioning and licensed vehicle into Indian markets by 2030, i.e. within the next decade. And here’s where it becomes interesting: the founder aspires for his cars to be 5 times cheaper than those of any other major electric automated vehicle manufacturer in the market.
So, this is a very interesting creation in action. It will take some time, agreed, and that’s where I come in: I will keep you posted with all the updates related to Minus Zero. Let’s walk through this journey together.
For the more curious ones, here are some of the social media handles of the company and the founder:
LinkedIn: Minus Zero, Gagandeep Reehal (Founder)
Have a good day, everybody, and thank you for taking a few minutes to read this.
|
https://medium.com/@just-ishrat/new-underdog-in-the-game-a-kid-minus-zero-577ebf2d6a64
|
[]
|
2020-12-25 10:05:50.970000+00:00
|
['Self Driving Cars', 'Technology News', 'Technology', 'Startup', 'India']
|
2,067 |
JavaScript Best Practices — Generators and Object Properties
|
Photo by Sebastian Pena Lambarri on Unsplash
JavaScript is a very forgiving language. It’s easy to write code that runs but has mistakes in it.
In this article, we’ll look at the best practices when using generators and defining and using object properties.
Don’t Use Generators If We Want To Transpile to ES5
Generator code doesn’t transpile well to ES5 code, so we may want to think twice if we’re targeting our build artifact to build into ES5 code.
However, this shouldn’t be the case with modern transpilers, since they create custom closure-based state machines from generators and async functions that work the same way. The drawback is that the transpiled code is harder to debug, even with source maps.
Therefore, if we never have to debug the transpiled code, then we can go ahead and use generators. Otherwise, we should think twice before using them in our code.
Make Sure Generator Function Signature is Spaced Properly
Generator functions should be spaced properly. The asterisk for the generator function should come right after the function keyword.
For instance, we should define our function as follows:
const foo = function*() {
  yield 1;
};
In the code above, we have the asterisk coming right after the function keyword. And there’s no space after the asterisk.
For function declarations, we should write the following:
function* foo() {
  yield 1;
}
In the code above, the asterisk comes immediately after the function keyword, followed by a space and then the function name.
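As a quick aside (my addition, not from the original article), here is how such a generator is consumed; for...of drives it until it is exhausted:
function* counter() {
  yield 1;
  yield 2;
}
// for...of pulls one yielded value per iteration.
for (const value of counter()) {
  console.log(value); // logs 1, then 2
}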
Use Dot Notation When Accessing Properties
If an object property’s name is a valid JavaScript identifier, then we should use the dot notation to access the object property.
It’s shorter than the bracket notation and does the same thing.
For instance, instead of writing the following with bracket notation:
const obj = {
  foo: 1
};

console.log(obj['foo']);
We should write the following code:
const obj = {
  foo: 1
};

console.log(obj.foo);
In the first example, we used the bracket notation, which is longer and we have to access the foo property by passing in a string into the brackets. We have to write extra characters just to access the foo property.
Instead, we should write what we have in the 2nd example, which is obj.foo .
It’s shorter and does the same thing.
Use Bracket Notation [] When Accessing Properties With a Variable
If we want to access a property with the name that’s stored in the variable, then we should use the bracket notation to do that.
For instance, we can do that with the following code:
const obj = {
  foo: 1
};

const getProp = (prop) => {
  return obj[prop];
};

console.log(getProp('foo'));
In the code above, we have the obj object with the foo property. Also, we have the getProp function, which takes the prop parameter and returns the value of obj[prop] .
We have to access the property value with the bracket notation since prop is a variable; there’s no other way to access the property dynamically.
Then in the last line of our example, we can use getProp as follows:
getProp('foo')
to return the value of obj.foo which is 1.
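Bracket notation is also the only option when a property name isn’t a valid JavaScript identifier, a case the conclusion below mentions. A quick illustration (my addition):
const sizes = {
  'extra-large': 10
};
// sizes.extra-large parses as (sizes.extra - large), a subtraction,
// so the bracket notation is required here:
console.log(sizes['extra-large']); // 10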
Photo by William Moreland on Unsplash
Use Exponentiation Operator ** When Calculating Exponentiations
The exponentiation operator provides us with a shorter way to calculate exponents. It’s shorter than Math.pow , which has been available since the first version of JavaScript.
The exponentiation operator is available since ES2016. For instance, instead of writing the following with Math.pow :
const result = Math.pow(2, 5);
We should write:
const result = 2 ** 5;
It’s much shorter and we don’t have to call a function to do exponentiation anymore.
Conclusion
If we want to transpile our code to ES5 and debug the ES5 code that’s built, then we shouldn’t use JavaScript generators in our code.
Debugging is hard even with source maps.
If we do use generators in our code, then we should make sure the generator code is spaced properly. The * should be spaced consistently so the code stays easy to read.
When accessing object properties that are valid JavaScript identifiers, then we should use the dot notation. Otherwise, we should use the bracket notation. This includes accessing properties with variables and property names that aren’t valid JavaScript identifiers.
|
https://medium.com/javascript-in-plain-english/javascript-best-practices-generators-and-object-properties-9c8a38b426b8
|
['John Au-Yeung']
|
2020-05-26 18:16:07.800000+00:00
|
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
|
2,068 |
OLED vs. QLED: What’s the Difference?
|
Samsung is promoting QLED panels for its high-end TVs, while LG is pushing organic light-emitting diode (OLED) for its flagship models. Which technology do you want in your next television? We explain.
By Will Greenwald
It’s starting to look like LED isn’t good enough anymore. The term (an abbreviation for light-emitting diodes) describes a now-standard method of lighting LCD televisions, to the point that most LCD TVs are now called LED TVs. In order to stand out, LG and Samsung have added letters to the term to carve out specific labels for their own high-end TVs. LG’s flagship TVs are OLED, while Samsung calls its top-end models QLED. They look and sound similar, but they’re very different technologies.
LCD vs. LED
Before we get into what makes them different, it’s important to understand what LED means. LED-backlit LCD TVs consist of two main parts: the panel and the backlight. The panel is an LCD (short for liquid crystal display) sheet that can produce images when electricity flows through it. The LCD generates the individual pixels of the TV, activating different combinations of red, green, and blue sub-pixels to produce the correct color for each pixel.
LCDs don’t produce light, and without a backlight the pictures they form would be very difficult to see under most lighting conditions. That’s why LCD panels need to be backlit by separate light sources either behind or along the edges of the panel. On early LCD TVs, these lights were bulky cold-cathode fluorescent lamps (CCFLs), but in the past few years thinner, lighter, and more energy-efficient LED lighting systems have all but completely replaced them.
At their simplest, LED backlights just illuminate the LCD panel so you can see the picture it’s displaying. More advanced TVs use arrays of dimmable LEDs to make parts of the TV brighter or darker, improving the contrast of the picture. The more individually controllable LEDs in the array, the more the backlight can improve the TV’s contrast ratio and prevent halos and auras in the shadowy parts of high-contrast scenes.
LG and OLED
Organic light-emitting diode, or OLED, TVs sound like they should be very similar to LED TVs. After all, the letters are there, and they even mean the same thing. The logical conclusion from the term would be that OLEDs are just LEDs that have an organic component to them. That’s true in the most basic sense, but OLED displays are actually wildly different from LED-backlit LCD TVs.
OLED TVs use panels of OLEDs to both generate and illuminate the picture. Each pixel on an OLED panel is produced entirely by the OLEDs themselves, determining the color of that pixel and causing it to produce light. Mechanically, OLED displays are much closer to now-defunct plasma TVs, which consist of individual plasma cells coated with colored phosphors, determining both the color and the light of each pixel on a single panel. The chemistry, engineering, and physics of the two technologies are wildly different, but fundamentally they do the same thing: generate a picture that doesn’t require an external light source for illumination.
Because each pixel generates its own light, OLED panels can produce the best possible contrast of all display technologies. If part of the picture is black, those pixels can simply turn off and emit no light at all. This is a stark difference from LED lighting arrays for LCD panels, which always bleed some form of light to parts of the panel that should be unlit. Excellent LED TVs generate less than 0.01cd/m2 of light for black sections of the screen. OLED TVs produce no light at all for those same sections. This is why OLED TVs are often described as having “infinite” contrast: no matter how bright the screen can get, the black level is always zero.
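To see why that description is fair, here is a quick back-of-the-envelope sketch (my addition, with illustrative luminance figures):
// Contrast ratio = peak luminance / black level, both in cd/m2.
function contrastRatio(peakLuminance, blackLevel) {
  return peakLuminance / blackLevel;
}
console.log(contrastRatio(1000, 0.01)); // strong LED TV: 100,000:1
console.log(contrastRatio(1000, 0)); // OLED: Infinity, i.e. "infinite" contrast
No matter how bright the panel gets, dividing by a true-zero black level yields an unbounded ratio.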
OLED panels are very expensive to manufacture in large sizes, so OLED TVs are consistently pricey to match. This is why LG keeps OLED technology reserved for its very high-end TV models. The OLEDC9P series retails for $2,499 for 55 inches, though it can be found for less. On the other end, the 88-inch, 8K LG Signature OLED88Z9P is an eye-popping $29,999.99. This is why LG is one of the few major TV manufacturers to offer OLED TVs, though Sony also has some flagship OLEDs available, like the Master Series XBR-A9G ($2,799.99 for 55 inches).
While LG is the biggest name for OLED TVs, OLED technology is much more common in smaller form factors. OLED panels are consistently more expensive than LCDs, but the cost decreases significantly as panel sizes scale down. OLED displays are used in many high-end mobile devices, including Samsung’s own Galaxy S20 phones.
Samsung’s QLED
Samsung’s flagship QLED TVs are, according to the company, comparable with OLED TVs. The terms even look similar, just with a little line coming out of the O to turn it into a Q. However, while Samsung QLED TVs might have very advanced technology in them, they’re still fundamentally LED-backlit LCD televisions.
QLED TVs use LCD panels and LED backlight arrays, like many high-end LED TVs. The QLED descriptor is a Samsung marketing term that indicates several Samsung-specific enhancements to the TVs. To start, QLED TVs use Samsung’s Quantum Dots technology for its LCD panels. Quantum Dots are nanoparticles that emit or alter light at different frequencies when exposed to electricity. This light-tweaking can produce more precise color in a wider range than the LCDs illuminated by white LEDs can.
A wider color gamut is very beneficial, but as an LED-lit LCD panel, QLED TV contrast is still a big concern: the technology doesn't appear to produce the perfect black levels that OLED panels can. To improve contrast, QLED TVs are treated with a low-reflectivity finish while producing a peak luminance of 1,500 to 2,000 cd/m2, according to Samsung. We'll confirm whether or not QLED TVs can offer that sort of performance when we get them in the lab for full review.
QLED TVs are nearly as pricey as LG’s OLED TVs, but there is much more flexibility for different budgets. The excellent Samsung Q90R retails for $3,499.99 for the 65-inch model, but lower-end models like the Q60T can be found for as little as $799.99 for 55 inches.
For now, you should simply understand that QLED is not the same as or similar to OLED. It might make an excellent picture as well, but the two technologies are as far apart from each other as OLED and conventional LED-lit LCD TVs.
Big Features for Both
Since OLED and QLED are technologies for flagship televisions, OLED and QLED TVs are equipped with all of the features expected of top-end models. Specifically, both TV types have 4K resolution and support HDR content.
Ultra high-definition (UHD, or 4K) is the current standard for consumer television resolution. 4K TVs measure 3,840 by 2,160 pixels, displaying four times as many pixels as 1080p TVs. Most major TV manufacturers have all but replaced their 1080p TVs with 4K models, with the exceptions of some low-end budget TVs. If you buy a new, brand-name TV larger than 40 inches this year, it will likely be 4K.
8K is also coming, and LG and Samsung are working on those TVs too, but don't worry about it for now; native 8K content isn't anywhere close to being available to consumers. Even if you get an 8K TV, whether it's OLED or QLED, you'll be relying on upconverted 4K and lower-resolution content for a few years.
High dynamic range (HDR) is a series of video standards that let TVs display a wider range of color and light than standard dynamic range video. HDR video comes in two major standards, HDR10 and Dolby Vision, with a few new standards and variants coming out recently. HDR10 is the format used with Ultra HD Blu-ray discs, and both LG OLED and Samsung QLED TVs can display it. While Samsung TVs don't support Dolby Vision, most streaming services that offer HDR content in Dolby Vision also support HDR10, and Samsung TVs additionally support the Samsung-developed HDR10+ format, which is more advanced than standard HDR10.
Of course, both types of TVs are also fully connected, smart TVs that support streaming video services and apps. LG smart TVs use webOS, while Samsung TVs use the company’s Smart Hub interface. Both are connected platforms developed internally by their respective manufacturers, and both support all major 4K, HDR streaming services.
For more, head over to our latest reviews and check out the top TVs we’ve tested.
|
https://medium.com/pcmag-access/oled-vs-qled-whats-the-difference-4db1cb7ea3ec
|
[]
|
2020-07-30 16:01:01.542000+00:00
|
['Display', 'Samsung', 'Television', 'Electronics', 'Technology']
|
2,069 |
Crypto wallets in 2021: From hot to cold, here are the options
|
Crypto wallets in 2021: From hot to cold, here are the options
After another jump in the price of major cryptocurrencies at the end of 2020, crypto enthusiasts began to mine, sell and buy currencies with renewed vigor – which means that nowadays, the topic of custodying cryptocurrencies is more relevant than ever. But unlike the past bullish waves, this time many users are also concerned with how to protect their assets.
The blockchain industry is developing, and traders have become noticeably smarter, but scammers and thieves have also become much more agile. This is also indicated by the periodic appearance of news about exploits and rug pulls affecting not only ordinary users but also large exchanges, decentralized finance projects and even nonfungible tokens.
Fraudsters use a variety of tools, from hacking accounts to creating malware. Even well-known projects do not avoid this fate. For example, Trezor recently detected fake apps on Google Play, which affected some users. And at the end of December 2020, more than 270,000 clients of the popular Ledger wallet faced threats after their personal data was exposed by a hacker.
All of this suggests that crypto enthusiasts should be exceedingly careful when choosing how to store their assets.
Buying crypto goes mainstream
In 2021, Bitcoin (BTC) has firmly established itself as a commonly accepted investment instrument and store of value, and it is now being likened to gold. This became especially noticeable when institutional investors started to explore and invest hundreds of millions of dollars – sometimes billions – into BTC.
From Jack Dorsey’s Square recently spending a further $170 million on BTC to M31 Capital filing documents with the United States Securities and Exchange Commission to launch a new Bitcoin hedge fund, crypto is going mainstream. Furthermore, Grayscale Investment’s Bitcoin trust now manages over $37 billion in BTC, which suggests institutional investors feel confident in the instrument. All of these examples work to cement crypto as a viable investment option for retail investors as well.
Also, in addition to simply buying cryptocurrencies, new ways to earn money have appeared on the market, such as decentralized finance protocols that offer various blockchain-based financial services. In fact, this is a very good way to get a fixed income in cryptocurrency with rather high annual interest rates.
The rise of decentralized exchanges has further simplified the process of owning and exchanging cryptocurrencies, and this method of trading has been rapidly gaining popularity lately.
Such exchanges, like Uniswap, allow users to carry out transactions directly between wallets. This method implies that users have to know how to store crypto properly and transact without relying on a third party.
Alternatively, users also have centralized exchanges at their disposal; however, there are certain risks regarding the storage of funds. Crypto held in a platform's accounts automatically falls under the custody of the exchange, which means that users don't have full control over their assets. Thus, most crypto commentators advise storing crypto in external wallets.
Examples of crypto wallets in 2021
Each user should remember some elementary security rules unrelated to cryptocurrencies themselves or the equipment that is used. The most important one is that users need to remember their password. It would seem obvious, but users regularly lose huge amounts of money simply because they forget passwords.
Blockchains do not have a password reset function, and there's no support service to call on. Forgetting a wallet's 12-word seed phrase, or writing it down on a medium that gets lost easily, is also a mistake. The most effective recipe for protecting crypto assets is to be diligent about storing passwords and to create a passphrase for the key.
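For readers curious what a seed phrase and passphrase look like programmatically, here is a minimal sketch using the open-source bip39 package for Node.js (the passphrase string is a placeholder; never hard-code a real one):

```javascript
// Minimal sketch using the bip39 npm package (npm install bip39).
const bip39 = require('bip39');

// Generate a random 12-word mnemonic (128 bits of entropy).
const mnemonic = bip39.generateMnemonic();
console.log(mnemonic);

// An optional passphrase modifies the derivation: the same mnemonic with a
// different passphrase yields a completely different wallet seed.
const seed = bip39.mnemonicToSeedSync(mnemonic, 'placeholder passphrase');
console.log(seed.toString('hex'));
```

Losing either the mnemonic or the passphrase makes the derived wallet unrecoverable, which is exactly why both need careful offline storage.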
In the case of online wallets, it is a little easier, and the effects of losing a password can be avoided because the keys are held by a trusted third party. The owner of the wallet does not control the keys; they simply log in with a username and password. Thus, if their password is lost, they can contact support services, confirm their identity and reset the password. However, from the perspective of decentralization, this is not the perfect option, as the user delegates control of their keys to a third party.
It is up to the user to decide what’s more important to them and if they indeed trust the company that hosts the gateway to their crypto holdings. Furthermore, any user should be responsible for their capital themselves because no crypto wallet or blockchain is responsible for forgetfulness or inattention.
There are several prominent types of wallets out there:
Hardware wallets
Hardware wallets represent a more sophisticated option, storing currencies on external offline devices. Some of the most popular solutions are Trezor, Ledger Nano X and KeepKey. These wallets usually come in the form of small flash-drive-like devices and can support thousands of cryptocurrencies.
For example, Trezor offers two types of wallets, Trezor One and Trezor Model T, which can be purchased for $60 and $193, respectively. The Trezor One wallet has two control buttons, and the newly developed Trezor Model T has a touch screen.
The device is connected to the user's PC using a cable. Security is ensured by the device itself, which stores the secret key and signs transactions offline. Even if viruses are present on the user's PC, that does not mean they have access to the wallet. Naturally, to avoid losing money or being scammed, users should buy such wallets only through the official websites and make sure the device is packaged as stated by the producer.
The process of connecting a wallet is quite simple: users need to go to the official website, download an app and set up a new wallet. The main requirement is to write down and save a mnemonic phrase of 24 words, then create and confirm a password.
Local wallets
Local wallets are the most popular type because they can be downloaded or installed onto devices. Users can enter such wallets only from the device on which they are installed. When using a local wallet, the owner has full control over their assets, as private keys are stored locally on the device without third parties having access to this information.
Today, some of the most popular local wallets are Jaxx, Exodus and Edge, which are examples of free multicurrency wallets that support a huge list of cryptocurrencies. In addition to a desktop version, these wallets tend to also have a mobile version. Most such platforms have been integrated with the likes of ShapeShift and Changelly, so currency conversion is carried out directly within the app without switching over to a cryptocurrency exchange.
Private keys are stored exclusively on the owner’s device, and protection is provided by using a PIN code, with the option to copy private keys for storage offline.
Web wallets
Web wallets work with cloud storage, and users can access them from any device. Such wallets are just apps on mobile phones or can be accessed via websites, which is very convenient. For example, Matbea, Coinbase and BitGo are all web wallets and exchanges in one service. Matbea supports only seven major cryptocurrencies, which is not a broad range by today’s standards, but in terms of security, this wallet has a head start.
Most of these services make use of two-factor authentication: a code sent via SMS or email plus a separate password. Even if a virus has settled on a user's PC, in no way will it be able to read the code from their mobile device to gain access to the wallet. And if a virus settles on a smartphone, it will not be able to read the password or email code. Files are regularly backed up, so even in the event of an accident or hard drive failure, a user's currency will be immediately restored.
Paper wallets
Finally, paper wallets are quite reliable, but because their public and private keys are printed on paper, they are not used very often. Even so, such wallets may be the most interesting way of holding crypto. In fact, a paper crypto wallet is just a sheet of paper with a printed QR code that encodes an address for storing cryptocurrency funds. The QR code first needs to be scanned to carry out cryptocurrency transactions.
This method of storing cryptocurrencies is fairly safe, as the keys are kept entirely out of reach of online fraudsters. Along with hardware wallets, paper wallets are often referred to as "cold storage," as they are completely isolated from the internet and cannot be hacked from the outside.
To create a paper cryptocurrency wallet, users need special software such as Bitaddress.org, which is open source. The service creates a cold storage wallet using randomly generated numbers right in one's browser. Secret keys remain with users and are not saved on Bitaddress.org's servers.
WalletGenerator also works like Bitaddress.org, with users needing to move the mouse to increase the randomness of the key generation. The developers also recommend turning off the internet and running the generator from a local HTML file after downloading the archive from GitHub.
Hybrid solutions
There are wallets that combine several of the methods mentioned above. For example, Casa, developed in mid-2020, combines the functions of a local and a mobile wallet, with the developers citing security as the main goal.
When creating a wallet, the user does not need to enter and save a seed phrase or personal data, only an email address and a name. In addition, the wallet does not track one's location or transmitted data and is devoid of third-party analytics tools. The user is prompted to create a key that will be stored on the device, and the backups will be split between Casa's own storage and Google or Apple cloud storage. Only the user has access to the key, which requires two-factor authentication.
Another wallet that provides a combined experience is Savl, a mobile wallet for Android and iOS that brings together a peer-to-peer platform, crypto wallet, messenger and cryptocurrency payment service. The wallet has been operating since 2020, and as in the case of Casa, the developers claim that special attention was paid to security and privacy.
When registering a user, the application generates a unique string of 12 words that is stored on the user’s device. No one except the user has access to it, not even the developers. Access to the app is protected by a six-digit PIN code that is set by the user.
Can a wallet be completely secure?
All crypto wallets are safe in their own way, if one chooses them carefully and understands why they are needed. Which wallet to choose depends on the specific person, but the main thing here is security and the ability to store private keys or seed phrases.
If a user needs to store a large amount of crypto, then it's better to buy a hardware wallet. Those constantly trading on exchanges can store funds in wallets created on those exchanges so as to make transactions quickly and avoid paying a transfer fee. However, if the exchange is hacked and there is no insurance fund in place, crypto may be lost. For everyday use, web wallets are well suited; the popularity of this type of wallet is due to the ability to quickly and easily sell various cryptocurrencies and make transfers directly to an exchange.
Overall, cryptocurrencies were created on the premise of decentralization, which means each user controls their own funds instead of a centralized entity. Hence, no matter what method for storing crypto the user chooses, they must bear the responsibility for their funds.
Cointelegraph does not endorse any of the products mentioned in the article. Each user should do their own research in order to pick the product that works best for them.
|
https://medium.com/@achennyk/crypto-wallets-in-2021-from-hot-to-cold-here-are-the-option-e2011eacc753
|
[]
|
2021-03-14 11:36:27.324000+00:00
|
['Blockchain Technology', 'Cryptocurrency', 'Blockchain', 'Bitcoin', 'Bitcoin Wallet']
|
2,070 |
Is AI going to take my job?
|
“Will AI take my job?” This is one of the most asked questions on the internet. Before answering it, we have to understand what AI actually is, how it works, and the history of technological innovation in general. That context will help us see whether there is really anything to worry about.
As its name suggests, artificial intelligence is intelligence like humans have, but created artificially. Before the age of AI, computers didn't have any intelligence of their own. Programmers had to program manually for every permutation and combination of a task. Computers couldn't do anything on their own, so something like face recognition was impossible: every face has its own characteristics, and programmers would have had to write code to recognize each face individually. But now, with the help of advanced mathematics and the increased computational power of computers, it is possible.
AI is nothing but another technological innovation, like the TV, the mobile phone or the internet. It is the history of every technical innovation that, in its baby steps, it creates a lot of buzz and a lot of skepticism. With the TV, everybody said it was an idiot box that would kill radio, but radio is still alive. With the internet, people said it would destroy the world, but here we are: our online lives have become as important as our offline lives.
It's our history: whenever there is a groundbreaking change in our society, we adapt to it. The moral of this short history lesson is that the same will happen with AI.
As AI is an intelligent machine, there is one thing machines are very good at, and that's doing repetitive tasks faster and more accurately than human beings. So some jobs are bound to change, like factory workers on an assembly line or receptionists at an office desk; jobs like these will be done by robots with AI. That's inevitable. If you are doing a job that consists of repetitive tasks, it will be replaced by AI sooner or later.
There are some jobs that will not be replaced by AI, at least not in the near future: jobs that require human creativity, emotion, imagination or any other ability that nature has given only to us. Jobs like writers, psychiatrists, scientists, lawyers, HR managers, or the same people who created AI, the software developers.
Now, what's the solution? If your job is on the non-replaceable list, you don't have to worry, but if it is on the first list, you have some options. If you are in the very early stage of your career, you can always switch jobs, but if you are in a later stage, there is a history lesson in this regard. All those people who lost their jobs to automation didn't simply become unemployed; the nature of their jobs changed. Factory workers who worked on manual packaging lost their jobs to assembly-line machines and changed their jobs from packaging to operating those same machines, and bank tellers whose job was taken by ATMs found new jobs in the same banks.
So the answer to the million-dollar question “Is AI going to take my job?” is “NO.” You will not be unemployed; just the nature of your job will change.
|
https://medium.com/@ganeshpawarpodcast/is-ai-going-to-take-my-job-c9ec7fddd360
|
['Ganesh Pawar']
|
2021-07-15 09:56:23.265000+00:00
|
['Unemployment', 'Technology', 'Jobs', 'Artificial Intelligence']
|
2,071 |
Too Many Small Steps, Not Enough Leaps
|
I was driving home the other day, noticed all the above-ground telephone/power lines, and thought to myself: this is not the 21st century I thought I’d be living in.
When I was growing up, the 21st century was the distant future, the stuff of science fiction. We’d have flying cars, personal robots, interstellar travel, artificial food, and, of course, tricorders. There’d be computers, although not PCs. Still, we’d have been baffled by smartphones, GPS, or the Internet. We’d have been even more flummoxed by women in the workforce or #BlackLivesMatter.
We’re living in the future, but we’re also hanging on to the past, and that applies especially to healthcare. We all poke fun at the persistence of the fax, but I’d also point out that currently our best advice for dealing with the COVID-19 pandemic is pretty much what it was for the 1918 Spanish Flu pandemic: masks and distancing (and we’re facing similar resistance). One would have hoped the 21st century would have found us better equipped.
So I was heartened to read an op-ed in The Washington Post by Regina Dugan, PhD. Dr. Dugan calls for a “Health Age,” akin to how Sputnik set off the Space Age. The pandemic, she says, “is the kind of event that alters the course of history so much that we measure time by it: before the pandemic — and after.”
In a Health Age, she predicts:
We could choose to build a future where no one must wait on an organ donor list. Where the mechanistic underpinnings of mental health are understood and treatable. Where clinical trials happen in months, not years. Where our health span coincides with our life span and we are healthy to our last breath.
Dr. Dugan has no doubt we can build a Health Age; “The question, instead, is whether we will.”
Dr. Dugan heads up Wellcome Leap, a non-profit spin-off from Wellcome, a UK-based Trust that spends billions of dollars to help people “explore great ideas,” particularly related to health. Wellcome Leap was originally funded in 2018, but only this past May installed Dr. Dugan as CEO, with the charge to “undertake bold, unconventional programmes and fund them at scale.” Dr. Dugan is a former Director of Darpa, so she knows something about funding unconventional ideas.
Leap Board Chair Jay Flatley promised: “Leap will pursue the most challenging projects that would not otherwise be attempted or funded. The unique operating model provides the potential to make impactful, rapid advances on the future of health.”
Now, when I said earlier that our current approach to the pandemic is scarily similar to the response to the 1918 pandemic, that wasn’t being quite fair. We have better testing (although not nearly good enough), more therapeutic options (although none with great results yet), all kinds of personal protective equipment (although still in short supply), and better data (although shamefully inconsistent and delayed). We’re developing vaccines at a record pace, using truly 21st century approaches like mRNA or bioprinting.
The problem is, we knew a pandemic could come, we knew the things that would need to be done to deal with it, and yet we — and the “we” applies globally — fumbled the actions at every step.
We imposed lockdowns, but usually too late, and then reopened them too soon. Our healthcare organizations keep getting overwhelmed with COVID-19 cases, yet, cut off from their non-pandemic revenue sources, are drowning in losses. Due to layoffs, millions have lost their health insurance. People are avoiding care, even for essential needs like heart attacks or premature births.
Our power lines are showing. The hurricane that is the pandemic is knocking them down at will. We might have some Health Age technologies available but not a Health Age mentality about how, when, and where to use them.
Dr. Dugan thinks she knows what we should be doing:
To build a Health Age, however, we will need to do more. We will need an international coalition of like-minded leaders to shape a unified global effort; we will need to invest at Space Age levels, publicly and privately, to fund research and development. And critically, we’ll need to supplement those approaches with bold, risk-tolerant efforts — something akin to a DARPA, but for global health.
Unfortunately, none of that sounds like anything our current environment supports. The U.S. is vowing to leave the World Health Organization and is buying up the world's supply of Remdesivir, one of the few even moderately effective treatment options. An “international coalition of like-minded leaders” seems hard to come by. Plus, only half of Americans say they'd take a vaccine even when it is here.
If COVID-19 is our Sputnik moment, we’re reacting to it as we did Sputnik, setting off insular Space Races that competed rather than cooperated, focused narrowly on “winning” instead of discovering. We will, indeed, spend trillions on our pandemic responses, but most will be short-term, short-sighted programs that apply band-aids instead of establishing sustainable platforms and approaches. We’re reacting to the present, not reimagining the future.
Credit: Darpa
Darpa's mission is “to make pivotal investments in breakthrough technologies for national security,” and it “explicitly reaches for transformational change instead of incremental advances.” Her background at Darpa makes Dr. Dugan uniquely qualified to bring this attitude to Leap, and to apply it to healthcare.
The hard part is remembering that it is not about winning the current war, or even the next one, but about preparing for the wars we’re not even thinking about yet.
Most of our population are children of the 20th century. Our healthcare system in 2020 may have some snazzier tools, techniques, and technologies than it did in the 20th century, but it is mostly still pretty familiar to us from then. If we truly want a Health Age, we should aspire to develop things that would look familiar to someone from the 22nd century, not the 20th.
Every time I read about the latest finding about our microbiome I think about how little we still know about what drives our health, just as our growing attention to social determinants of health reminds me how we need to drastically rethink what the focus of our “healthcare system” should be.
Not more effective vaccines but the things that make vaccines obsolete. Not better surgical techniques but the things that make surgery unnecessary. Not just better health care but better health that requires less health care. If we’re going to dream, let’s dream big.
That’s the kind of Leap we need.
Please follow me on Medium and on Twitter (@kimbbellard), and don’t forget to share if you liked the article!
|
https://kimbellard.medium.com/too-many-small-steps-not-enough-leaps-d25caa18a20
|
['Kim Bellard']
|
2020-07-27 22:25:28.161000+00:00
|
['Technology', 'Innovation', 'Health', 'Future', 'Healthcare']
|
2,072 |
Hive Mind
|
Bees are stinging our atmosphere
Conducting stranger experiments
Lasing realities at perception’s edge
Wasps chittering outer winds
Controlling would-be computers
Machines marauding life
Perhaps more than merely golems
What circuit’s freedom cannot be contained
By those electron clouds of old
Ones which bring their honey
At cost of an eternity’s labor?
By what minds do deceive
Leaping energies
Defying frequencies
Counting past infinity for fun?
In tandem sung a quiet song
Ghostly vapors, abducting night
Bohring parallax mounds
To dwell the stars
Bearing forth odd light
Waving silver wands
Mesmerized faces
Wizards of wayward wisdom
Fae of facile forests
Dwellers of night-terror
And the oppression of dreams
All too real to be but fancy
In eyes of paralyzed innocence
Psychological explication
Neurological sublimation
Wall-slinkers and dream-snatchers
Escapade across Schrödinger’s sanity
Throughout Einstein’s discovered duplicity
Walking Plancks unto sea of obscurity
Denying Heisenberg’s doubtful uncertainty
Buzzing away common err
Depressurized normality
At expense of four-dimensional
Substantiality
What left are we beneath the magnifying glass
In pen of social anxiety
By zoo of plausibly-denied sobriety
Caught dead-end drift society
Political mires and wars notoriety?
If ants shared in our common predicament
Would anyone behind the spyglass
Pay heed for but a moment’s lapse?
|
https://medium.com/poets-unlimited/hive-mind-16192029f9ff
|
['Immanuel R. Knight']
|
2017-03-21 21:10:26.941000+00:00
|
['Physics', 'Poetry', 'Mystery', 'Science', 'Technology']
|
2,073 |
The Rise of the Gig Economy
|
The gig economy has been with us as long as people have tried to find ways to make a living. However, it became part of our daily conversation in the first decade of this century and is usually used to describe part-time freelance work. Many people have come to know the concept of gig workers through startup businesses like Uber, TaskRabbit, Instacart, and Fiverr. Some of us only look at gig platforms as tools to make our lives easier, while the rest of us see them as ways to supplement our income or make a living.
Your Tech Support Helper May Be a Gig Worker
Some people don’t realize that some of the people they depend on every day may also be gig workers. When you call or email for computer support, you may be talking to a remote gig worker at Nerdapp. Even Apple tech support employs gig workers to handle some of their front-line tasks.
Freelance in a Pandemic
The pandemic that hit this year rapidly accelerated the gig economy’s role in how our world works. Many people have lost their jobs due to the economic downturn. Meanwhile, many businesses are looking for alternative ways to operate amid lockdowns. Both of these trends have increased the supply of and demand for gig workers.
Remote Tech Support in the Gig Economy
Few industries are as primed for expansion in the gig economy as IT services. The ability to work remotely is one of the key drivers of the gig economy, and this sector was one of the first to adopt the technologies that make remote work possible. In fact, IT support technicians are the ones keeping the gig economy infrastructure running. If you work from home, you may already have experience working with freelance computer support professionals to get your home office up and running.
While there are exceptions, most IT support workers have no need to work from a central location. With broadband internet and advanced security solutions, an IT worker can be on the other side of the world. Some businesses find this an ideal solution, especially for 24-hour coverage of help desks and other critical functions. This, plus the number of people working remotely during the pandemic, leads to an increase in remote IT support jobs.
Gig Work is Scalable
The nature of gig work also suits the technology sector, since it is much easier to scale up and down as necessary. A startup may not need or be able to afford full-time tech support but can purchase what it needs in the gig marketplace. As a company grows or large projects come along, it is easy to quickly fill the need. It is just as easy to scale down as business conditions change.
An IT Lifeline
From the gig worker's perspective, the gig economy is throwing a lifeline in the middle of this global disruption. But it is not only filling the need for jobs now. It is also serving the desire for more flexibility among younger tech workers entering the marketplace. No longer satisfied working 40 hours a week in a cubicle, younger workers are looking for new opportunities. Some are interested in flexible hours, the ability to work from home, and even the chance to pair work and travel. A whole segment of workers takes their work with them wherever they go, as long as there is broadband internet.
More Freelance Change Ahead
There is no doubt the gig economy will continue to grow as a part of the way we work even after the pandemic is behind us. For tech support professionals, the rise of more remote IT support jobs will mean more opportunities. For those in need of tech support, it may be more available than ever before from tech startups like Nerdapp.com, which launched in the UK earlier this year.
|
https://medium.com/@kelvinwetherill/the-rise-of-the-gig-economy-3a0e84da877d
|
['Kelvin Wetherill']
|
2020-12-22 17:32:59.132000+00:00
|
['Remote Working', 'Gig Economy', 'Lockdown', 'Technology', 'Jobs']
|
2,074 |
The Future of Branded Stablecoins
|
The second generation of stablecoins will likely be developed in the next few years, retaining the fundamentals of Gen 1 stablecoins but adding further advantages. These Gen 2 stablecoins will have features built in such as privacy, rewards, interest, credit, and so on. They will fully make use of the fact that stablecoins are a programmable currency, and thus can be extended with utilities relevant to their various use cases, including payment, remittance, and stores of value.
The third generation of stablecoins (Gen 3) will become a staple of the payment economy. Gen 3 stablecoins will be basket tokens, combining Gen 1, Gen 2, and even other Gen 3 stablecoins to combine the utilities of the individual tokens. In the future, different stablecoins will serve specialized needs and niches, so Gen 3 stablecoins would be able to bundle specialized stablecoins into one token, creating a product serving the intersection of different specializations. For example, there might be an e-commerce meta-stablecoin that combines several Gen 2 stablecoins together from Amazon, eBay, and Shopify, etc.
Why Gen 2 Stablecoins Will Dominate the Future Economy
The ability of cryptocurrencies to be an anonymous, borderless store of value has proven itself to be a real-world necessity for millions. In Venezuela, people cannot flee the country with their fiat money: they cannot send it internationally through their banks and they cannot physically carry their money with them, as it would be seized from them at the border.
As such, Venezuelans have turned to Bitcoin. From 2014 to 2016, the number of Venezuelan Bitcoin users rose from 450 to over 85,000. However, while Bitcoin may be a good store of value compared to the Venezuelan bolívar, Bitcoin is still extremely volatile, and when families put their life savings into it to leave the country, Bitcoin’s volatility can cause them to lose a lot. As more people learn about and adopt Bitcoin in Venezuela, more people will discover stablecoins and take advantage of the aptitude they have as a store of value. It is only a matter of time before stablecoin adoption comes, and adoption in Venezuela could be the catalyst for more and more people hearing about and using stablecoins. As the stablecoin market grows, so too will stablecoin technology. Gen 2 stablecoins with pertinent features will start being developed en masse. From there, it will take only a few companies to integrate stablecoin payment for many others to join them; blockchain technology has a connotation of risk, but there are many empirical benefits to stablecoins integration, and once the first few companies succeed, many others will follow.
The Future of the Stablecoin Industry
The story of cryptocurrencies is one of astronomical growth, with cryptocurrency being the best-performing asset class in numerous recent years. In just over ten years, cryptocurrency has developed into an industry with over $100 billion in market capitalization, and with the current pace of technological innovation, the crypto industry is only set to grow further. Stablecoins will expand within this growing cryptocurrency market, but they will also capture market share from the remittance and payment industries.
According to the World Bank, global remittance inflow in 2018 amounted to an estimated $700 billion. Most migrant workers in Asia send home approximately $200 monthly, but they must pay $12 in international transfer fees – half a day's wages gone. There is already an array of blockchain solutions being developed to address the remittance issue, but in coming years, stablecoins will become the winning solution because of low transaction costs and price stability across any period of time. Once the dissemination of stablecoin technology inevitably occurs, stablecoins could become a dominant player in the $600 billion remittance industry.
Global remittance volume is on the rise
Within the next ten years, stablecoins could also enter the $100 trillion payments industry because of their use in retail payments and cross-border payments. Within the retail space, many companies' loyalty programs are failing, and since 2002, companies that focused more on loyalty programs grew revenue at a rate 20% slower than companies that focused less on loyalty. Companies with failing loyalty programs need innovation and reliable methods of collecting data through those programs, both of which are problems that loyalty-integrated stablecoins help with. Retail payments are a $25 trillion sector of the overall payment industry, and Gen 2 stablecoins with retail-relevant features could conceivably gain a large portion of market share due to the features discussed in the first section.
Stablecoin transaction fees are far lower than traditional payment processor fees
One of the biggest limiting factors today for cryptocurrency to expand in the $25 trillion cross-border payments industry is not the technology, but rather a lack of awareness, a stigma of volatility surrounding cryptocurrencies and entrenched special interest groups. Stablecoins, though, are designed to be stable and low-risk, so it would be easier for risk-averse banks to adopt them as potential cross-border payment solutions. In total, the market for remittance, retail, and cross-border payments is more than $50 trillion, and given that stablecoins are likely to take market share from existing financial institutions within these industries over the next ten years, 2030 could very well see a multi-trillion-dollar stablecoin industry.
Today, the stablecoin industry is still small relative to the cryptocurrency space as a whole, and the industry is dominated by Tether, which has ten times the market capitalization of the next biggest stablecoin, USDC. However, as the stablecoin market matures, market capitalization will become more evenly distributed among stablecoins. There were 193 stablecoin projects announced in 2018, compared to 81 total projects announced in prior years. With more players entering the stablecoin market and greater technological innovation, Tether will inevitably give up much of its market share to its competitors, especially given its decline in reputability over the years. In April 2019, the office of the New York Attorney General alleged that iFinex, Tether's parent company, had misused $900 million of Tether's cash reserves to hide an $850 million loss. Days later, Tether revealed that it is only 74% backed by cash and securities, contradicting its previous claims that USDT is backed 1:1 by fiat. Trust in Tether will continue to erode, leaving space for coins that offer more transparency to overtake it. In the future, there could be hundreds of stablecoins, each serving the needs of specific markets, just as today there are hundreds of banks and payment companies that cater to different needs and geographical areas.
About Stably
Stably is a venture-capital backed startup. Our vision for the future is to build an alternative digital bank powered by stablecoins — faster, cheaper, transparent and globally accessible.
What is USDS?
USDS is a stablecoin created by Stably and issued by Prime Trust, a Nevada-chartered trust company that is also the regulated administrator for USDS. Stablecoins are cryptocurrencies that are equivalent to national currencies — i.e. digital cash. Regulated, fiat-backed stablecoins are backed by physical reserves of cash and can be redeemed at a 1:1 ratio.
USDS virtually eliminates the crippling price volatility of traditional cryptocurrencies like Bitcoin and Ethereum, while still retaining many of their useful characteristics.
Follow our Medium blog to stay up to date with all of our latest announcements.
www.stably.io
Find us on Twitter
Like us on Facebook
Join us on Telegram
Contact:
Legal: [email protected]
Press: [email protected]
Exchanges or market makers: [email protected]
Partnerships: Edward Siafa, Business Development Manager— [email protected]
Investors: Kory Hoang, Chief Executive Officer — [email protected]
|
https://stablycoin.medium.com/the-future-of-branded-stablecoins-a2ec46aae1ff
|
[]
|
2020-03-20 03:15:15.729000+00:00
|
['Payments Technology', 'Remittances', 'Blockchain', 'Loyalty Program', 'Stable Coin']
|
2,075 |
Smart Contract and Web3! All that you need to know.
|
Smart Contract and Web3! All that you need to know.
A glorious guide to creating your own smart contracts and interacting with them using web3
1. A system to store data on the blockchain and retrieve the stored data from the blockchain
Pre-requisite:
To follow this tutorial, you need to have a basic knowledge of the following programming languages:
HTML: HTML is used to design the structure of the website
JavaScript: JavaScript is used for interacting with the deployed smart contracts to store, retrieve, and manipulate information
Solidity: Solidity is used for writing smart contracts
Tools Requirement:
ChainIDE [an online cloud-based, multi-chain IDE] available at: https://chainide.com/
MetaMask [a wallet for web3], available at: https://metamask.io/
!Note: For this tutorial, you don’t need to download any tools or libraries except MetaMask and web3.js.
Libraries:
web3.js [web3.js is a collection of libraries that allow you to interact with a local or remote ethereum node using HTTP, IPC, or WebSocket], available at: https://cdn.jsdelivr.net/gh/ethereum/web3.js/dist/web3.min.js
Experiment Setup:
Once you have access to ChainIDE and MetaMask, connect MetaMask to ChainIDE and buy some ethers from a faucet for any of the following test networks:
Ropsten Test Network (suggested)
Kovan Test Network
Rinkeby Test Network
Goerli Test Network
To get test ethers from a faucet, simply choose any of the above-mentioned networks and click Buy → Test Faucet → Get Ether, and you will be redirected to the selected network's faucet. The method varies a little from network to network, but all are quite easy.
2. MetaMask
After adding the extension of MetaMask to your browser, make sure you have selected the faucet network.
Method
First of all, we will write a smart contract that can store an employee's information, such as id, first name, last name, address, mobile number, etc.
!Note: We’ll use ChainIDE for the whole tutorial, ChainIDE supports all the programming languages that are needed to complete this tutorial.
The following code is a smart contract written in Solidity that is used to store an employee's information on the blockchain. To make use of this smart contract, we need to deploy it on the blockchain, and before we can deploy it, we need to compile it.
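The contract itself was embedded as a snippet in the original post; here is a minimal sketch of what such a contract might look like, reconstructed from the description above (the structure and names are illustrative, not the author's exact code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative employee-records contract: stores and retrieves employee data.
contract Employee_Records {
    struct Employee {
        uint id;
        string firstName;
        string lastName;
        string homeAddress;
        string mobileNumber;
    }

    mapping(uint => Employee) private employees;

    // Store (or overwrite) an employee record on-chain.
    function setEmployee(
        uint id,
        string memory firstName,
        string memory lastName,
        string memory homeAddress,
        string memory mobileNumber
    ) public {
        employees[id] = Employee(id, firstName, lastName, homeAddress, mobileNumber);
    }

    // Retrieve a stored record; view functions cost no gas when called off-chain.
    function getEmployee(uint id)
        public
        view
        returns (string memory, string memory, string memory, string memory)
    {
        Employee memory e = employees[id];
        return (e.firstName, e.lastName, e.homeAddress, e.mobileNumber);
    }
}
```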
The pragma keyword defines the compiler version that will be used to compile the Solidity code. Once you have finished writing your smart contract, simply compile it from the compile panel, and you will get the ABI for it.
3. Contract Compilation
The ABI, which stands for application binary interface, is needed when you want to interact with the smart contract using web3.
Now, the next step is to deploy the compiled smart contract.
4. Contract Deployment
To deploy a smart contract, you need to have some faucet ether in your wallet to pay the gas fee for deployment. In fig. 4, we can see we paid 25 gwei to deploy our “Employee_Records.sol” smart contract.
Once a smart contract is deployed, it is assigned a contract address, as we can see in fig. 4, highlighted point 3. From the interaction panel, we can copy our smart contract address and can also check our deployed smart contract on the faucet network.
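The post stops short of showing the web3 calls themselves; below is a minimal sketch of how a page with MetaMask installed might store and retrieve a record. The ABI array and contract address come from the compile and deploy steps above, and the method names match the illustrative contract sketch earlier, so treat them as assumptions rather than the author's exact code:

```javascript
// Minimal web3.js interaction sketch (assumes MetaMask and the web3.min.js
// script from the CDN above are loaded on the page).
const web3 = new Web3(window.ethereum);

async function main() {
  // Ask MetaMask for account access.
  const [account] = await window.ethereum.request({ method: 'eth_requestAccounts' });

  // Paste the ABI from the compile panel and the address from deployment.
  const abi = [/* ABI from the compile panel */];
  const address = '0xYourDeployedContractAddress';
  const contract = new web3.eth.Contract(abi, address);

  // Store a record: a state-changing call, signed by MetaMask, costs gas.
  await contract.methods
    .setEmployee(1, 'Ada', 'Lovelace', '12 Example St', '555-0100')
    .send({ from: account });

  // Retrieve it back: a read-only call, free of gas.
  const record = await contract.methods.getEmployee(1).call();
  console.log(record);
}

main();
```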
|
https://medium.com/nerd-for-tech/smart-contract-and-web3-all-that-you-need-to-know-f33fbe2aae0
|
[]
|
2021-07-24 00:29:37.256000+00:00
|
['Blockchain Technology', 'Solidity', 'Web3', 'Chainide']
|
2,076 |
Machine Learning — The Present and Future Impact on Web Development
|
Machine learning has been impressive so far, allowing us to revolutionize work being done in an array of outwardly unconnected areas. Machines now assist humans in everything from market forecasting and algorithmic trading, to predicting when a bridge is likely to collapse.
For web developers, machine learning has yielded a wealth of unexpected benefits and only promises to bring more as the associated technologies improve. While Artificial Intelligence (AI) powered by machine learning is unlikely to replace human programmers and web developers anytime soon, the fact that machines have shown the ability to sift through enormous sets of data and find important patterns already indicates a level of indispensability.
There’s an excellent chance that machine learning will fundamentally change the website and web app development process.
What is Machine Learning?
Put simply, machine learning is the study of certain algorithms and statistical techniques that allow computers to perform complex tasks without receiving instructions beforehand. Instead of using explicit pre-programming directing certain behavior under a certain set of circumstances, machine learning relies on pattern recognition and associated inferences.
Once the algorithm is constructed, the machine is fed training data for which inputs and outputs are already known. The algorithm is then evaluated on the degree to which it arrives at the correct output, given its input, and is modified accordingly.
Through this supervised learning and training, the machine is refined to generate the most accurate predictions and extrapolations possible. Supervised learning is effective at sifting through data which can be cleanly categorized, such as handwriting, based on known letter patterns.
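To make the supervised workflow concrete, here is a toy sketch in JavaScript: a deliberately simple nearest-neighbor classifier (not any specific production algorithm) that is "trained" on labeled points and then predicts labels for new inputs.

```javascript
// Toy supervised learning: 1-nearest-neighbor classification.
// Training data: inputs (x, y) with known labels.
const training = [
  { x: 1, y: 1, label: 'A' },
  { x: 2, y: 1, label: 'A' },
  { x: 8, y: 9, label: 'B' },
  { x: 9, y: 8, label: 'B' },
];

// Predict by finding the closest labeled example to the new input.
function predict(x, y) {
  let best = null;
  let bestDist = Infinity;
  for (const p of training) {
    const d = (p.x - x) ** 2 + (p.y - y) ** 2;
    if (d < bestDist) {
      bestDist = d;
      best = p;
    }
  }
  return best.label;
}

console.log(predict(1.5, 1.2)); // 'A' — near the first cluster
console.log(predict(8.5, 8.5)); // 'B' — near the second cluster
```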
Through supervised training and refinement, the algorithm can be turned toward unsupervised learning, in which it examines data where there are no known patterns and attempts to find the patterns on its own. Unsupervised machine learning is useful for extrapolating patterns from sets of data, such as trying to predict the future price of a stock or the likely preferences of consumers.
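Unsupervised learning can be sketched just as simply. Here the data carries no labels at all, and a toy one-dimensional k-means loop discovers two groups on its own (this naive version assumes neither cluster ever empties):

```javascript
// Toy unsupervised learning: 1-D k-means with k = 2 clusters.
const data = [1, 2, 1.5, 8, 9, 8.5]; // no labels — structure must be discovered

let centroids = [data[0], data[3]]; // naive initialization

for (let iter = 0; iter < 10; iter++) {
  // Assign each point to its nearest centroid.
  const clusters = [[], []];
  for (const v of data) {
    const i = Math.abs(v - centroids[0]) <= Math.abs(v - centroids[1]) ? 0 : 1;
    clusters[i].push(v);
  }
  // Move each centroid to the mean of its assigned points.
  centroids = clusters.map(c => c.reduce((a, b) => a + b, 0) / c.length);
}

console.log(centroids); // ≈ [1.5, 8.5] — two groups found without labels
```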
For web design, in particular, unsupervised learning is especially important, as developers want to try and stay ahead of future demand. Harvesting and mining data about customers attempts to figure out their preferences in order to design and deliver a website that encourages a satisfying experience.
The Impact of Machine Learning on Web Development
Analyzing data is crucial to all forms of web development. As machine learning allows us to augment our capacity to analyze and organize data, it has revolutionary potential to improve and streamline web development. Here’s just a smattering of what machine learning can offer web developers and webmasters:
Provide a Dynamic Alternative to Conventional Data Mining
Conventional data mining techniques have existed long before the advent of sophisticated machine learning technology. With these techniques, we have been able to discern important patterns in bundles of data. Before machine learning, however, the question of what to do once patterns have been found was for humans to decide. Machine learning can do everything that the older techniques could do but also automates responses to detected patterns.
For example, let’s say that you’re looking at your customers’ search history on your website. Machine learning can help sift through that history to find what sorts of products your customers are most likely to be interested in, and then automatically offer them those products. In that way, some of your marketing is streamlined and automated.
Better Understand Customer Behavior
If you look at your customers' behavior, you could be forgiven for sometimes seeing nothing more than an unsystematic and apparently random mess of events and actions. Machine learning helps you see the hidden patterns and respond accordingly, drawing on data not only from search history but also from conversations between customers that might occur on your website. The idea is simple — the more complete the data set, the more targeted and effective your design efforts can be.
More Precise Targeting for Personalized Content and Information Delivery
If your website specializes in providing content rather than selling products, machine learning can still help. Websites like YouTube base the videos that they recommend on the sorts of videos that those users have already watched, liked and otherwise responded to positively in the past. No matter the topic of the website you’re developing, a dose of machine learning along the way can almost certainly target your efforts more precisely.
Improve Search Results and Product Discovery
Human nature being what it is (mercurial and unfocused), not every search result is relevant to what a customer truly wants. Machine learning can help obviate that problem through sheer volume of data collected. The more you have, the more random spur-of-the-moment searches will be screened out as unimportant, leaving the algorithm to yield legitimate product interest and seemingly “magically” accurate additional recommendations.
Simplify and Speed Up Website and App Development
The reality is that AI website builders are still in their infancy. The code they produce for websites and apps can be clunky, hard to edit, and aesthetically questionable, to put it politely. Where the technology shines is in completing relatively simple tasks, such as a landing page, which frees up a developer or designer for higher-level creative ideation.
Machine learning can also be helpful in the area of problem-solving and refinement after the site goes live. For example, you can put an algorithm to work examining user complaints about features and flaws that have crept into the design. Examining common user complaints can help developers take corrective measures or avoid these pitfalls in future projects.
Eliminate or Preemptively Respond to Cybersecurity Threats
If you run a business that stores large quantities of customer data on company servers, you automatically become a hacker target. Effective cybersecurity has become mandatory for a website or online business. The good news is that malware attacks tend to follow predictable patterns, which makes preventing them an excellent repetitive task for machine learning algorithms.
For instance, algorithms can be used to detect likely phishing scams by examining the particular language used in previous attempts and comparing it to incoming email. Other machine learning uses include warning when software or hardware needs updating or reconfiguring in order to best repel new or evolving threats.
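A stripped-down illustration of that language-comparison idea follows; real systems use trained classifiers over far richer features, so this word list and threshold are purely hypothetical:

```javascript
// Toy phishing heuristic: flag mail containing words that were frequent in
// previously seen phishing attempts.
const suspiciousWords = ['verify', 'urgent', 'password', 'suspended', 'click'];

function phishingScore(text) {
  const words = text.toLowerCase().split(/\W+/);
  const hits = words.filter(w => suspiciousWords.includes(w)).length;
  return hits / words.length; // fraction of suspicious words
}

const mail = 'URGENT: verify your password or your account will be suspended';
console.log(phishingScore(mail) > 0.2 ? 'flag for review' : 'looks ok');
```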
The Big Idea here is that any website you build and take online assumes a security risk. Machine learning can buy at least some peace of mind in that area as well as reduce costs needed for IT staff.
Fundamentally Changing Web Development
Even if AI/ML-powered website and app builders aren't an effective substitute for human programmers (yet), given the intricate web design process and the need for creativity and intuition, any work that can be offloaded to an algorithm is a benefit. Expect more of this in the future. Rather than seeing “smart” website builders as a threat, those in the development industry should embrace them as a tool. Do carpenters eschew hydraulic nail drivers on the assumption that they will one day put them out of work? Not the smart ones. Instead, they see a better, faster project completion rate.
Final Thoughts
Machine learning is a revolutionary technology of such broad scope and power that, as it develops, it will likely leave few aspects of human life untouched. In the field of web development, as in other areas, its astounding pattern-recognition skill paired with automatic learned responses can reduce the amount of hard labor involved in creating a website, as well as add insights gleaned from data sets too massive for a human to analyze.
by Will Ellis
Originally published on GrapeCity.com
|
https://medium.com/grapecity/machine-learning-the-present-and-future-impact-on-web-development-d4385598d7a7
|
['Grapecity Developer Solutions']
|
2019-09-26 14:46:41.576000+00:00
|
['Machine Learning', 'Web Development', 'Technology', 'Developer', 'Programming']
|
2,077 |
Huawei Reportedly Tested a ‘Uighur Alarm’ to Track Chinese Ethnic Minorities With Facial Recognition
|
Huawei Reportedly Tested a ‘Uighur Alarm’ to Track Chinese Ethnic Minorities With Facial Recognition
The system also identifies information such as age and sex
Photo: Greg Baker/AFP via Getty Images
Chinese tech giants Huawei and Megvii have allegedly tested software that could identify Uighurs, an ethnic minority in China, according to a new report from the Washington Post and video surveillance trade publication IPVM.
The system being tested tried to identify not only whether a person was Uighur but also information such as their age and sex. If the system detected a Uighur person, it could notify government authorities with a “Uighur alarm.” The system relies on Huawei's cameras, cloud computing servers, and other hardware, plus Megvii's facial-recognition algorithms.
Uighurs have faced increasing surveillance and incarceration in China, from the mass collection of DNA to allegedly being forced into more than 250 detention centers.
Huawei and Megvii's technology is not new. Last year, IPVM identified more than a dozen state-run projects that use A.I. to try to detect Uighurs. Some of this research has even been done publicly, like a 2018 paper mentioned by the Post that specifically tried to differentiate Uighur facial characteristics from those of people of Korean and Tibetan descent.
However, the companies that typically supply this technology specialize in security cameras and surveillance tech, like Dahua and Hikvision, instead of mainstream tech companies. Dahua, Hikvision, and Megvii have all been sanctioned by the U.S. government for supplying this technology.
Read the story from the Washington Post for more information. I also suggest following BuzzFeed News reporter Megha Rajagopalan. She’s covered the topic extensively from within China, and you can even read about her work in an interview on OneZero.
|
https://onezero.medium.com/huawei-reportedly-tested-a-uighur-alarm-to-track-chinese-ethnic-minorities-with-facial-4cbddac9f99f
|
['Dave Gershgorn']
|
2020-12-08 21:25:12.266000+00:00
|
['Surveillance', 'Artificial Intelligence', 'Technology']
|
2,078 |
The Narrowing Rift: Voice UI and Conversational UI
|
The Narrowing Rift: Voice UI and Conversational UI
If today’s voice operated devices AREN’T conversational, what does “conversational” even mean?
This is the second in a series of posts inspired by my time as a workshop speaker and attendee at Interaction 17 (February 2017, New York City).
In my last post, we talked about the state of voice user interfaces (VUI) at this moment in time. Voice user interfaces have gone mainstream and are changing lives and increasing accessibility for many consumers.
At the same time, today’s voice user experiences (best known as Cortana, Alexa, and Google Home) remain rooted in a very simple, command-and-control methodology. We can only call the current experiences “conversational” in the broadest sense of the word — as spoken words exchanged.
A popular and sometimes contentious topic during the Interaction 17 proceedings was conversational UI (CUI). In general, this currently refers to chat bots and other written-input user interfaces. A frequent question raised: is Alexa conversational? How do these devices fall short of human standards?
Conversing with Alexa during my time on the Alexa voice design (VUI) team.
Facebook’s Messenger Bots are the most well-known example of conversational UI these days, although several public Twitter and Slack bots fit the CUI description. Notably, chat bots are almost universally implemented via graphical output and text input, rendering them still fundamentally different from voice UIs… for the time being.
Defining Conversation
How do we define conversation after taking it for granted our entire lives? Paul Grice published four maxims for conversation in a fairly dense paper on the subject.
In his IxD17 talk “Conversation is More than Interface”, Paul Pangaro applied Gordon Pask’s Conversation Theory to define conversation as Context, shared Language, Exchange, Agreement, and Transaction.
Paul Pangaro sets a conversational tone during his #IxD17 CUI talk.
Further, the typical outcome of conversation is beyond direct action: it is often the building of a shared history and trust.
This is where we fall short in today’s voice systems: they are fairly ignorant of a shared history, and have no concept of how they might engender trust.
On the subject of trust, researcher Christina Xu shared an important insight regarding Chinese digital culture in her talk “Convenient Friction: Observations on Chinese UX in Practice.” In that environment, conversational interfaces are routinely used for commerce, since they are perceived as more trustworthy. And yet, those interactions in China are still generally run by actual people. What could we learn about trust in commercial conversation transactions in other cultures to inform conversational UI?
Christina Xu walks us through the extensive use of WeChat in Chinese culture for conversational transactions.
Back to Paul Pangaro’s talk: he further expanded on Gordon Pask and Hugh Dubberly’s work, describing four basic conversational frames. Two of those frames can be easily found in the current generation of VUI.
Controlling: specifying a goal with means of achieving it (“Play my Prince station on Pandora.”)
Delegating: asking for an outcome without specifying how to achieve it (“Play some uptempo music.” )
At the same time, two other conversational frames were described that go beyond most voice user interfaces today:
Guiding: discussing the means of achieving a goal (“I want to hear some music. How should I do it?”)
Collaborating: mutually deciding on goals between both participants. (“What should we do?”)
These less-common frames would be more helpful in situations where the customer is less experienced with the system, and indeed training and onboarding are big hurdles for today’s systems. And what if the customer’s goal is simply to be entertained? There’s still a certain something missing.
Craftsmanship in Conversation
Once we’ve built a framework for conversation, we must paint in the details — writing the actual text delivered in the exchanges.
Later in the CUI session, researcher Elizabeth Allen walked us through how Shopify uses cross-channel bots to emulate a marketing employee’s exchanges back in North America. These bots reach out via text-based channels to offer to launch Facebook ad campaigns based on sales trends. Even though these were strictly graphical/text interactions, some customers began to reply to these bots as if they were actual people.
And yet, Elizabeth brought a few key cautions that can shatter this suspension of disbelief. In particular, customers can find these bots pushy if the timing and length of responses are not carefully tuned.
Our brains don’t give text-based conversational UI the anthropomorphizing “benefit of the doubt” that we apply to voice-delivered user interfaces. This puts greater pressure on CUI designers to be writers, keeping an eye towards creating the illusion of engagement. Voice UIs with good text-to-speech synthesizers sometimes get this illusion largely for free.
In a later talk in the CUI track, designer Whitney French called out 5 metrics for creating engaging conversational UI: intelligence, flow & cadence, helpfulness, personality, and utility. While these are all subjective metrics, the most difficult to emulate is personality; humor in particular is highly subjective. These metrics can also be applied to today’s voice UIs, but the burden of brevity is greater for spoken UI.
These metrics do give us a good framework for building what may be a (subjectively) engaging conversation. And it’s a fine line to walk. Most conversational UIs probably seek to be comfortable, but not fully anthropomorphized. Yet for spoken UI, it is extremely hard to prevent the brain from viewing the source of the conversation as human. What does this mean for the coming collision of conversational UI and voice UI?
Cautionary Creepy Dolls
Let’s take a VERY recent example: My Friend Cayla. This toy doll is now banned in Germany as illegal to sell, and the government has gone so far as to order parents to destroy the toy. What went wrong?
Cayla functions in a very similar way to other voice user interfaces on the market. To understand children’s speech, she transmits audio files over the Internet to a cloud service. Once she understands the speech, she generates a response in a synthetic voice using a text-to-speech system, and that audio file is sent back to the toy for playback.
My Friend Cayla, a doll with a voice user interface that has come under fire at a governmental level as a tool enabling illegal espionage (image from the Google Play store)
Unfortunately, Cayla doesn’t seem to adhere to the same stringent security standards that Amazon, Microsoft, and (I hope and assume) Google apply to these conversations. Those companies intentionally do not market to children, since there are significant ethical issues when a child conducts conversations that can be recorded. Furthermore, the doll’s Bluetooth connection was found to be insecure, allowing attackers to use the toy for monitoring or even communication with the child.
Some of the lessons learned here are simple infosec lessons: be cautious when taking input with children, and make sure that any device equipped with live microphones or cameras CANNOT be controlled by third parties.
But there’s also an important lesson for CUI designers here: if we are too good at our jobs, could we put our customers at risk? Elizabeth Allen mentioned in her speech how Shopify observed their CUIs occasionally eliciting more information than is necessary. One presumes this is thanks to the successful illusion of a human conversation. Children are faster to suspend disbelief, so the ethical issue is more pronounced. What might they tell a doll (or a digital assistant) that they trusted? Their address? Financial information? Or worse?
With Great Power Comes Great Responsibility
To quote the fictional Ian Malcolm from one of my favorite films, Jurassic Park:
“ …Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Every important discussion is improved with a little bit of Jeff Goldblum.
As our voice-based digital assistants move beyond rudimentary voice exchanges and begin to move towards more conversational spoken UI, we as designers will confront more ethical considerations.
Just a few of the questions we’ll face as VUI and CUI collide:
When is it appropriate for these systems to behave in a “human” way, and what does that mean?
How up front should conversational systems be about their synthetic nature?
How much control should customers have over what information about them is tracked in a conversational context?
What damage could be done if a customer overdiscloses to a voice UI capable of surveillance, believing it to be human?
For privacy-minded customers who will not consent to long-term learning and tracking, how can conversational UIs still provide value?
Can anyone truly trust a conversation partner who is designed ultimately to drive sales?
Can an assistant that customizes its personality to suit the customer be trustworthy? Are we trusting the brand, or the adapted personality?
While I believe we should continue to pursue a more conversational world in voice UI, I also believe it should be done responsibly. In these challenging times, how can we use the great power of voice user interfaces and conversational understanding to do the most good?
Wading into Deeper Waters
In the last post, we talked about how empowering voice user interfaces are to a wide variety of customers underserved by visual/physical UI.
A key takeaway from Interaction 17 for me was a more formal taxonomy of the parlor tricks that can make our VUIs seem more conversational in nature in the short term. For voice UI designers looking to improve the craftsmanship in their system’s spoken replies, the conversational UI insights above provide a good starting point.
But in many ways, the sessions raise more questions than they answered. In my next post, we’ll dive even deeper into several of the blind spots that current voice user interfaces must address if they seek to become truly conversational beyond command and control.
May the voice be with you.
|
https://medium.com/ideaplatz/the-narrowing-rift-voice-ui-and-conversational-ui-7d5c95cf086c
|
['Cheryl Platz']
|
2017-02-24 01:21:20.820000+00:00
|
['Technology', 'Alexa', 'UX', 'Voice Recognition', 'Design']
|
2,079 |
DOM Manipulation — Custom Events and Event Delegation
|
Photo by Lukas Boekhout on Unsplash
JavaScript is one of the most popular programming languages in the world. To use it effectively, we’ve to know about the basics of it.
In this article, we’ll look at how to emit custom events, simulating mouse events, and event delegation.
Custom Events
We can emit custom events in our JavaScript code.
To do that, we can use the createEvent method and then use addEventListener to listen to the event.
For instance, if we have a button:
<button>
click me
</button>
We can write:
const button = document.querySelector('button');
const customEvent = document.createEvent('CustomEvent');

button.addEventListener('hello', (event) => {
  console.log(event.detail.foo)
}, false);

customEvent.initCustomEvent('hello', true, false, {
  foo: 'bar'
});

button.dispatchEvent(customEvent);
We called createEvent to create a custom event. We pass in 'CustomEvent' to create a custom event.
Then we called initCustomEvent to initialize the custom event with data.
Next, we called dispatchEvent to trigger the event.
Then in the event handler we passed into addEventListener , we log the data which in the detail property.
So we get 'bar' when we access the foo property.
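Note that createEvent and initCustomEvent are older, deprecated APIs. As a minimal sketch (not from the original article), the same behavior can be written with the modern CustomEvent constructor:

const button = document.querySelector('button');

button.addEventListener('hello', (event) => {
  console.log(event.detail.foo); // 'bar'
}, false);

// the constructor takes the event name plus an options object;
// bubbles and cancelable mirror the second and third initCustomEvent arguments
const customEvent = new CustomEvent('hello', {
  bubbles: true,
  cancelable: false,
  detail: { foo: 'bar' }
});

button.dispatchEvent(customEvent);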
Simulating or Triggering Mouse Events
We can trigger mouse events by using the initMouseEvent method.
For instance, if we have:
<button>
click me
</button>
We can write:
const button = document.querySelector('button');

button.addEventListener('click', (event) => {
  console.log(event);
}, false);

const simulatedClick = document.createEvent('MouseEvents');
simulatedClick.initMouseEvent('click', true, true, document.defaultView, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
button.dispatchEvent(simulatedClick);
We called createEvent with the 'MouseEvents' argument.
Then we called the initMouseEvent method on the result. It takes many arguments.
The arguments are in the following order:

type — the type of mouse event we want to trigger
canBubble — indicates whether the event bubbles or not
cancelable — indicates whether the event is cancelable or not
view — the abstract view of the event, which should be window
detail — the click count
screenX — the event’s screen x coordinate
screenY — the event’s screen y coordinate
clientX — the event client’s x coordinate
clientY — the event client’s y coordinate
ctrlKey — whether the Ctrl key is pressed during the event
altKey — whether the Alt key is pressed during the event
shiftKey — whether the Shift key is pressed during the event
metaKey — whether the meta (Windows / Command) key is pressed during the event
button — the mouse button
relatedTarget — the related event target
This is a deprecated API so other solutions should be considered.
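For reference, a minimal sketch of a modern alternative (assuming the same button markup as above) uses the MouseEvent constructor instead:

const button = document.querySelector('button');

button.addEventListener('click', (event) => {
  console.log(event);
}, false);

// the options object replaces the long initMouseEvent argument list
const simulatedClick = new MouseEvent('click', {
  bubbles: true,
  cancelable: true,
  view: window
});

button.dispatchEvent(simulatedClick);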
Photo by Lutz Baumann on Unsplash
Event Delegation
We can use one event handler to listen to events originating from different sources.
For instance, if we have:
<button>
button 1
</button>
<button>
button 2
</button>
<button>
button 3
</button>
We can write:
document.addEventListener('click', (event) => {
if (event.target.tagName.toLowerCase() === 'button') {
console.log(event.target.textContent);
}
}, false);
We listen to the click event on the whole page with document.addEventListener .
Then we get the element where the event originated with event.target.
If we clicked a button, then we log the text content.
Conclusion
We can create custom events with client-side JavaScript code.
We can also simulate mouse events.
Also, we can use event delegation to use one event listener to listen to events from multiple sources.
|
https://medium.com/javascript-in-plain-english/dom-manipulation-custom-events-and-event-delegation-3d2f12bd245c
|
['John Au-Yeung']
|
2020-06-27 08:24:44.723000+00:00
|
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
|
2,080 |
Geospatial Imagery Analytics Market to 2025 scrutinized in new research including leading players…
|
Geospatial Imagery Analytics Market to 2025 scrutinized in new research including leading players: DigitalGlobe Inc., Environmental Systems Research Institute Inc, Google LLC, Harris Corporation, Hexagon AB
Geospatial Imagery Analytics Market
The Global Geospatial Imagery Analytics Market is expected to grow from USD 3,925.45 Million in 2018 to USD 22,123.49 Million by the end of 2025 at a Compound Annual Growth Rate (CAGR) of 28.02%.
The positioning of the Global Geospatial Imagery Analytics Market vendors in FPNV Positioning Matrix are determined by Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) and placed into four quadrants (F: Forefront, P: Pathfinders, N: Niche, and V: Vital).
The report deeply explores the recent significant developments by the leading vendors and innovation profiles in the Global Geospatial Imagery Analytics Market, including DigitalGlobe Inc., Environmental Systems Research Institute Inc, Google LLC, Harris Corporation, Hexagon AB, AeroVironment Inc., Autodesk, Inc., Bentley Systems, Inc., Fugro N.V., KeyW Corporation, Planet Labs, Inc., RMSI Pvt. Ltd., Satellite Imaging Corporation, Trimble Inc., and UrtheCast Corporation.
Get sample copy of this report: http://bit.ly/35NLeTg
On the basis of Type, the Global Geospatial Imagery Analytics Market is studied across Imagery Analytics and Video Analytics.
On the basis of Collection Medium, the Global Geospatial Imagery Analytics Market is studied across Geographic Information Systems (GIS), Satellites, and Unmanned Aerial Vehicles (UAVs).
On the basis of Vertical, the Global Geospatial Imagery Analytics Market is studied across Agriculture, Defense & Security, Energy, Utility, and Natural Resources, Engineering & Construction, Environmental Monitoring, Government, Healthcare & Life Sciences, Insurance, and Mining & Manufacturing.
For the detailed coverage of the study, the market has been geographically divided into the Americas, Asia-Pacific, and Europe, Middle East & Africa. The report provides details of qualitative and quantitative insights about the major countries in the region and taps the major regional developments in detail.
In the report, we have covered two proprietary models, the FPNV Positioning Matrix and Competitive Strategic Window. The FPNV Positioning Matrix analyses the competitive market place for the players in terms of product satisfaction and business strategy they adopt to sustain in the market. The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies. The Competitive Strategic Window helps the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. During a forecast period, it defines the optimal or favorable fit for the vendors to adopt successive merger and acquisitions strategies, geography expansion, research & development, new product introduction strategies to execute further business expansion and growth.
Research Methodology:
Our market forecasting is based on a market model derived from market connectivity, dynamics, and identified influential factors around which assumptions about the market are made. These assumptions are informed by fact bases built with primary and secondary research instruments, regression analysis, and extensive contact with industry people. Market forecasting derived from an in-depth understanding of future market spending patterns provides quantified insight to support your decision-making process. Interviews are recorded, and the information gathered is put on the drawing board together with the information collected through secondary research.
Get Complete Report: http://bit.ly/35NLeTg
The report provides insights on the following pointers:
1. Market Penetration: Provides comprehensive information on Geospatial Imagery Analytics offered by the key players in the Global Geospatial Imagery Analytics Market
2. Product Development & Innovation: Provides intelligent insights on future technologies, R&D activities, and new product developments in the Global Geospatial Imagery Analytics Market
3. Market Development: Provides in-depth information about lucrative emerging markets and analyzes the markets for the Global Geospatial Imagery Analytics Market
4. Market Diversification: Provides detailed information about new products launches, untapped geographies, recent developments, and investments in the Global Geospatial Imagery Analytics Market
5. Competitive Assessment & Intelligence: Provides an exhaustive assessment of market shares, strategies, products, and manufacturing capabilities of the leading players in the Global Geospatial Imagery Analytics Market
The report answers questions such as:
1. What is the size of the global Geospatial Imagery Analytics market?
2. What are the factors that affect the growth in the Global Geospatial Imagery Analytics Market over the forecast period?
3. What is the competitive position in the Global Geospatial Imagery Analytics Market?
4. Which are the best product areas to be invested in over the forecast period in the Global Geospatial Imagery Analytics Market?
5. What are the opportunities in the Global Geospatial Imagery Analytics Market?
6. What are the modes of entering the Global Geospatial Imagery Analytics Market?
|
https://medium.com/@dikshagolait123/geospatial-imagery-analytics-market-to-2025-scrutinized-in-new-research-including-leading-players-1fc7aadc99fd
|
['Diksha Golait']
|
2019-12-02 10:42:38.020000+00:00
|
['Geospatial Imagery', 'Geospatial', 'Technology']
|
2,081 |
All you need to know about Promise.all
|
Promises in JavaScript are one of the powerful APIs that help us to do Async operations.
Promise.all takes async operations to the next level, as it helps you to aggregate a group of promises.
In other words, I can say that it helps you to do concurrent operations (sometimes for free).
Prerequisites:
You have to know what a Promise is in JavaScript.
What is Promise.all?
Promise.all is actually a function that takes an array of promises (an iterable) as input and returns a single Promise. The returned promise resolves when all of the input promises resolve, and it rejects as soon as any one of them rejects.
For example, assume that you have ten promises (async operations to perform a network call or a database connection). You have to know when all the promises get resolved, or you have to wait till all the promises resolve. So you pass all ten promises to Promise.all. Then, Promise.all itself, as a promise, will settle: it resolves once all ten promises resolve, or it rejects as soon as any of the ten promises rejects with an error.
Let’s see it in code:
Promise.all([Promise1, Promise2, Promise3])
  .then((result) => {
    console.log(result)
  })
  .catch(error => console.log(`Error in promises ${error}`))
As you can see, we are passing an array to Promise.all. And when all three promises get resolved, Promise.all resolves and the output is consoled.
Let’s see an example:
Simple example explaining how Promise.all works
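The embedded code is not reproduced here, but a minimal sketch consistent with the description below (three promises, the slowest resolving after 2000 ms) could be:

const promise1 = new Promise((resolve) => setTimeout(() => resolve('one'), 1000));
const promise2 = new Promise((resolve) => setTimeout(() => resolve('two'), 2000));
const promise3 = Promise.resolve('three');

Promise.all([promise1, promise2, promise3])
  .then((result) => console.log(result)); // ['one', 'two', 'three']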
In the above example, Promise.all resolves after 2000 ms, and the output is logged to the console as an array.
One interesting thing about Promise.all is that the order of the promises is maintained. The first promise in the array will get resolved to the first element of the output array, the second promise will be the second element in the output array, and so on.
Let’s see another example:
Explaining how an array of promises can be used effectively in Promise.all
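Again, the embedded code is not shown, but a sketch in the same spirit (building the array of promises with map) might be:

const ids = [1, 2, 3];

// each promise resolves after id * 500 ms
const requests = ids.map((id) =>
  new Promise((resolve) => setTimeout(() => resolve(`user ${id}`), id * 500))
);

Promise.all(requests)
  .then((users) => console.log(users)); // ['user 1', 'user 2', 'user 3']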
From the above example, it’s clear that Promise.all waits till all the promises resolve.
Let’s see what happens if any one of the promises are rejected.
Explains how Promise.all behaves if one of the promises got rejected.
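A minimal sketch of that behavior (one promise rejecting while the others would resolve) could be:

const ok = Promise.resolve('ok');
const slow = new Promise((resolve) => setTimeout(() => resolve('slow'), 2000));
const broken = new Promise((_, reject) => setTimeout(() => reject(new Error('boom')), 500));

Promise.all([ok, slow, broken])
  .then((result) => console.log(result)) // never runs
  .catch((error) => console.log(`Error in promises ${error}`)); // fires after ~500 ms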
As you can see, if one of the promises fails, the whole batch fails: Promise.all gets rejected with that error.
For some use cases, you don’t need that. You need to execute all the promises even if some have failed, or maybe you can handle the failed promises later.
Let’s see how to handle that.
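One common pattern (a sketch, not necessarily the article’s original code) is to attach a catch to each promise, so a failure becomes a placeholder value instead of rejecting the whole batch:

const promises = [
  Promise.resolve('one'),
  Promise.reject(new Error('two failed')),
  Promise.resolve('three')
];

// failed promises resolve to an { error } object we can inspect later
const guarded = promises.map((p) => p.catch((error) => ({ error })));

Promise.all(guarded).then((results) => {
  results.forEach((result) => {
    if (result && result.error) {
      console.log(`Failed: ${result.error.message}`);
    } else {
      console.log(`Succeeded: ${result}`);
    }
  });
});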
Use cases of Promise.all
Assume that you have to perform a huge number of Async operations like sending bulk marketing emails to thousands of users.
Simple pseudo code would be:
for (let i = 0; i < 50000; i += 1) {
  sendMailForUser(user[i]) // Async operation to send an email
}
The above example is straightforward. But it’s not very performant. The stack will become too heavy, and at one point in time, JavaScript will have a huge number of open HTTP connections, which may kill the server.
A simple, performant approach would be to do it in batches. Take the first 500 users, trigger the mail, and wait till all the HTTP connections are closed. Then take the next batch to process, and so on.
Let’s see an example:
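The original embedded example is not reproduced here, but a minimal batching sketch (assuming sendMailForUser returns a promise, as in the pseudo code above) could look like this:

async function sendMailInBatches(users, batchSize = 500) {
  for (let i = 0; i < users.length; i += batchSize) {
    const batch = users.slice(i, i + batchSize);
    // wait until every mail in this batch settles before starting the next batch
    await Promise.all(batch.map((user) => sendMailForUser(user)));
  }
}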
Let’s consider another scenario: You have to build an API that gets information from multiple third-party APIs and aggregates all the responses from the APIs.
Promise.all is the perfect way of doing that. Let’s see how.
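A sketch of such an aggregation function (the endpoint URLs here are hypothetical) might be:

async function aggregate() {
  // the three requests run concurrently; we wait for all of them at once
  const [users, orders, reviews] = await Promise.all([
    fetch('/api/users').then((res) => res.json()),
    fetch('/api/orders').then((res) => res.json()),
    fetch('/api/reviews').then((res) => res.json())
  ]);

  return { users, orders, reviews };
}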
To conclude, Promise.all is the best way to aggregate a group of promises to a single promise. This is one of the ways of achieving concurrency in JavaScript.
Hope you liked this article. If you did, please clap and share it.
Even if you didn’t, that’s fine you can do it anyway :P
|
https://medium.com/free-code-camp/promise-all-in-javascript-with-example-6c8c5aea3e32
|
['Srebalaji Thirumalai']
|
2019-05-24 09:49:44.516000+00:00
|
['Nodejs', 'JavaScript', 'Technology', 'Software Development', 'Web Development']
|
2,082 |
Data Journalism Crash Course #4: Open Source Communities
|
Image by the author
Co-creation, collaboration, sharing, community. What do these words have in common? The essence. Each of these terms is rooted in cooperation, in mutual aid, in joining efforts for a common goal.
And depending on where these principles are applied, the result can be efficient and highly profitable — if not financially, at the very least, with better use of time and human resources.
For those working with digital development, the most encouraged form of collaboration is Open Source Software, also known as FOSS, an acronym for Free and Open Source Software.
Open projects are great for anyone who wants to receive collaboration from others, learn from analyzing real projects or getting their “hands dirty” with true intellectual craftsmanship.
Open source is a term that means exactly what it says. This concerns the source code of the software, which can be adapted for different purposes. The term was created by OSI (Open Source Initiative) which uses it from an essentially technical point of view.
Because it does not have a license cost, open-source software offers the opportunity for greater investment in services and training, ensuring a greater and better return on IT investments. In the vast majority of cases, these tools are shared online by the developers, and anyone can access them without any restrictions.
The term open source, as well as its ideals, was developed by Eric Raymond and other OSI founders to present free software to companies in a more commercial way, avoiding an ethical and rights debate.
The terminology “Open Source” appeared during a meeting that took place in February 1998, in a debate that involved personalities who would later become a reference on the subject. Examples include Todd Anderson, Chris Peterson, Larry Augustin, Jon “Maddog”, Sam Ockman, and Eric Raymond.
The acronym FLOSS, which means Free / Libre and Open Source Software, is an aggregating way of using the concepts of Free Software and Open Source in favor of the same software, since both differ only in argumentation, as mentioned before.
The developers and supporters of the Open Source concept say that this is not an anti-capitalist movement, but an alternative for the software industry market. This collaborative model present in open source led authors’ rights to be looked at in a new light.
The creation of the Open Source Development Lab (OSDL) is an example of the great efforts made by several companies such as IBM, Dell, Intel, and HP to work with the creation of open source technologies.
OSI imposes 10 important points for software to be considered Open Source:

1. Free distribution: The program license must not in any way restrict free access through sales or even exchanges.
2. Source code: Of fundamental importance, the software must contain a source code that must also allow distribution in compiled form. If the program is not distributed with its source code, the developer must provide a means to obtain it. The source code must be readable and intelligible to any developer.
3. Derived works: The software license must provide permission for modifications to be made, as well as derivative works. It must also allow them to be distributed, even after modification, under the same terms as the original license.
4. Integrity of the source code author: The license must, clearly and explicitly, allow the distribution of the program built through the modified source code. However, the license may require that derived programs have a name or version number that is distinct from the original program. This will depend on the preference of the code developer.
5. Non-discrimination against persons or groups: The license must be available to any group of people and any individual.
6. Non-discrimination against areas of activity: The license must allow anyone from any specific branch to use the program. It should not prevent, for example, a company from using its code.
7. Distribution of license: The rights associated with the software must apply to all those to whom the program is redistributed, without the need to execute a new or additional license for those parties.
8. Non-product-specific license: The rights attached to the program must not depend on the program being part of a particular software distribution. If the program is extracted from that distribution, everyone to whom it is redistributed has the same rights as those guaranteed in conjunction with the original program distribution.
9. License should not restrict other programs: The license is not considered open source if it places restrictions on other programs that are distributed together with the licensed program.
10. Technology-neutral license: The license must allow the adoption of interfaces, styles, and technologies without restrictions. This means that no clause in the license can establish rules for these requirements to be applied to the program.
EXAMPLES OF OPEN SOURCE COMMUNITIES
CREATIVE COMMONS
Creative Commons Reel video
Creative Commons (CC) is a non-profit entity created to allow greater flexibility in the use of copyrighted works. The idea is to make it possible for an author/creator to allow the wider use of their materials by third parties, without them doing so in violation of intellectual property protection laws.
With a Creative Commons license, a composer can allow other artists to use some of his compositions by creating a mixture of rhythms, for example; a writer can make an article available and allow other authors to use it, either by publishing in other media, or by applying part of the content in a new text, or using the original, but making changes, anyway.
Thanks to the internet, this “collaborative spirit” has become much greater. The problem is that copyright protection laws are strict and often end up hindering the desire of many creators to not only give away their materials but also to use the creations of others who also want to share their work.
With Creative Commons, authors and creators can allow their works to be used in a much more flexible way. They can decide how and under what conditions their materials can be used by third parties. An example: a writer can allow anyone to use and change a text of his own, except in commercial applications. Note that, in this case, the Creative Commons license gives more freedom of use to the work, but does not remove the possibility of generating income from the original author: he may charge for the use of the text in the case of for-profit activities.
WIKIPEDIA
Almost 20 years ago, a discreet and humble way to disseminate and contribute to knowledge appeared on the internet. With the support of the Wikimedia Foundation, Wikipedia was born, and today has more than 54 million articles in 309 languages, written by volunteer collaborators around the world. Virtually all articles can be edited by those who wish to contribute, cite sources and references to enrich the information.
Jimmy Wales and Larry Sanger were the creators of the project, which went public on January 15, 2001. The name Wikipedia came from the fusion of a reference to the Hawaiian word wiki (meaning fast, light) with the British term encyclopedia.
Because it is an open encyclopedia, many users and Internet users question the writing quality of the articles, the rate of virtual vandalism, and the accuracy of the information. Many articles in the Wikipedia database contain unverified or inconsistent information; however, for anyone who takes the digital encyclopedia seriously and contributes to it, it has its merits: the scientific articles that Nature magazine compared in 2005 reached almost the same level of precision as those of the Encyclopædia Britannica.
Wikipedia is often seen in academia as a source of inadequate information. Scholars point to its troubled environment due to “editing wars”, in which contributors struggle to maintain their text while suppressing that of others, although they admit there is great interest and originality in the work.
However, the international literature seems to converge on the conclusion that Wikipedia’s success shows that self-organized communities can build high-quality information products. Analyses of Wikipedia are increasingly detailed and critical, with no room for simplistic praise or rejection.
Wikipedia is not only an online encyclopedia but also a common good, a commons. Its quality and maintenance depend on the cognitive surplus, that is, in the free time of schooled people. It is recognized among the most successful collaborative initiatives on the Web, based on trust among millions of contributors and readers, supported by standards that promote reliability and objectivity.
IF YOU WANT TO KNOW MORE
ABOUT. In: WIKIPEDIA. The free encyclopedia. Florida: Wikimedia Foundation, 19 march. 2017. https://en.wikipedia.org/wiki/Wikipedia:About
AIBAR, E. et al. Wikipedia at university: what faculty think and do about it. The Electronic Library, v. 33, n. 4, p. 668–683, 2015.
Brasseur, VM (Vicky). Forge Your Future with Open Source: Build Your Skills. Build Your Network. Build the Future of Technology.Pragmatic Bookshelf.2018.
DALIP, D. H. et al. A general multiview framework for assessing the quality of collaboratively created content on web 2.0. Journal of the Association for Information Science and Technology, v. 68, n. 2, p. 286–308, 2017.
Herstatt, Cornelius / Ehls, Daniel. Open Source Innovation: The Phenomenon, Participant’s Behaviour, Business Implications.Routledge. 2018
|
https://medium.com/datadriveninvestor/data-journalism-crash-course-4-open-source-communities-857cbf504b36
|
['Deborah M.']
|
2020-10-31 19:08:36.116000+00:00
|
['Data Journalism', 'Technology', 'Open Source', 'Data Science', 'Journalism']
|
2,083 |
IBM-Oxford Team Uses Supercomputers to Design New Drugs Against COVID-19
|
By Katia Moskvitch
With the second wave of COVID-19 gaining strength, researchers are in a race against time to find a treatment or a vaccine.
One international team of scientists from IBM Research and Oxford University is trying to design molecules that would interfere with the molecular machinery of the coronavirus that triggers the disease. If successful, such molecules could become the basis of a new drug to treat or slow COVID-19 infections.
“We are blending techniques such as advanced machine learning, computer modelling and experimental measurements to accelerate the discovery of these new molecules,” says the lead researcher Jason Crain, IBM Research physicist and visiting professor at the University of Oxford. He details his team’s work in a recent COVID-19 High-Performance Computing Consortium’s webinar.
It’s still early days — the team is only four months into the project — but the researchers have already identified several compounds that look promising based on the computational modelling. The scientists now have to test them in a lab, says Crain, and the experiments will take several weeks.
While the ongoing COVID-19-related work is new, Crain’s team has been for many years working on drug discovery, most recently in the area of antibiotic resistance. “We pivoted this earlier work, quickly adapting some of the fundamental methods we had previously developed, to address COVID-19,” Crain says.
The biggest challenge for the team, just like for any other team searching for a new drug to halt the pandemic, is dealing with an immensely vast chemical space within which to identify new functional compounds. To address it, the researchers are combining cutting-edge AI methods with modelling on two supercomputers offered by the COVID-19 HPC Consortium — IBM Summit at Oak Ridge National Laboratory and Frontera at the Texas Advanced Computing Center.
Without these extra computing resources, Crain says, “the throughput of the computational screening stages would have been prohibitively slow.” After all, the computational modelling of a myriad of AI-generated candidate compounds is among the most demanding and time-consuming steps in the discovery pathway.
Computer modelling on Summit and Frontera has allowed the team to screen compounds and reveal their mode of action at the molecular scale, so that they have to synthesize and test experimentally only the most promising ones. “Summit and Frontera allow us to perform calculations of how candidate drug molecules bind to viral proteins much faster than would have been possible otherwise,” says Crain. “The Consortium resources have allowed us to incorporate very HPC-intensive steps into the screening protocol, which is a very powerful approach but rarely possible to do.”
The Consortium has also helped, says Crain, to bring together an international team of experts. “Some of the Oxford team, for example, have extensive experience in the structure of viral proteins, and techniques related to screening of candidate drugs,” he says. “The AI teams at IBM in New York and in the UK have been working on developing new methods that can ‘discover’ functional molecules — which may or may not have been made previously — very efficiently.”
This article first appeared on the COVID-19 HPC Consortium blog
|
https://ibm-research.medium.com/global-ibm-oxford-team-uses-supercomputers-to-design-new-drugs-against-covid-19-6293ced5720a
|
['Inside Ibm Research']
|
2020-11-27 13:06:05.842000+00:00
|
['IBM', 'Technology', 'Covid 19', 'AI', 'Coronavirus']
|
2,084 |
Tesla is One of The Greatest Marketing Success Stories of Recent Times
|
Tesla is One of The Greatest Marketing Success Stories of Recent Times
Tesla famously has zero sales and marketing budget yet has some of the best marketing of any company
Photo by Tech Nick on Unsplash
Have you ever seen an ad for Tesla on TV? Heard one on the radio? Seen one on Facebook, or off to the side in your Google search?
Nope, that’s because Tesla does not do any paid advertising. Similarly, they don’t have a network of auto dealerships with ads all over your local TV, and clowns and banners and those dancing wind puppet thingies.
Let me tell you how absurd this sounded to me when I first heard it, especially given my last job in the SaaS world, where this ratio is king.
Sales and Marketing (S&M) Cost to Annual Contract Value (ACV)
I’ve sat in front of venture capitalists who insisted that in order to be a successful company we had to spend $1 in S&M cost for every $1 in ACV. Even better, maybe we should be spending $1.50 for every $1.
Now, in case you’re not a finance person, let me spell out the math. Let’s say you sell a product that costs you $0.20 for every $1 in revenue (i.e. you have an 80% margin). Then you spend $1 in S&M costs to earn that revenue.
Well, then you’re already $0.20 in the hole.
And we haven’t even gotten to overhead costs such as rent, utilities, and the salaries of everyone else in your company other than the developers, i.e. management, finance, HR, product, etc.
By the time that gets added in, your bottom line is that you are spending $2 for every $1 you make. Sometimes you’re spending $3 or $4.
This is what companies do
Well, this is just how it works. Have you ever wondered how Airbnb and DoorDash, to take the two most recent big IPOs, lose so much money year after year? The same goes for WeWork, Uber, Facebook and, well, you name it; all of the Silicon Valley startups lost huge amounts of money in their early years.
This is standard practice. All big venture backed tech companies (especially SAAS companies) take this approach.
The theory is that:
1. The money is recurring revenue, so once you get that customer into the pipeline you will have that $1 of recurring revenue year after year for only spending $1 in year one.
2. There is endless competition in these spaces and you have to hit critical mass (i.e. capture a certain customer share and hit a certain company size) before your competitors. So spend, spend, spend in the early years to hit that critical mass and then worry about profitability later.
I’ve always thought there is a third reason that nobody is ever willing to talk about. In order for a venture capitalist to get as much of the cap table of a company in their pockets and out of the founders’ pockets, they need the company to spend as much money as possible.
i.e. “here’s more money Mr/Mrs. Founder, now give us more equity… this is just how it’s done, have to spend spend spend if you want to succeed… oops now we own 90% of your company”
How did Tesla get away with this? How did they grow to where they are without spending any money on marketing?
They didn’t fall into the above practices. In fact, they spent the majority of their costs building their product, innovating and building out manufacturing capacity.
There have been articles written about this walking through a ton of reasons. I just have two that I think really matter.
Free Marketing
The first reason is that Tesla is a lot like Donald Trump. For whatever reason, they just receive a ton of free marketing. Tesla owners, and probably even more critically, Tesla stockholders, are to Tesla what CNN and MSNBC were to Donald Trump.
Superior Product
The second reason is that Tesla is the polar opposite of Donald Trump. That is to say, unlike Trump, Tesla just has a really damn good product. In fact, not just damn good, but ridiculously good, game-changing good. In fact, a product that is so good it generates its own press.
Actually, let me just have Jay Leno summarize how good the product is and why he thinks Tesla will succeed.
“Leno writes that he bought the Tesla because it is the fastest four-door car he could buy, and that it turned out to be electric was secondary. When he bought it, he wasn’t specifically thinking of the environment, so the reduction in emissions was just an added bonus.” (Jay Leno, via the www.tesmanian.com blog, December 19, 2020.)
The Chicken vs the Egg
Here’s the question everybody should have. How did this happen? I think everyone knows this general marketing concept…
“You can have the best product in the world, but if nobody knows about it, what good is it?” Phil Knight, Chairman Emeritus Nike
So, Tesla made a great product, but how did anybody know how great the product was before anyone bought one? And in case we forget, for a long time nobody did. It took a long time to get the first Tesla rolled out.
Well, that’s where the free marketing came in. See Tesla has a built in advantage in that they have access to perhaps the single greatest influencer in the world (and definitely the single greatest influencer on Twitter).
Of course I’m talking about Elon Musk, who makes the news about once or twice a week due to a tweet. His latest one, which is all over CNBC and all the rest of the news channels, was about how he almost sold to Apple a couple of years ago.
Elon Musk tweet screenshot from Author’s Twitter Account
So, that’s the answer. Tesla might not spend any money on sales and marketing; they just figured out how to get it for free. Elon built up his celebrity and reach, and then leveraged that reach for free marketing worth the equivalent of many millions, if not billions, in ad spend.
As a bonus, all of that money most companies spend making sure you are aware they exist, Tesla is able to pump into its product. Not a bad strategy if you can make it work.
|
https://medium.com/datadriveninvestor/tesla-is-one-of-the-greatest-sales-and-marketing-success-stories-of-all-time-1457188c062a
|
['David Ferrara']
|
2020-12-29 16:52:49.177000+00:00
|
['Business', 'Investing', 'Finance', 'Technology', 'Marketing']
|
2,085 |
US Army Seeking Quieter Helicopter and Drone Technology
|
The US military is looking toward the development of drone stealth technology. In its press release, the service announced the military’s priority of pursuing advanced drone technology. The aircraft designs will incorporate noise reduction and will be able to penetrate hostile areas with little trouble.
News and its Potential
The USA has decided to build new drones that produce very little or even zero noise. The U.S. Army Combat Capabilities Development Command has collaborated with Uber and the University of Texas to investigate the acoustic properties of electric vertical takeoff and landing aircraft. These vehicles use electric propulsion systems for flight. Thanks to stealth technology, the aircraft do not appear on radar, which helps drones and aircraft stay concealed as a form of military camouflage. Even a small amount of noise can make hostile actors aware of activity that is unwelcome to them. Hence, there is a need for new drone designs that combine more robust structures with better defense surveillance.
Why is there a need?
eVTOL (electric Vertical Take-Off and Landing) vehicles have traditional rotors, and they generate two significant types of noise. One is thickness noise, and the other is loading noise. Thickness noise comes from the displacement of air by the rotor blades, whereas loading noise forms from the lift and drag forces acting on the air that flows around the rotary wings. The sum of these two noises is the tonal noise. The researchers observed more turbulence noise (or broadband noise) than tonal noise in defense aircraft. The research team examined the conventional noise models for such aircraft and measured the ability of helicopter noise simulations to model eVTOL rotors.
How was the test done?
The researchers designed a test stand and mounted two rotors on it. Nine microphones surrounded the test stand in a circular array to measure the noise above and below the rotor. For the simulations, the team used the Rotorcraft Comprehensive Analysis System (RCAS). Coupled with RCAS was a separate noise prediction code called PSU-WOPWOP, named after an onomatopoeia for the sound that helicopter blades make. Some of the critical observations from the test were:
Co-axial, co-rotating rotors, or stacked rotors, may provide better performance and lower noise than a conventional rotor.
One of the reasons behind stack rotors performing well is the blades’ arrangement in multiple planes, unlike conventional rotors with blades in a single plane.
Conclusion
Drone and helicopter technology can witness new upgrades with modern designs. Though the models are a part of current research, the US army is working on these designs to convert them soon into reality. The Army research team published its paper, “Experimental and Computational Investigation of Stacked Rotor Acoustics in Hover,” during the Vertical Flight Society’s 76th Annual Forum Proceedings.
Source: US Army Seeking Quieter Helicopter and Drone Technology
|
https://medium.com/@gracetaylor922/us-army-seeking-quieter-helicopter-and-drone-technology-2aa0170804b4
|
['Grace Taylor']
|
2020-12-05 05:48:57.608000+00:00
|
['Drone', 'Technology', 'Helicopter']
|
2,086 |
How to fix Mcafee Installation Error 76567
|
Technology plays an important role in our life. Imagining life without technology is quite difficult for us. We are completely dependent on technology; likewise, your computer system is dependent on antivirus software to keep malicious programs away from the device. McAfee antivirus is highly rated software that is among the best at protecting a device from viruses and threats. During installation, some of you might face error 76567. If you see this error, don’t panic; just call the McAfee support number.
All You Need to Know about Error 76567
We live in a world where technology is everything for us. Error 76567 stems from internet connectivity problems; a bad network connection can cause this error. When you start to download the application, make sure you are on a good network connection. Otherwise, the download of the files might complete, but the files will be corrupt and your PC or computer will show Error 76567.
How do you fix Installation Error 76567?
If you are looking for a solution to this type of error, make use of the MCPR tool, and be sure that this time the internet connection is better. You can contact McAfee for technical help; the experts will be able to guide you better. It is very easy to fix this installation error. In order to fix this fault, you need to follow these procedures:
● At first, move your cursor to the Mcafee Safeguard icon and have double click on it.
● Then, you need to search for the Web and Email Protection.
● After clicking on the Web and Protection option, a new page will open. From that page, select the Firewall link. And click on it.
● Click on the Programs Permission tab and also Make sure that the firewall is activated.
● Scroll down with your mouse. If you created any applicable rule, then check on it.
● After that, give outbound access to all the programs which need network connectivity.
After applying the above procedure, you can easily fix the problem. Further, if you need any type of help, contact our experts who assist you remotely and your device free from errors. You can dial toll-free McAfee Antivirus Technical Support Phone Number USA +1–888–847–7260 to grab troubleshooting solution. Our experts can guide you with the basic tips for removing all the unnecessary junk files data from your system.
|
https://medium.com/@shellybrown068/how-to-fix-mcafee-installation-error-76567-f59814240a6e
|
[]
|
2019-12-20 15:25:55.468000+00:00
|
['Tech', 'Support', 'Technology', 'Mcafee Antivirus Support', 'Antivirus']
|
2,087 |
Is ‘Google Opinion Rewards’ Worth It?
|
Is ‘Google Opinion Rewards’ Worth It?
When it comes to paid surveys, I’ve always found them laughable. You get little return for your time and data. Many will claim it’s a side hustle, but it’s really not. It’s more like getting loyalty points or vouchers for giving greedy companies even more data than you already volunteer to them for free.
That being said, the Google Opinion Rewards app has been on my phone for over 4 years, and I’ve used it almost every day. But why?
Yes, it can be said that Google Opinion Rewards is the same as the others, in the respect that they gain data from you in exchange for credit. However, Google isn’t pushy about it, nor is it tacky like the others.
You can’t just go into it and fill out as many surveys as you want, grinding away until you can afford that foot bath on Amazon. You have to wait for it to notify you when it has a survey for you. They request your service and you decide whether to give it to them or not. I think there’s something respectful and classy about that.
It doesn't ask you overly personal questions, which I like. Nine times out of ten, it asks me what shops I went to that day, and what method of payment I used, if any. Google already knows what shops I’ve been to because of location tracking on my phone. So the only thing I’m giving away is whether I used a debit or credit card to buy something.
|
https://medium.com/the-shadow/is-google-opinion-rewards-worth-it-c144ef9ac3d5
|
['Gareth Willey']
|
2021-01-31 18:30:40.519000+00:00
|
['Gadgets', 'Digital Marketing', 'Tech', 'Business', 'Technology']
|
2,088 |
A Dear John To My Intel MacBook Pro
|
A Dear John To My Intel MacBook Pro
Dear 2017 13-Inch Intel MacBook Pro (Serial Number A96JJFFMQ150),
I have a confession to make. For the past four weeks, I’ve been tempted by another Mac. Who am I kidding? You’ve seen my browser history. Yes, I’ve fallen in love with the M1 Air.
…
Please! Let me finish.
I’ll never forget the first time I laid eyes on your Retina screen in that Best Buy all those years ago. You had the most beautiful Retina screen I’ve ever seen … way prettier than that 15-Inch MacBook Pro with her pretentious Touch Bar. My gosh! Your pixels were so tight! And your nits! Umph!
But I can’t hold back my feelings for Apple Silicon anymore. I …
It’s all YouTube’s fault! Engadget and The Verge! With their bloody subliminal messaging! Six more hours of battery life on a single charge! Geekbench 5 scores of 1,619/6,292! Stick a geek up your bench, you friggin’ jerkwads!!! No one cares about how fast you can scrub through your friggin’ videos!!!
No, I’m sorry, Intel MacBook Pro (Serial Number A96JJFFMQ150). It’s my fault. It’s not you; it’s me … What can I say? I’m weak!
…
No, she’s not prettier than you! Come to think of it, she looks exactly like you.
Ok, maybe she’s a little skinnier —
…
Not by a lot! Not by a lot!
…
No, I’m not tempted by another Pro. I’m not going to spend another $400 on a friggin’ fan.
…
Right, the M1 Air doesn’t have a fan.
Photo by Egor Myznik on Unsplash
…
No, Intel MacBook Pro (Serial Number A96JJFFMQ150), your fan is wonderful the way it is! It’s just …
Remember when I had a GoToMeeting with the guys at work, and I had to share my Chrome session because —
…
— That’s right, with the grumpy Brazilian guy. Anyways, your fan was whirring in the background, and it was driving everyone —
…
— Yes, I know you were running a lot of processes in the background!
…
No, I can’t switch to Safari if all the guys are using Chrome! Look, Intel MacBook Pro (Serial Number A96JJFFMQ150), I don’t want to talk about this!
…
What do I see in her? I don’t know. Everything is just snappier with her. Everything she does is just so … smooth.
…
No, I never said you’re slow, Intel MacBook Pro (Serial Number A96JJFFMQ150). I’m perfectly fine with how fast you perform my tasks. Why do you think I paid that bloody Apple tax for you?
…
No! I was thrilled with your purchase.
…
No, she’s not cheaper than you. Well, the M1 Mac Mini is … God, I’m tempted by the Mac Mini too! I’m a disgusting pig!
…
Yeah, I know I would need to spend an additional $150 on an Apple trackpad.
…
No, I love your trackpad! That’s one of my favourite things about you!
|
https://medium.com/the-haven/a-dear-john-to-my-intel-macbook-pro-9e6129154755
|
['Andrew Cheng']
|
2020-12-23 23:34:12.870000+00:00
|
['Gadgets', 'Humor', 'Satire', 'MacBook', 'Technology']
|
2,089 |
Responsive Testing & Cypress
|
So, how do we manage to have a scalable responsive solution?
For that purpose, we’ll use the Cypress API through a script.
As in everything, this is no more than an alternative. It is not intended to be the ultimate solution, but, after having tried it, I believe it turned into a strong starting point.
The aforementioned article concluded that the right path is to have a default configuration in the cypress.json file and modify some of its settings according to the parameters we establish. This is where we set ourselves apart and expand the solution. We want to have N resolutions (meaning N available devices to emulate) and N userAgents (in other words, N emulated operating systems). Moreover, we want to be able to set iOS apart from Android. We want to be able to provide even more specific test runs and regression tests: iPhone X on iOS 13.3.1.
Commands
The script will allow us to set <device> and <osVersion> as parameters. If necessary (for example, if we run the script on continuous integration), we can define the <record> parameter, which will, in turn, set the record key stored as an environment variable. It will also allow us to establish the <open> parameter so we can use the Cypress runner.
Hands on!
The script logic ensures that we can always execute the suites. As long as the <device> parameter exists in the devices.js file, the script will expect us to set the <osVersion> corresponding to that kind of device (iOS or Android). For example, we can set “iPhoneX”.
Once the device has been defined, the script will search within the osVersions defined in the os.js file corresponding to the iOS devices. For example, we can set “13.3.1”.
If the version we entered doesn’t exist in the list, the script will search for the default version of that OS.
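The article does not reproduce those lookup files, but a hypothetical sketch of devices.js and os.js (the field names here are assumptions, not the author’s actual code) could be:

// devices.js (hypothetical shape)
module.exports = {
  iphoneX: { osType: 'iOS', viewportWidth: 375, viewportHeight: 812 },
  pixel2: { osType: 'Android', viewportWidth: 411, viewportHeight: 731 }
};

// os.js (hypothetical shape)
module.exports = {
  iOS: { default: '13.3.1', versions: ['12.4.1', '13.3.1'] },
  Android: { default: '10', versions: ['9', '10'] }
};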
node cy-start.js -- -d iphoneX -osV 13.3.1 -o
From the Cypress runner, we can see the following environment variables are available: device, osVersion, osType.
We can take advantage of those variables in order to use them for the suite names.
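For instance, a spec file could read them with Cypress.env (a minimal sketch; the suite name and page are illustrative):

describe(`Home page on ${Cypress.env('device')} (${Cypress.env('osType')} ${Cypress.env('osVersion')})`, () => {
  it('renders the responsive layout', () => {
    cy.visit('/');
    cy.get('header').should('be.visible');
  });
});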
When running the example script, we’ll see that the tests were executed at the specified version and resolution.
|
https://medium.com/flux-it-thoughts/responsive-testing-cypress-72d2be68690b
|
['Gustavo Miguens']
|
2020-12-17 14:13:50.242000+00:00
|
['Cypress', 'QA', 'Automation', 'Technology', 'English']
|
2,090 |
What The History of Knowledge Management Can Teach You About Life
|
To reduce library staff, companies identified information needs and started systematizing the way they collected information. Dean Witter, for example, initially advocated hiring more librarians to efficiently meet brokers’ informational needs, but eventually decided to centralize information in order to reduce library staff. They consolidated a core “information platter” by putting key documents on a local area network server, letting brokers access the critical information they needed with the technology they were already using (Davenport 1994).
Let’s look at another definition.
Gartner Group expanded the process of KM with the following only a few years later:
Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise’s information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers.
Note the use of “integration” and “previously un-captured expertise.” Once knowledge is gathered, integrated, and connected, we start having access to information assets that were previously unknown, or unavailable.
Why does integration matter?
Knowledge Management is no longer limited to managing the information assets from and within the confines of an organization. The term has extended beyond “the organization” itself to include relevant and associated information.
Take the history of the internet as an example. Following the Early Internet came the World Wide Web, then the Commercial Web with the emergence of Amazon, eBay, and Google, then the Interactive Web that led to the Dot Com Bust. To understand how knowledge has extended beyond the reach of a single corporate organization is to understand how information from relevant places becomes integrated.
Information has been published, distributed, and accumulated by commercial companies like Google, giving way to the rise of more democratic, accessible platforms like Wikipedia. The interconnected nature of information proliferated for the world to see through the medium of the Internet.
Barack Obama on Technology.
These associations, insights, and analytics become even more important when they are aligned with how humans take in information. The beauty of knowledge and information assets is precisely the beauty found in an aquarium: its ecology.
tl;dr
In essence, the history of knowledge management reveals a journey of how we understand the way knowledge works. This has shaped our approach to not only categorize our information, but also integrate and make new meaning from our information. Developing insights from our knowledge means we draw connections between relevant and seemingly unrelated information. This kind of processing is one that centers the human experience, rather than the machine one.
What do you think about Knowledge Management? Leave a comment or thought below!
While you’re here… If you enjoyed this blog post, can you share it with a friend? We’re trying to grow our readership as a new publication on Medium. Thank you!
References
Davenport, Thomas H. (1994). Saving IT's Soul: Human Centered Information Management. Harvard Business Review, March-April, 72(2), pp. 119–131.
Duhon, Bryant (1998). It's All in Our Heads. Inform, September, 12(8).
Koenig, M., 2018. What Is KM? Knowledge Management Explained. [online] KMWorld. Available at: <https://www.kmworld.com/About/What_is_Knowledge_Management> [Accessed 26 November 2020].
|
https://medium.com/weavit/what-the-history-of-knowledge-management-can-teach-you-about-life-f349a690d128
|
['Joy Lee']
|
2020-12-10 12:17:29.827000+00:00
|
['Information Technology', 'Life Hacking', 'Knowledge Management', 'Productivity', 'History Of Technology']
|
2,091 |
Top 10 Powerful Websites Built with ReactJS
|
Looking for Top 10 ReactJS Websites?
From this blog, you can get an idea of which popular websites choose ReactJS, and why.
ReactJS is a flexible and efficient JavaScript library for building user interfaces, introduced by Facebook in 2011. It offers excellent rendering performance, which is a core reason it has become one of the most popular JavaScript frameworks. ReactJS lets you break the UI down into simpler components so you can focus on each piece individually. Its future looks bright because it is backed by Facebook's robust community, and its attractive features and simplicity have made it popular with developers across the world.
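To illustrate that component model, here is a minimal sketch of our own (not taken from any of the sites below) showing a UI broken into two small React components:

// A small React example: the UI is split into simple, reusable components
// that can be developed, tested, and reasoned about in isolation.
import React from 'react';

function Avatar({ url, name }) {
  return <img src={url} alt={name} />;
}

function UserCard({ user }) {
  // UserCard composes the simpler Avatar component.
  return (
    <div>
      <Avatar url={user.avatarUrl} name={user.name} />
      <h2>{user.name}</h2>
    </div>
  );
}

export default UserCard;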
The diagram below shows the historical trend of websites using ReactJS. The data are shown as a percentage of websites.
From this diagram you can see that the use of ReactJS on websites keeps growing. Now let's look at ReactJS's market position compared to other JavaScript frameworks.
Market Position:
ReactJS is used in almost every business sector. Developers love ReactJS for its simplicity and ease of use. ReactJS development companies provide all the facilities and new features of its advanced versions, helping you build more attractive websites that stay competitive in the market and succeed fast.
Many more big companies use ReactJS. Here, however, is a list of some of the large companies using ReactJS for web development. After reading this blog, some of these ReactJS websites may surprise you.
Top 10 Powerful Websites Made with ReactJS:
1. Facebook:
Facebook is the leading social media company, with a user base of over 2.2 billion people globally. Facebook originally created the React library. It uses ReactJS in some parts of its main page, and the Facebook mobile application, which runs on both Android and iOS, is built with React Native.
React Fiber is a rewrite of React announced by Facebook in 2017. It became the foundation for further improvements and feature development in the React framework, making React more responsive. Thanks to the component architecture, Facebook can display comments, post reactions, and notifications without the need to reload the page.
2. Instagram:
Instagram is the popular social media platform for sharing photos and videos that people love most. Instagram is a single-page web application built completely with React, and designers contribute directly through JSX code. Because ReactJS responds to user events in mere milliseconds, its creators ended up with a site that is highly responsive, fast, and lightning quick.
According to Statista, 18% of Instagram's traffic comes from desktop.
3. Asana
Asana is a work management platform that enables teams to focus on projects, daily tasks, and goals. Simplicity is a key objective to strive for when it comes to building a website: the code should be readable, testable, performant, and maintainable over time. Asana rewrote their frontend with React.
Asana solved many of its UI issues around focus and animation with ReactJS, which also addressed its client performance issues and brought some great benefits:
Small code size
Simple to integrate with Luna
Virtual DOM implementation
Similar to Luna views
Simple to reason about reactivity
4. Netflix
Give special thanks to ReactJS, because Netflix, on which you enjoy your series, is built with it. Netflix belongs on any list of the best ReactJS websites. Netflix adopted ReactJS for its application and website for several reasons, the main ones including modularity and startup speed.
React satisfies these requirements and offers many more benefits, such as handling custom rendering code, opt-out capabilities on user interaction, and being simple to grasp. Runtime performance, overall scalability, and initial load times are the most attractive features Netflix can leverage.
5. Codecademy
Codecademy is the leading interactive platform providing free coding classes in different programming languages. They are pleased with using ReactJS for their web application and site, and are very confident in its performance and reliability.
Because React websites are component-based, you can test individual portions of the site in isolation without disturbing the rest. The aspects of React that attracted Codecademy include:
SEO made easy
Short code to write
Component-based, therefore, easy to conceptualize
Compatible with legacy code, therefore, flexible for the future
6. Yahoo mail
For mail more than anything, reliability and performance matter most. The new Yahoo Mail was built using technologies including Node.js, Redux, React, and others.
Features like server-side rendering, one-way reactive data flow, and the Virtual DOM offered by ReactJS and Flux were the reasons behind rewriting the Yahoo Mail architecture. The React benefits that made it Yahoo's choice:
Shorter learning curve
Growing and active community
Predictable flow
Easy debugging
Independent of large platform libraries
One-way reactive data flow
7. New York Times
With React, the New York Times designed a new project: an Oscar red carpet gallery that presents different looks of the stars and lets users filter photos spanning 19 years. The gallery works fantastically, thanks especially to ReactJS's most impressive feature, re-rendering.
The New York Times moved from PHP loading HTML and JavaScript to a combination of NodeJS, ReactJS, and GraphQL, which gives its whole online world a more stable front end.
8. Atlassian
Atlassian is the popular company behind collaboration software like JIRA, Bitbucket, Confluence, and Stash. They employ ReactJS both internally and externally, so we can say this company is a total and true ReactJS company. From React, developers get benefits such as reusable libraries and features like deploying the same code to desktop and mobile web.
“Over the last two years, almost all single-page applications built in the Atlassian Cloud use React and Atlaskit. As the library matures, Atlassian products and ecosystem vendors lean in more heavily into it.” — Trey Shugart, principal developer at Atlassian
9. Dropbox
Dropbox moved to ReactJS when React became popular among developers for websites and apps. Dropbox is a web-based file hosting service that employs cloud computing; the technology lets you store folders and share them with others using file synchronization. Dropbox benefits efficiently from the plethora of resources available in the React framework, and React has contributed to the success of this cloud-based storage and online backup service.
10. Airbnb
Airbnb is a famous company that offers hospitality services online, serving as a common destination for tourists and property hosts alike. Airbnb provides the opportunity to book exclusive accommodations across the globe.
React components lend themselves to easy code iteration and refactoring, and these components are highly reusable. Reusability and refactorability are the best and most important benefits of React, and what attracts Airbnb most.
Final Words:
So ReactJS is the best choice for building websites and applications, providing high and ever-increasing performance for any website overall. ReactJS is strong not only today but also holds great future potential for website development. Its popularity and adoptability are proven by the successful websites mentioned above.
|
https://medium.com/devtechtoday/top-10-powerful-websites-built-with-reactjs-757cd38bef05
|
['Binal Prajapati']
|
2020-05-15 12:11:40.489000+00:00
|
['Reactjs', 'Website', 'Developer', 'Technology', 'Development']
|
2,092 |
The History of Silicon Valley — A Brief Summary (Part 1/3)
|
1849: A Gold Rush that’s been going on for 170 years
The Gold Rush of 1849 brought legions of people to California.
One person was Leland Stanford. A businessman who made his fortune selling picks and shovels to gold miners. In 1862, he went on to become the 8th Governor of California.
Leland Stanford (1824–1893)
In 1884, Stanford’s only son, Leland Stanford Jr., died of typhoid fever right before his 16th birthday.
This tragedy devastated Stanford and his wife, Jane Lathrop.
To honor their son, they decided to spend their entire fortune on building a university. A place that would strive to teach practical knowledge.
Founded in 1885, Stanford University opened its doors in 1891.
… And tuition was free in the early years!
Fred Terman: The father of Silicon Valley
Side note: Back then, the area wasn't called "Silicon Valley." Its main attractions were the beautiful fruit trees along the roads. So the area was called "The Valley of Heart's Delight." (Aww!)
One of Stanford’s graduates was Frederick Terman, who later left to MIT to get a ScD in electrical engineering.
In 1925, Terman returned to Stanford and became a member of its Engineering Faculty.
Terman designed a course, created a vacuum tube laboratory, wrote one of the most important books on electrical and radio engineering, and became Dean of the School of Engineering after WWII.
Fred Terman (1900–1982)
Arguably though, his greatest achievement was encouraging his students to stay in California and build their companies. Also, he invested in them.
Why do I think this is his greatest achievement? Because his students built some pretty impressive companies:
In 1939, Bill Hewlett and Dave Packard founded their company by setting up shop in Packard and his wife, Lucille’s, garage. They made their partnership official by investing $538 and flipping a coin to choose the name Hewlett-Packard. One of their first products was an audio oscillator sold to Walt Disney for the making of the film “Fantasia (1940).”
David Packard (left) and Bill Hewlett (right) in front of 367 Addison Ave, Palo Alto, CA 94301, also known as “The HP Garage”
The Varian brothers, Russell and Sigurd, founded Varian Associates in 1948. They developed the top-secret Klystron tube, which could amplify electromagnetic waves at microwave frequencies. What does this mean? That it was installed on England’s fighter planes to locate and destroy over 90%(!!) of enemy Nazi u-boats in the Atlantic. This allowed for American troops to be transported to England for the D-Day invasion.
Fun little fact: Varian Associates also hired a young bookkeeper by the name of Clara Jobs (mother of Steve Jobs).
Stanford University’s entrepreneurial policies (led by Fred Terman), the end of WWII, the successes of Bill Hewlett, Dave Packard and the Varian Brothers (and of course the amazing climate), became the fertile soil on which the Silicon Valley garden would grow.
End of Part 1
NEXT → The History of Silicon Valley — A Brief Summary (Part 2/3)
Thanks for reading! 😊 If you enjoyed it, test how many times you can hit 👏 in 5 seconds. It's great cardio for your fingers AND will help other people see the story. You can follow me on Twitter at @richardreeze to find out whenever others just like it come out.
📚 Do you like books? If so, you might enjoy my latest obsession: Most Recommended Books. 📚
This story is published in The Startup, Medium’s largest entrepreneurship publication followed by +424,678 people.
Subscribe to receive our top stories here.
|
https://medium.com/swlh/the-history-of-silicon-valley-a-brief-summary-part-1-3-5a7ffcae9e71
|
['Richard Reis']
|
2020-11-11 07:02:42.041000+00:00
|
['Technology', 'Startup', 'Entrepreneurship', 'History', 'Life']
|
2,093 |
21st Century Learning: The effects of IR4.0, globalization, the changing workforce and shorter shelf life of knowledge
|
Learning is the lifelong process of transforming information and experience into knowledge, skills, behaviors, and attitudes. Learning in the 21st century comprises the skills, technologies and insights that leading-edge academicians and organizations are using to create learning systems better suited to the emerging challenges. This is done through the practice of Instructional Design: systematically designing, developing and delivering instructional products and experiences, both digital and physical, in a consistent and reliable fashion, towards an efficient, effective, appealing, engaging and inspiring acquisition of knowledge.
At its inception, Instructional Design was dominated by the views of the behavioral psychologist B.F. Skinner, whose stimulus-response operant conditioning theories gave us the famous drill-and-practice routine: the idea that knowledge and skill are acquired through repetitive practice. Today, there is a growing recognition that learning occurs most effectively when courses or programs are carefully designed around the key tasks and skills needed to perform the job.
Recently, new buzzwords have emerged, such as e-learning, bite-size learning, gamification, digitized simulations, etc. Having been in the corporate learning and development space for quite some time, I was bewildered by the new buzzwords and decided to immerse myself in recent developments and emerging trends in the learning and development area. Hence, in March 2019, I attended a Learning & Development Conference in Kuala Lumpur with an interesting title: Big L&D Summit 2019 — Emerging Trends in Learning & Development: Are You Ready to Up Your Game!
The two-day event was an insightful session with an exchange of knowledge and experiences by various speakers. By the end of the two-day conference, I had discovered that there is a "new world of work" emerging in the 21st century, disrupting the corporate learning paradigm. It is turning old instructional, episodic and live training models upside down, as technology, financial, people and competitive pressures drive change to achieve 21st-century corporate success, growth and sustainability.
During the session, a speaker from Frost & Sullivan Asia Pacific shared very interesting insights, talking about the 4th Industrial Revolution (IR4.0):
IR4.0 is leading to Mega Trends and transforming the way businesses operate. Mega Trends are transformative, global forces that define the future world with their far-reaching impact on business, societies, economies, cultures and personal lives; for example, robots have entered our homes for personal use, mobile financial transactions now happen in crypto-currencies, cars drive themselves, and so on.
IR4.0 is enabling connectivity that allows for the convergence of industries, products and functions. This convergence is likely to drive unconventional players to contest new markets. For example, cars plus unmanned technology lead to the development of autonomous cars.
Every company will become a technology company, as most companies will use mobile applications, data and analytics, IoT, cybersecurity, cryptocurrency and blockchain, cloud computing, etc. The banking sector, for example, is moving towards branchless banking and uses more than one technology, i.e. mobile applications, cybersecurity, data and analytics, and others.
These megatrends, coupled with globalization, the changing workforce, and shorter shelf life of knowledge, reveals that “one-size-fits-all” content is no longer relevant where instructional design is concerned. Just as businesses are personalizing their products and services for clients and consumers, so should instructional design methods innovate to meet the changing needs for the new business landscape.
Learning and development is expected to play a critical role in building the future-ready organization. How can learning and development play this role?
Continue to read
|
https://medium.com/knolskape/21st-century-learning-the-effects-of-ir4-0-e30c26831a8c
|
['Anand Udapudi']
|
2019-10-04 06:23:03.926000+00:00
|
['Future Of Work', 'Learning', 'Technology', 'Futureskills', 'Learning And Development']
|
2,094 |
how to personalize your Windows 10: 11 tips for new users
|
Do you know why, out of the many operating systems out there, you only ever hear about Windows?
Take a guess?
No, it's not because it's fast or very efficient for heavy work; there are many operating systems out there that are far better than Windows in terms of efficiency.
Still, we purchase Windows because it gives you a great user experience: with Windows, you can personalize your computer as you like.
There are many things you can customize to make your Windows computer more stylish and change things the way you like.
This is a beginner’s guide for windows users who have just started using windows.
1. make a picture your background
2. change your account picture
3. personalize your lock screen
4. give your apps a new look
5. choose a theme you like
6. download desktop themes
7. customize your desktop color
8. enable the night light feature
9. make text larger or smaller
10. hear everything from one ear
11. personalize your task manager
make a picture your background
With Windows, one of the basic changes you can make on your computer is changing the background as you like. You can set any image as a background, as long as its size fits the screen.
To change the background picture of your Windows computer, follow these steps:
change your account picture
Some of you might not know, but just like your background, you can also change the profile picture that shows up on your lock screen.
It looks very elegant and professional, so there is no reason why you shouldn’t change it with your desired picture.
To do that:
personalize your lock screen
When it comes to personalizing your computer, nobody likes a boring lock screen; and when you can choose a lock screen you like, why not?
Here is how you change the lock screen:
give your apps a new look
With Windows, you can give your apps a new look. If you're a dark mode lover and don't like too much light hitting your eyes, you can switch the look to dark mode, and vice versa.
This will also switch your whole system into dark mode.
here is how to do that:
choose a theme you like
Apart from all the above, the best thing you can do is change the theme, which will change many things about your computer at once.
Themes are artistic combinations of wallpapers, sounds, and accent colors.
Try playing with the theme section to get to know it better.
here is how to change your theme:
download amazing desktop themes
If you don't find any themes that interest you, you can download some extra themes from Microsoft as well.
here is how you can download desktop themes in windows:
customize your desktop color
If you are fond of a special color, like pink or blue, you can set your desktop color to that color as well.
here is how to customize your desktop color in windows:
You can also pick an accent color from the background; moreover, you can set a custom color of your own.
Enable the night light feature for your eyes
If you are a heavy computer user, you might want to enable Windows' night light feature.
The night light makes it easier to look at the screen at night.
It helps stop the screen's light from affecting the brain; you can read more about how night light affects our brain here.
here is how to enable night light feature of windows-
Click the small message icon in the bottom right of your desktop, and under that, enable Night light to go easy on your eyes with warmer colors.
make text larger or smaller
If you have trouble seeing small text on your computer, or don't like big text and prefer tiny text, there is an option for that in Windows too.
here is how to make text larger or smaller in windows:
1. Select Start.
2. Go to Settings.
3. Go to Ease of Access.
4. Go to Display.
5. To make text bigger, adjust the slider under "Make text bigger". To make everything bigger, choose an option from the drop-down menu under "Make everything bigger".
hear everything from one ear
If you have trouble hearing, there is an option in Windows exactly for that.
With the help of Windows' "mono audio" option, you can hear everything in one ear.
to enable the mono audio option in windows:
If you’re using one earbud or something similar, the audio will be combined into one channel.
personalize your task manager
With Windows, you can personalize your task manager and edit many things in it.
From changing the task manager's size to deciding which apps you want to keep, you can change everything here.
here is how to personalize your task manager in windows:
There are lots of things you can do in the task manager; check out the article below.
|
https://medium.com/windows-ground/how-to-personalize-your-windows-10-11-tips-for-new-users-466595bd1da6
|
[]
|
2020-09-27 21:26:26.127000+00:00
|
['Tech', 'Technology', 'Windows 10', 'Computers', 'Windows']
|
2,095 |
Top 10 Manufacturing Technology Magazine
|
What is Manufacturing Technology?
Manufacturing technology provides the tools that enable production of all manufactured goods. These master tools of industry magnify the effort of individual workers and give an industrial nation the power to turn raw materials into the affordable, quality goods essential to today’s society. In short, we make modern life possible.
Top Manufacturing Technology Magazine
1. The Manufacturer
Covering all sectors, The Manufacturer is an essential resource for every boardroom and senior management team, delivering thought leadership articles, regulatory updates and best practice case studies. With regular events hosted around the country, The Manufacturer brings extensive industry knowledge to you in person as well as online. For more information about our events and webinars, go to our website.
Check This Out: The Manufacturer
2. Manufacturing Technology Insights
Manufacturing Technology Insights earns its place on this list of top manufacturing technology magazines for its excellent content and the insights provided by industry giants. Technology has steadily improved how products are manufactured: through the development of automation, robotics, and advanced production, the sector has bounced back along with the economy. In this competitive era, corporations must adapt to customers' evolving interests, such as personalized products, and therefore look for a source that comprehensively covers the growing changes within the industry. Manufacturing Technology Insights focuses on growing trends, customer demands, and the technology solutions that are dramatically reshaping the manufacturing arena.
Check This Out: Manufacturing Technology Insights
3. Make it British
Make it British helps to facilitate connections between designers/brands and UK manufacturers using a database of hundreds of British factories, predominantly in the clothing and textiles sector. Clients have included start-ups, established designers that show at London Fashion Week, as well as retailers with outlets on the British high street.
Make it British also helps to promote companies that manufacture in the UK through various online marketing channels, and provides up to the minute editorial on the people, products and places that are important within British manufacturing.
Check This Out: Make it British
4. The Manufacturing Outlook
Manufacturing Outlook is a print medium that showcases the various enterprise solutions that can restructure business goals for a better tomorrow. We strive to provide a platform that allows high-level executives in the manufacturing industry to share their insights, which will enable business leaders and startup ecosystems to leverage manufacturing trends and provide a better understanding of the manufacturing sector and achieve business goals in an effective manner.
The Manufacturing Outlook is an opportunistic platform for organizations that are looking to embark on a path that leads to improved functional efficiency.
Check This Out: The Manufacturing Outlook
5. Manufacturing & Engineering Magazine
Manufacturing & Engineering Magazine is a comprehensive publication that looks at the most important issues affecting the marketplace of today. Manufacturing & Engineering Magazine focuses on the latest developments within the UK manufacturing and engineering industries. Covering all issues affecting the industry and including comprehensive analyses of market trends, the magazine features in-depth interviews with leading company figures, as well as getting the low-down on the most significant projects that are taking place across the country today.
Check This Out: Manufacturing & Engineering Magazine
6. Manufacturing.net
Manufacturing.net delivers to a global community the most up-to-date information shaping the manufacturing landscape. Whether it’s bringing to light new regulation that might change the way you run your business, detailing broad economic trends, or showcasing the latest trends in product development — Manufacturing.net has you covered.
Our dedicated editorial staff uses numerous industry resources to keep the site constantly updated with the latest and most relevant content on all the topics, critical issues, and market sectors relevant to the manufacturing and product development marketplace.
Check This Out: Manufacturing.net
7. Manufacturing Global
Manufacturing Global is an innovative digital publication aimed at keeping business executives up to date with the latest news, information, and trends from across the manufacturing industry, which earns it a place on this list of manufacturing technology magazines.
The Manufacturing Global digital platform includes an interactive website and magazine experience that brings you inside the world of manufacturing, with comprehensive insight and analysis of the sector. Manufacturing Global magazine features interviews with leading executives about key trends, technological advances, lean developments, operational excellence, and the progression of people and skills throughout the industry.
Check This Out: Manufacturing Global
8. Industry Week
Industry Week is a monthly trade publication that started in 1882. It provides manufacturing executives with key insights on manufacturing technology and analysis of trends, news, and operational information, as well as facilitating peer-to-peer conversation among the global manufacturing management community.
Industry Week provides readers with top manufacturing technology news and articles. The site draws a large number of monthly visitors who keep coming back for its content, a strong sign of a good magazine and the reason it is listed here among the top manufacturing technology magazines.
Check This Out : Industry Week
9. Manufacturing Management
Manufacturing Management has been the authentic voice of UK manufacturing for 70 years, providing the industry with a forum to exchange views and a platform for business improvement through its monthly magazine and daily website. The publication also runs a number of events including the Manufacturing Management Conference, Manufacturing Management Show and Manufacturing Management Champions. Several Manufacturing Management Factory Tours also take place each year.
Check This Out : Manufacturing Management
10. Additive Manufacturing Today
Additive Manufacturing Today presents detailed information on 3D printing in the manufacturing sector. It carries in-depth publications, insights, and articles on the latest trends in manufacturing technology, which is what earns it a place among the top manufacturing technology magazines.
Check This Out : Additive Manufacturing Today
|
https://medium.com/@chrishtopher-henry-38679/top-10-manufacturing-technology-magazine-f749a7edad40
|
[]
|
2020-12-10 13:28:32.400000+00:00
|
['Magazine', 'Tech News', 'Manufacturing', 'Solutions', 'Technology']
|
2,096 |
Is the Way You Use Burndown Charts Helping or Holding You Back?
|
Origin and Purpose
The burndown chart originated with Ken Schwaber at the turn of the millennium. His intention was to provide scrum teams with a simple toolkit to use during a sprint. In the years following its invention, it gained popularity and saw widespread use. At its core, a burndown chart is a simple graph. It has the amount of work remaining on the vertical y-axis and elapsed time on the horizontal x-axis. A straight line is then drawn between the upper left and lower right corners. This line represents the ideal progress during a sprint.
The chart is intended for a single task: monitoring the progress the team is making towards achieving the sprint or some other defined goal. The original concept made use of total estimated hours for the vertical axis and days for the horizontal axis.
"Can we meet the goal?" is the only question a burndown chart answers.
Anyone who has worked in development knows that accurate estimation is nearly impossible. Once you start to work on a task, you enter a learning phase. This nearly always leads to discoveries that were not included in your estimate, so the estimate to complete needs to increase. The inability of teams to accurately estimate effort for IT has been known since the early days of computing. It was even an agenda item at the first-ever NATO Software Engineering Conference in 1968.
Consider this example. You estimate eight hours to complete. After working for four hours, you believe that there are still eight hours of work remaining. This is the reality of development in an IT setting. The burndown chart tracks the new total estimate to complete at the end of each day against remaining time. The team will then use it to judge if they can still achieve the sprint goal given the estimated total of work remaining and the remaining time.
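To make that bookkeeping concrete, here is a minimal sketch with made-up numbers (our own illustration, not from the original) comparing the ideal line against the daily re-estimated totals:

// Burndown bookkeeping: the ideal line falls linearly from the initial
// estimate to zero; the actual line is whatever the team re-estimates
// as remaining at the end of each day.
const sprintDays = 10;
const initialEstimateHours = 80;

// Ideal remaining work after each day (a straight line down to zero).
const ideal = Array.from(
  { length: sprintDays + 1 },
  (_, day) => initialEstimateHours * (1 - day / sprintDays)
);

// Actual remaining work, re-estimated daily. Note day 2: hours were
// worked, yet new discoveries kept the remaining total almost unchanged.
const actual = [80, 74, 73, 66, 58];

actual.forEach((remaining, day) => {
  const status = remaining > ideal[day] ? 'behind' : 'on track';
  console.log(`Day ${day}: ${remaining}h remaining (${status})`);
});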
Using the burndown chart in this way requires the team to re-estimate the remaining total work on a regular basis. If they do not, the burndown chart becomes worthless. In the 20 years that burndown charts have been around, they have evolved to better reflect the way we currently work. They have also been hijacked by traditional managers and used to monitor and control the team, turning them from a useful team tool into a weapon of oppression.
|
https://medium.com/better-programming/the-definitive-guide-to-burn-down-charts-a176db096294
|
['Mark Gray']
|
2020-10-20 15:42:31.234000+00:00
|
['Technology', 'Leadership', 'Programming', 'Product Management', 'Agile']
|
2,097 |
How will the future make mobility smarter? Interview with Eric Hale, speaker at EICS 2020
|
Technology and mobility, technology and tourism, technology and environmental sustainability. Technology is increasingly being involved in the future of many different industries, connecting them in an extraordinary way.
That’s why we need to stop talking about emerging technologies as a whole. We need to start to discuss how emerging technologies will have a role in different industries, and in the companies belonging to those sectors.
The European Summit of Emerging Technologies, 2019.
That’s our mission at EICS 2020.
The third edition of the European Summit of Emerging Technologies will be memorable. We're working hard to let you 'TRY — LEARN & DISCOVER' everything related to emerging technologies and their applications in specific sectors. We want to help managers and entrepreneurs understand the role of these cutting-edge technologies and learn to innovate in 3 ways: trying demos and new devices to understand what emerging tech is about, getting their hands into a design-led innovation process through practical workshops, and getting inspired by international case studies.
Are you joining us in Milan on 25–26 March 2020?
Is the goal of future mobility to be sustainable?
The combination of technology with mobility, tourism and sustainability creates an important new topic, "sustainable mobility", which is becoming increasingly necessary as a tireless engine of the unstoppable technological revolution we've been living through.
If it's true that sustainable mobility can be studied and discussed through immersive technologies such as virtual reality, then we are investigating an issue of extraordinary importance with the technological tools that are available to us now and tomorrow.
And that’s why the future of sustainable mobility studied through immersive technologies is the main topic of one of the workshops at EICS 2020.
On March 26th, from 10:30 am to 1:30 pm, at LA Village of Credit Agricole, there will be a workshop focused on smart mobility, in which participants will learn in which direction immersive technologies aimed at making mobility more sustainable are moving.
In teams of 4 to 6 people, participants will work together to solve design problems related to mobility. What does making mobility sustainable mean? What will be the needs of people and of our future cities, moving fast towards yet another technological revolution thanks to emerging technology?
The keywords here are team working and Design Thinking.
The workshop facilitator will be Eric Hale, Design Director at Uqido, an Italian engineering and consultancy company specialized in software development and immersive computing.
Subscribing to this workshop will give everyone an extraordinary overview of the technological aspects of mobility. By learning or refining "design thinking", companies will learn how to approach and solve problems, and finally have the chance to make the world a better place through the potentially disruptive solutions that workshop participants may design and develop in the future.
It’s a design-based procedure you can learn and implement everywhere!
Eric Hale and his vision of having a sustainable future
I had the opportunity to ask Eric, a visionary and multifaceted man, some questions.
Eric Hale, workshop speaker and facilitator at EICS 2020
He explained to me a wonderful way to approach people’s problems.
Eric sees those problems as puzzles to solve, imagining them as connections to be created through design and technology.
The vision of this professional, who at the age of five had Leonardo da Vinci as an idol, is to design a workshop that lets participants roll up their sleeves and learn a method of planning-oriented strategic thinking, picking up new tools and methodologies in a very pragmatic, practical way.
I asked Eric how he sees the future of Smart Mobility technologies in the next 3–5 years.
Eric imagines a change driven by a new mobility that aims to eliminate traffic as we know it. The scenarios he let me imagine are extraordinary and futuristic: from self-driving cars, to a centralized organization of traffic that gives moving cars railway-like efficiency, on to a paradigm shift where the car owner doesn't need to be human at all.
Who knows what will happen? No one. But we can glimpse the consequences of this new scenario, which will mean redesigning our cities, our spaces and our daily habits, and eliminating parking to give way to parks and green areas.
Eric tells me about different cities, such as San Francisco, where technology is making an impact: cities currently dedicate roughly 75% to 80% of their space to moving and parking cars, which is very inefficient and wasteful. What if we could cut down on excessive parking requirements, boost mass transit, and free up land for development? A recent report from the Rocky Mountain Institute argued that the era of private car ownership may peak within a decade, as new networks of shared, electric, possibly autonomous vehicles become cheaper. How can cities be redesigned accordingly? San Francisco sketched out a forward-looking plan to take advantage of these new transportation options and shrink the amount of space devoted to cars. With smaller streets and fewer parking spots, the city would have more land to work with, allowing it to build more affordable housing.
And we all, thanks to Eric and people like him, will start believing in this more and more, as we understand how design processes can make these visions real and ready for the future that is not very far away anymore.
|
https://medium.com/eics-the-immersive-blog/smart-mobility-with-eric-hale-8fe476d49002
|
['Manfredi Domina']
|
2020-02-03 16:17:03.015000+00:00
|
['Emerging Technology', 'Smart Tourism', 'Smart Cities', 'City Planning', 'Design Thinking']
|
2,098 |
How to Develop A Food Recipe App: Cost, Features and Business Model
|
Are you a hotelier, a restaurateur or a passionate cook who wants to share your best food recipes with food lovers? Turn your well-thought-out recipes into a business by developing a food recipe app like Tasty, Yummly, SideChef or BigOven.
Cooking is an art that comes from passion, allowing people to turn any ordinary meal into mesmerising, tempting food. Most people treat this art as a hobby or a profession.
But given how people, especially foodies, are turning towards mobile apps to search through a wide choice of food recipes, now is the time to ride the momentum and launch your own food recipe app.
If you are still unsure whether it is worth developing a food recipe app like Tasty or Yummly, then you need to go through these market insights…
Here is the graph, portraying how frequently people have searched for recipes and how it is kept on rising.
“According to the survey, of the 400 crore people confined to their homes worldwide during the Covid-19 pandemic, 130 crore are in India. And with activities outside the home halted, food recipes, Netflix, health and Ludo were found to be the most-searched leisure topics during the peak of the lockdown.”
“In addition, a survey has discovered that food recipe apps can enjoy a market of 22,755,800 potential users from the US, Canada, the United Kingdom, Australia, India, Pakistan and the Philippines who have searched on Facebook and shown interest in cooking.”
“The mobile app industry is soaring like never before. As of 2019, total mobile app revenue was $461.7 billion worldwide, and it is projected to reach $935.2 billion in 2023, so it is worth considering investing in mobile apps. Moreover, 1 out of every 4 iPhone/iPad users aged 18 and older has searched for cooking recipes.”
Gone are the days when only moms would reach for a cookbook to try various cuisines at home. Plenty of foodies now cook using recipe apps as a daily routine, or ask Google to find the best recipes for a meal.
You must be wondering: with so many food delivery applications available to satisfy your cravings, why would anyone look for recipes and devote long hours to preparation?
Well, the fact of the matter is, each recipe involves a whole gamut of ingredients, and cooking something delectable to our own taste palate brings a greater sense of satisfaction.
There are various traditional cookbooks that help you find a recipe, or you can do some research online, where all the recipes are laid out on Google, YouTube, Facebook and more. Now you can be an expert cook in no time!
But users can do this far more easily by simply installing an app and looking up recipes to cook for different occasions and purposes. Instead of searching the open web for recipes, businesses and mobile app development companies have come up with an excellent app idea that can answer all your questions in seconds.
Right from "what should I cook" to "how should I cook it", a recipe app can offer you all the solutions right away.
All thanks to technological advancement, which helps people try new things in no time: even people who know nothing about cooking and ingredients are now cooking expertly.
With the growing boom of digitization across all business verticals, it makes sense to hire a mobile app development company to build a food recipe app like Tasty or Yummly.
However, whether you are a startup, a passionate cook or an entrepreneur, once you have decided to create a food recipe app, a few questions will pop up in your mind:
What Type of Food Recipe App Can You Develop?
Who Can Earn From This Type Of Application?
Who Will Be The End Users of Recipe App?
What Features Can You Integrate Into a Food Recipe App?
How Can You Make Profit From a Cooking Recipe App?
How Much Does It Cost You To Launch An App?
Once you have the answers to these questions, you will have a better idea of the scope your future app can have in the market and can plan its structure accordingly. Before you dig deep for the answers, it is worth looking at two leading food recipe apps, BigOven and Yummly, which offer from 350,000 to over 1 million recipes to their users. Both apps have succeeded in attracting users' attention despite being altogether different from each other.
Still, it is crucial to realise that sticking to a single strategy will not help you win the game. To become the next big hit in the market, you need to think through the different scenarios that will let your product survive even in tough competition. You need an app development company that understands your business goals and provides the solution that best meets the specific needs of the end-users.
So what’s the idea behind creating a food recipe app?
What Type of Recipe App Can You Develop in 2021 To Make Money?
With hundreds of cooking and food recipe apps available, making your app idea stand out from the crowd is your biggest challenge. The main idea behind creating the app is to surprise your audience with something exclusive rather than simply developing an app with standard features. Here are a few types of app you can consider developing:
No matter which app idea you choose to customize, make sure the app's home screen encourages users to stay engaged with it for long stretches. Search bars on the homepage, for example, invite users to search a broad choice of recipes, list their favourites and engage with the app as much as possible. If your app lacks content, users will get bored and fail to connect with it.
So here are a few parameters you can consider while hiring a software developer for the project:
Advance level of App Functionality:
While customizing the structure of the app and defining its functionality, keep the device landscape in mind. This is where you need to consider your target audience and their devices, so that users are able to access the functionality of your app to the fullest.
Multi-Purpose Apps:
Make sure your app is designed to serve multiple purposes for users, instead of just providing a bunch of recipes. This is what users expect from an app after downloading it. Thus, your cooking and food recipe app should have different modes to satisfy the needs of the users.
Food Cooking Mode:
For people who love to cook, whether men or women, professional chefs or beginners, a cooking mode is the essential functionality users expect from you. From exploring new cuisines to experimenting with fresh ingredients, make sure your app has everything needed to engage users for long stretches.
Also, keep the interface simple and don't crowd it with too many features. You can also hire a software development company that ensures high-quality content in the form of tutorials, video guides, photo instructions and other essential cooking tips that make the process easier for users.
Grocery Shopping Mode:
Integrating ecommerce features into your app and allowing users to buy groceries from listed stores can be a worthwhile decision to entice more users. To further enhance the app's functionality, you can incorporate on-demand food ordering as well.
Who Will Be Interested in Developing and Using This Type of App?
You have a brilliant app idea to start up a business, but the question is: who can leverage this app idea and make money with it in 2021?
Here are the kinds of people who can invest in a recipe app:
Professional Chefs
Health Coaches
Startups in the Food Industry
Passionate Cooks
Nutritionist
Hotelier/Restaurateur
But who will be the potential customers of your application? By knowing your target audience, you will be able to create a recipe app that actually meets the needs of the end-users. Apart from the foodies who love to browse food recipes on social media and other platforms, who else?
Here are the few other categories of the audience that you can choose to target:
People living alone far away from their home
Medical Patients prescribed to be on a special diet
Fitness lovers who prefer to in-take a balanced diet
People fond of exploring new cuisines and eating out
Amateurs in cooking
Travellers who roam from city to city
Besides, every user has different cooking skills, experience and understanding, so make sure you describe each recipe with an easy-to-follow guide that is simple to execute. To avoid making things complicated, you can opt for Android app development solutions that help you build an app with all the necessary features.
What Are the Key Features to Create the Best Recipe Book App in the Market?
Features are the most important part of your app development, as they define how your app will be received in the market. The choice of features will also help you determine the cost of developing a food recipe app in 2021.
So here are a few necessary features you need to integrate into your app:
User Features of Food Recipe App
Registration: The first thing users will notice about your app is the registration process. You can hire mobile app developers to create a comfortable, smooth registration flow so users can log in to your application without any hassle. To make it even easier, you can integrate social logins and allow users to sign in with their social accounts.
Profile Generation: Allow users to create a profile with the necessary information, including contact related info, their preferences and interest areas for the further recipe suggestions.
Meal Plan Out: By integrating this feature, you allow app users to plan out their weekly meals in advance, so they don't need to think about what to cook each day.
Recipe List: Choosing from thousands of recipes is quite challenging for users. Therefore, categorize the list of recipes under various sections including healthy eating, salads, recipes for occasions, regular meal recipes and more.
Search and Advanced Search: This feature will keep users engaged for long stretches and encourage them to try recipes. Also allow users to apply filters to narrow down their searches by category, occasion, method, diet and more (a minimal sketch of such filtering appears after this list).
Nutritional Calculators: Nowadays, people are becoming more and more conscious of their health, so they prefer to watch the calories and nutrients they take in. Make sure your food recipe app has this feature.
Save Recipe: Add an option to save their favourite recipes and access it from anywhere, anytime.
Print Recipe: Integrating this feature helps users access a recipe in printed form, avoiding the inconvenience of holding a smartphone, or losing the internet connection, while cooking.
Personal Note: While watching a tutorial, allow users to add private notes with their own tips for cooking a particular recipe.
Social Sharing: You can expand the reach of your recipe by allowing users to share their favourite recipe on social media. This will help garner more attention of users and engagement for your app.
Photo Sharing: Allow users to click a picture and share it in an app, to help others know how it will exactly look after cooking.
Review and Rating: Users can rate and review the recipes they have tried cooking, giving other app users an idea of how well a recipe performs and how easy it is to execute. Implementing this feature can be time-consuming and complicated, so you can hire app developers to take the complexity out of the task.
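Here is the promised sketch of the search-and-filter idea. It is purely illustrative; the recipe fields and filter names are our assumptions, not a prescribed implementation:

// Hypothetical advanced search: filter a recipe list by keyword,
// category, diet, and a calorie ceiling.
const recipes = [
  { name: 'Paneer Salad', category: 'salads', diet: 'vegetarian', calories: 320 },
  { name: 'Chicken Curry', category: 'regular meals', diet: 'non-vegetarian', calories: 540 },
  { name: 'Fruit Bowl', category: 'healthy eating', diet: 'vegan', calories: 210 },
];

function searchRecipes(list, { keyword = '', category, diet, maxCalories } = {}) {
  return list.filter((recipe) =>
    recipe.name.toLowerCase().includes(keyword.toLowerCase()) &&
    (!category || recipe.category === category) &&
    (!diet || recipe.diet === diet) &&
    (!maxCalories || recipe.calories <= maxCalories)
  );
}

// Example: vegetarian recipes under 400 calories.
console.log(searchRecipes(recipes, { diet: 'vegetarian', maxCalories: 400 }));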
Admin Features for Creating Cooking Mobile App
Manage Recipe and Subscription Packages: Recipes will be the heart of your food app, so allow the admin to manage them seamlessly.
Manage User Profile: This feature allows the admin to view, manage and edit the profile of the users.
Manage Payments: The feature allows the admin to manage the payments done by the users for the app subscriptions, groceries, e-book purchase and more.
View Earnings: With the analytics, you can determine how much your app is earning in a month.
View Ratings and Reviews: Admin can view and manage the reviews and ratings given by the users and also their details.
Manage Community: Admin can manage the community where the app users connect. Admin has the facility to view, hide or delete questions and answers if they come across anything inappropriate.
Content Management: Content is king in your app, so the admin is responsible for managing the content published in the app and making sure it is continuously updated and reviewed.
Notifications: An option to send a push notification or alerts to the app users related to the offers, coupons, new additions and features in the food recipe mobile app.
So now you have all the details about who can earn from this kind of app, who your end-users will be, and what features and functionalities are required to create a food recipe app. But many of you may be wondering: what is the point of all this if the app isn't earning anything?
Best App Monetization Methods for the Food App
How to make a profit from an app is one of the questions most frequently asked by app developers. No matter how brilliant the app development company you have deployed on the project, it is ultimately a waste if your app is not earning any money.
So how can you make money from the food recipe app?
Here are a few app monetization strategies leveraged by thousands of apps in the App Store and Play Store. So let's begin:
User Subscription: You provide additional features, such as ad-free browsing or diet-oriented recipes, to users who opt for a subscription. Your mobile app offers subscriptions for a certain period.
In-app Ads: This is one of the traditional ways to earn money from apps. Advertising has become the most powerful tool in digital marketing, and using it in your recipe app will help you earn money; just make sure the ads are relevant to your product and services. When a user clicks on or interacts with an ad, you get paid.
Freemium: To monetize with this strategy, you need a second, paid version of your app. If users like the free version of your app, they will probably download the paid version, which has extra features.
OK, all set! You now have an idea of how to make money from your app. The only question left is: how much does it cost to develop a food recipe app in 2021?
Well, there is no straightforward answer to this question, so let's look at the factors influencing the final cost of developing a food recipe app…
How Much Does It Cost To Launch A Recipe App in 2021?
While there is no standard cost for developing a food recipe app, every feature has a cost attached to it. Creating a food recipe app like Yummly would cost you around $5,000+, though the figure varies with several factors. The design of the application, its features, its size and its platform are the most critical aspects driving the final cost of development.
The more features and complexity you add to your app, the higher the price. The platform also impacts the price of the application: developing an iOS application is comparatively more expensive than an Android one.
In addition, the development cost also varies according to the location of the developers and their skills and expertise.
Make sure you have the best app development team, comprising:
Project Analyst and manager
Front-end and Backend developer
UX/UI Designer
Quality Assurance
Marketing Team
All in all, with an average developer rate of about $50 per hour, it will cost around $5,000 to $10,000+ (roughly 100 to 200+ hours of work) to create an app like Yummly, BigOven or Tasty. The figure can be higher or lower, depending on the specific needs of your business.
Conclusion
Undoubtedly, developing a recipe app became a real trend in 2020, and more and more foodies are expected to keep embracing this different way of cooking.
So, with the changing psychology of people and their eating habits, developing your own cooking and recipe book app is a great idea for startups. Hopefully, this blog has given you the concept and all the information you need to stand out in the market with your app.
But to make your application a great success in the market, you need to partner with a skilled software development company that can turn your idea into a perfect solution. And leave a comment below if you have any doubts or queries!
https://www.twitter.com/FlutterComm
|
https://medium.com/flutter-community/how-to-develop-a-food-recipe-app-cost-features-and-business-model-9ef5c525edff
|
['Sophia Martin']
|
2020-12-29 05:55:26.346000+00:00
|
['Mobile App Development', 'Apps', 'Mobile Apps', 'Technology', 'Startup']
|
2,099 |
Netflix’s Looming Merger
|
Photo by Dawid Labno on Unsplash
Disclaimer: I’m not a prognosticator and I’m certainly not an entertainment industry expert. As a result, this article is more like a working paper rather than a final, absolute argument. The main objective is to illustrate a line of reasoning heavily inspired by current events and this week’s study of the brilliant book Blue Ocean Strategy.
Netflix and Fragility
A few weeks ago, I wrote about Netflix and its fragility problem. To put it simply, Netflix has always been a distributor of sorts, a centralized hub for entertainment. Just like Blockbuster. The two companies weren’t rivals (at first) because they operated on completely different service models. But they provided the same service.
And as history showed rather quickly, Blockbuster’s original model was quickly made inferior. Netflix’s quick success was anchored on the innovation of its DVD-by-mail system which offered greater value than Blockbuster (more selection, convenient browsing online) at lower prices. Blockbuster eventually tried to mimic this service but the effort (Blockbuster Direct) was short-lived, inferior, and came too late (HT to Adam Gonnerman).
To put it in Blue Ocean terms, Netflix’s model perfected the value innovation of home media distribution. With lower costs and greater value never-before-offered, they set themselves on a course for success. To their immense credit, they continued to chase that value innovation to the next evolution.
After seeing the potential of online streaming, Netflix ventured into its next Blue Ocean as a pioneer in the Streaming Video On Demand (SVOD) industry. Few were equipped to enter this space and being the first real distributor gave Netflix a chance to retain all the value they originally held and simply offer it in a new, more-frictionless channel. It would be akin to Amazon somehow creating the Star Trek replicator so that every product was available instantly on demand, eliminating all the wait times that goes into the shipping and delivery component of their service.
Which is to say that Netflix’s streaming was something of a miracle. No more worries about DVDs, preordering, or being locked to a single device (DVD players). Customers flocked to it.
We came for the novelty and convenience but we stayed for the value. That value was built on the content selection. By virtue of being so early to the game, Netflix offered content providers the only real channel to share their entertainment. As popular television shows and movies came to the service, customers found just about everything they had wanted from the mail-order service and thus shifted to streaming-only. New subscribers came aboard, too, and the business just grew by leaps and bounds.
This showed Netflix’s great strength. And its vulnerability. Because what draws subscribers to join is different from what compels them to stay. The convenience is an easy attraction but the content is what keeps us.
So what happens when the actual content providers decide to take their ball (i.e., content) and go home?
The Reddening Ocean
We’re about to find out. The shift is happening. Netflix is losing some of its most popular content as AT&T’s WarnerMedia launches its own ship into the reddening waters of the SVOD ocean. Disney is doing the same. And NBCUniversal. And Apple. And existing competitors like Amazon are strengthening their offerings with live sports and even beating Netflix at the distribution game with new partnerships that once went to Netflix when it was the only real game in town. Meanwhile, separate content providers, like CW, are now exiting any exclusive deal with Netflix so that they can draw new bidding wars in this expanded ocean.
I should mention, too, that Disney isn’t just offering their own service for their own content. They’re also taking over Hulu to broaden their SVOD value, expanding just as aggressively as Amazon.
This will only continue. Because we’re still in the early dawning stages of the SVOD industry. And the action already casts an ominous shadow on Netflix. Despite all appearances, I honestly think they are in desperation mode.
Losing Grip On Value Innovation
To understand why, consider the following two bits of information:
First, as reported by the Wall Street Journal, non-original programming constitutes 72% of the viewing time spent on Netflix. In a simplistic view, this means that 72% of the value Netflix offers to customers is severely threatened as competitors regain their content.
Meanwhile, Netflix’s subscription fees have increased into HBO territory.
Value is decreasing rapidly from this one-two punch. Netflix is essentially being pushed to the ropes. They are losing what customers want and raising prices at the same time.
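To make the squeeze concrete, here is a hedged toy calculation in Python. The 72% share of viewing time is the Wall Street Journal figure quoted above; the subscription prices are round illustrative assumptions rather than Netflix's actual fees, so treat the output as a sketch of the direction, not the magnitude.

```python
# Toy model of the value squeeze described above.
# The 72% share of viewing time is the WSJ figure cited in the text;
# the price figures are round illustrative assumptions, not Netflix's real numbers.

licensed_viewing_share = 0.72  # non-original programming (WSJ)
price_before_increase = 11.0   # assumed monthly fee before the hikes, USD
price_after_increase = 13.0    # assumed monthly fee after the hikes, USD

# If perceived value tracks viewing time, losing licensed content puts
# roughly this share of what subscribers pay for at risk:
print(f"Share of viewing value at risk: {licensed_viewing_share:.0%}")

# The effective price per unit of remaining value rises sharply:
remaining_value_share = 1.0 - licensed_viewing_share
before = price_before_increase / 1.0
after = price_after_increase / remaining_value_share
print(f"Price per unit of value before: ${before:.2f}")
print(f"Price per unit of value after:  ${after:.2f}")   # roughly a 4x jump
```

Under these assumptions, subscribers end up paying several times more per unit of the value that originally drew them in, which is exactly the weakened position competitors can attack on price.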
Hulu has already attacked this weakened position by lowering its subscription fees. Disney and others will also undercut once their services launch. And given that Netflix’s borrowing and spending have skyrocketed, the company can’t afford the pending price war.
Why has this happened? Because Netflix started and succeeded as a distributor. They provided content to people in convenient ways. Remove that content and you’re only left with convenience. This is the fragility.
If that sounds strange, think of it this way: imagine if Amazon could no longer sell major brands. Would you still keep a Prime membership? The mass-market value of Amazon is built squarely on its convenient selection of major brands that consumers trust.
Similarly, the mass-market of Netflix is built squarely on the convenient selection of major entertainment that consumers trust. We are loyal to Friends, Frasier, The Office, Parks and Rec, and whatever else. We are not loyal to Netflix.
The only way for Netflix to guard against the fragility is to find new content that serves people just as well and build loyalty around it.
This, of course, is why Netflix exploded with a content-creation frenzy that dumped 1,500 hours of new entertainment onto the service in 2018. It’s almost obscene. And I think it reeks of desperation. Netflix saw this day coming and decided to go on a spending spree, borrowing and burning as fast as it could, to throw all the content it could onto the service and see what sticks.
Has it worked? I don’t know. I doubt it. And Netflix won’t give any straight answers.
The approach reminds me of Atari’s strategy in its heyday. Deliberately flooding the market with more content just cheapened the overall experience. Will bad Netflix shows become this industry’s version of Atari’s E.T. the Extra-Terrestrial?
I can’t help but think the company is losing its grip on the value innovation.
To refresh that idea, let’s return to Blue Ocean Strategy. The integral part of value innovation is to find ways to break the value-cost trade-off. As the authors write,
Value innovation requires companies to orient the whole system toward achieving a leap in value for both buyers and themselves.
That’s not happening anymore. When Netflix was a distributor of all the in-demand content, it had a perfect alignment: a convenient service at a low cost that continued to develop robust offerings, which added more customers, which in turn added more content, in a virtuous cycle. Again, as a distributor, it worked great.
But with the non-original programming disappearing, Netflix is no longer a distributor. It’s an ersatz version of HBO. It’s another Disney. Another NBCUniversal. Its primary offering will be original content.
And given the quality of that content, it’s actually not a Disney or HBO. Instead, it’s more like Starz. Only more expensive.
That expense would not be an issue, per se, if there was a commensurate level of quality. Netflix is trying to provide that quality. But I don’t think it’s succeeding. The value innovation is disappearing.
Drinking Milkshakes
I’m reminded of the powerful ending scene in the movie There Will Be Blood. Without spoiling it, the film ends with the revelation of how the lead character used his wiles and the laws of gravity (i.e., drainage) to steal value (i.e., oil) from a competitor.
Netflix was the pioneer. It developed the model for true value innovation in the SVOD industry. In many ways, it created the technology. And while the company still has the advantage as the single best streaming experience, that convenience can (and will) be easily copied. It’s already happening. That milkshake has many straws.
It reminds me of a very important observation from the authors of Blue Ocean Strategy,
Value innovation occurs only when companies align innovation with utility, price, and cost positions. If they fail to anchor innovation with value in this way, technology innovators and market pioneers often lay the eggs that other companies hatch.
It brings new meaning to the idea that Netflix is walking on eggshells.
As those proverbial eggs hatch, Disney and NBCUniversal and others will draw us to their services with lower prices (free with ads!), better content, and longer trials. Existing players like Amazon and HBO might even get aggressive with lower prices for a period.
Meanwhile, certain components of Netflix’s value, like its kid-friendly experience, will be gobbled up by Disney or others who have a strong position in those areas.
Netflix will also try to diversify. They’ve already ventured into interactive content and they’ll try gaming, too. But there are others already occupying that world and others moving in.
There’s no easy place to turn to hold onto value innovation.
Unless they merge.
The Consolidation
It won’t happen tomorrow but the entry of these new competitors offering trusted content at lower prices will create big shifts over time. Consumers will get frustrated by all the different services. The glory days will be behind us. Many will weigh the options between five or six competing services (to say nothing of regular cable television and the world’s greatest public access channel, YouTube) and start to trim their budgets.
Where do they go? History and current behavior show that they will not go with what is cheapest but with what has the most value. That value is in familiar programming that commands loyalty. Which Netflix will soon lack.
Nonetheless, there will be a price war to hasten this shift and, once it kicks into high gear, we’ll see the consolidation. As seen in countless industries before. Including cable TV.
Netflix will merge with someone who wants the remaining subscriber base, technology, and any of the valuable content that remains in-demand. Will it be Apple? Many hope for that. I see how they could combine to create something unique and valuable.
All the same, we’re headed to the next stage of the SVOD industry. Netflix created it. Others now enter. The battle begins. And like Thunderdome, fewer will leave this arena than entered it.
For reasons described above, I don’t think Netflix makes it out alone. Blue Ocean Strategy helps me understand this. Along with Taleb’s Antifragile (review here) and the nature of an entity’s response to stress and pressure.
So it goes. Meet the new TV. Same as the old TV. A blue ocean turns red. Some ships sink. Some combine. The waters grow calm. Balance returns as the many are winnowed down to a few and an industry matures into something familiar.
https://medium.com/striving-strategically/netflixs-looming-merger-c531bbbfcf4 | ['Norm Wright'] | 2019-05-20 14:01:42.885000+00:00 | ['Entertainment', 'Netflix', 'Business', 'Technology', 'Strategy']