Cryptography, or the practice of securing data between a source and a destination, is arguably network security's most basic form of defense, but in order to be effective, the methodology behind it must be both sophisticated and efficient. Encryption that is too complex can lead to slow, cumbersome connections, but encryption that lacks complexity or has been compromised can leave your network vulnerable.
Even if network security isn't necessarily your area of expertise, knowing the basics of digital cryptography can help you spot and remediate weaknesses such as deprecated ciphers or insecure SSL/TLS connections.
To demystify network encryption and help you know exactly what to look for to maintain better overall security hygiene, we've made our cryptography training path free and open to everyone. By taking our cryptography courses, you'll understand the basics of encryption, decryption, secure connections, and how to interpret key security data.
We've broken up the training path into seven short courses. Each course includes a video module with easy-to-follow narration and graphics, and a resources module to help you dive deeper into each topic. Within each course, we've also included instructions on how to discover cryptographic weaknesses in your network using Reveal(x), which you can try out yourself using our online demo.
To take the course, head to our Customer Portal and click Sign Up—you don't have to be a customer to register. Once you're in the portal, click on Free Cryptography Training to start the learning path.
Source: https://www.extrahop.com/company/blog/2021/free-cryptography-class/
Bespectacled video conferencing participants have more to worry about than uncombed hair or spinach stuck between their teeth. According to newly publicised research, they may also be unwittingly leaking sensitive information displayed on their computer screens.
Boffins from the University of Michigan teamed up with their counterparts at Zhejiang University in China to investigate whether wearing eyeglasses while using a computer poses a security risk.
Specifically, the researchers explored whether it was possible to determine what might be displayed on the screen by examining the reflections in a person's glasses while they were on a Zoom call or Google Meet session.
The researchers’ paper, entitled “Private Eye: On the Limits of Textual Screen Peeking via Eyeglass Reflections in Video Conferencing,” describes how they set up a controlled lab experiment, which proved it was possible to reconstruct and recognise on-screen text with over 75% accuracy when reflected in the glasses of a video conference participant.
Of course, the effectiveness of the technique relies upon a number of factors. These include the curvature of the eyeglasses' lenses – with prescription glasses proving better at producing a useful reflection than glasses designed to block blue light.
Furthermore, of course, the quality of the video camera is key.
According to the research, a typical 720p webcam can read on-screen text as small as 10mm via its reflection.
As researcher Yan Long told The Register:
“The present-day 720p camera’s attack capability often maps to font sizes of 50-60 pixels with average laptops.”
However, as higher-resolution 4K webcams become more common, the snooping technique could provide access to text displayed in smaller fonts:
“We found future 4k cameras will be able to peek at most header texts on almost all websites and some text documents.”
But it’s not just text reflected from a screen that could be leaked by a wearer of spectacles on a video conference call.
The researchers also found the technique would reveal which websites a user was viewing – with 94% accuracy when tested against the Alexa Top 100 most popular websites.
So, if you really feel that this might be a problem in your organisation, what can be done?
Well, the researchers have an unorthodox mitigation.
They suggest that Zoom users take advantage of a video filter feature (found under “Background and Effects” in the video conferencing app’s settings) that can automatically adorn your face with reflection-blocking cartoon sunglasses.
The likes of Skype and Google Meet don't offer similar protection at the moment, but presumably wouldn't find it too difficult to add if the threat genuinely became a concern.
Although it’s easy to make fun of a subject like this, reflections have leaked information in the past with serious results.
For instance, in 2019, an obsessed fan assaulted a Japanese popstar after he determined where she lived by zooming in on the reflections in her eyes in selfies the star had posted on social media.
Source: https://cybersecurityworldconference.com/2022/09/21/reflections-in-your-glasses-can-leak-information-while-youre-on-a-zoom-call/
America Recycles Day, a Keep America Beautiful national initiative, is the only nationally-recognized day dedicated to promoting and celebrating recycling in the United States. Each year, on and in the weeks leading into Nov. 15, thousands of communities across the country participate by promoting environmental citizenship and taking action to increase and improve recycling efforts in America.
On this America Recycles Day, the EPA recognizes the importance and impact of recycling, which has contributed to American prosperity and the protection of our environment. Consumers and companies all over the country are taking part in organized recycling events and others are doing what they can in their own homes and communities.
Being an R2 and ISO14001 certified company, HOBI believes in the importance of responsibly recycling electronic waste. However, we know that there can be some confusion when it comes to recycling e-waste. That’s why we’ve provided a few tips and tricks for you to remember when it comes to your retired tech devices.
What materials can be recycled? Recyclable household electronics include televisions, computers, computer monitors and displays, tablets, stereos, cell phones and more.
Currently, more than 80 percent of electronics that are no longer in use are simply stored in homes and businesses rather than recycled. The longer devices sit around, the more the value of their materials decreases. So don't hang on to devices you no longer use; instead, find a local recycler that will accept the material.
While many areas have responsible electronics recycling programs readily available year-round, many consumers do not take advantage of them. Yet recycling has so many benefits: not only does it help conserve energy and raw materials, but it can improve the economy as well. In fact, an EPA study found that every 10,000 tons of materials recycled supports nearly 16 jobs and $760,000 in wages.
The recycling rate has increased from less than seven percent in 1960 to the current U.S. rate of 35 percent, but we still have a long way to go. That's why, even though America Recycles Day comes once a year, you can help improve your environment every day. HOBI urges you to take the pledge along with 76,340 other Americans and help take the necessary steps towards creating a more sustainable planet.
Source: https://hobi.com/join-americas-recycling-efforts-today/join-americas-recycling-efforts-today/
According to Gartner, by 2024 low-code application development will account for more than 65% of app development activity
In one of our earlier articles, we talked all about GPT-3, its capabilities and limitations. Since the launch of OpenAI's GPT-3, with its ability to write code in any language, we have been wondering: will the latest AI kill coding? Not very long ago, researchers wondered whether AI would be able to write code by 2040. With GPT-3, machine-dominated coding is right here, knocking at our doorstep.
Apart from the rapid advances in AI capabilities, another trend is forcing people to rethink the programming jobs of the future: "no-code/low-code".
“No-code” refers to visual tools that make it easier for anyone to build new products, whether it is websites, designs, data analyses, or models. They enable companies and professionals with minimal or no coding experience to build apps. WordPress, Wix, and Shopify are good examples of no-code tools that enabled millions of people to do things on their own rather than hire a developer or a designer.
Advantages of low-code/no-code approach
– Enables faster development and subsequent deployment
– Amplifies developer capabilities and organizational agility
– Can substantially reduce development costs
The low-code/no-code trend is not restricted to DevOps; it is increasingly becoming part of AI development too, with a growing number of companies turning to no-code platforms for machine learning. Automation is increasingly coming for the tasks data scientists perform, and every major cloud vendor has invested heavily in some type of AutoML initiative or no/low-code AI platform.
The user simply feeds in data, and the AutoML system automatically determines the approach that performs best for the particular problem and builds an ML model. It aims to automate the entire AI workflow.
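To make the idea concrete, here is a minimal, hedged sketch of what an AutoML system automates, written with scikit-learn. The candidate models and the toy dataset are illustrative stand-ins; real AutoML platforms also automate feature engineering and hyperparameter search.

```python
# Minimal sketch of the AutoML idea: try several candidate models and keep
# the one with the best cross-validated score. The model list and dataset
# are illustrative; production AutoML also searches hyperparameters and
# feature pipelines.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # stand-in for "just feed in data"

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "gradient_boosting": GradientBoostingClassifier(),
}

scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(f"Selected model: {best} (CV accuracy {scores[best]:.3f})")
```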
My article on AutoML covers what it is, the key market players and its advantages, and is a must-read for a quick overview of the field.
Low-code/no-code tools are definitely the future: a simple drag and drop will let people create magic. In addition to enabling faster development and deployment, low-code/no-code tools allow non-technical professionals to take up development roles. As building things on the internet becomes easier and more accessible, more people will become makers. The trend will also nudge medium-skilled coders to upgrade their skills, become experts in multiple low-code platforms, or evolve into niche citizen developers for a particular industry.
Source: https://www.dailyhostnews.com/towards-a-no-code-era-reinventing-development
How Quantum Computers Could Cut Millions Of Miles From Supply Chains And Transform Logistics
(Forbes) Christopher Savoie, Ph.D., CEO and founder of Zapata Computing, explains how quantum computers are poised to optimize supply chains involving a wide range of intersecting variables. This could transform the distribution of everything from life-saving drugs and critical resources to electronics, food and basic consumer goods.
The more data these systems have access to, the more effective they become. One can imagine a future where quantum optimization algorithms work with live IoT data from vehicles, roadways and inventory endpoints. This data would enable quantum algorithms to adjust routes in real time based on real-world conditions. The overall synergy could help logistics companies save money by continuously optimizing routes based on inventory stock-outs, vehicle performance, traffic patterns, weather conditions and more.
Quantum-powered supply chain optimization algorithms could mitigate costly downtime in the wake of natural disasters, political conflicts and other challenges. Indeed, in the future, quantum computers can help to quickly address and overcome supply chain disruptions such as those introduced by our current pandemic.
As an added benefit, quantum-optimized supply chains should also reduce the carbon footprint for entire industries, a universally recognized goal. Transportation accounts for 28% of all greenhouse gas emissions. Optimizing routes by just 5% for U.S. freight trucks alone would reduce carbon emission by roughly 22 million tons each year.
Companies that start using quantum software now should reap the rewards as the hardware inevitably matures. My company is working with Coca-Cola Bottlers Japan Inc. to help it explore and test how quantum can better optimize its deliveries servicing approximately 700,000 vending machines.
The challenges that we face with regard to supply chains and logistics will only become more complicated. The good news is that quantum computing can provide a means for mastering this complexity.
Source: https://www.insidequantumtechnology.com/news-archive/how-quantum-computers-could-cut-millions-of-miles-from-supply-chains-and-transform-logistics/
Benchmarking – i.e., comparing your business performance against certain reference points – is a popular and potentially powerful way to glean insights that can lead to improved performance. In this article, we'll explore the various types of benchmarking and how these approaches can benefit your business.
What’s the difference between benchmarks and KPIs?
People often ask me this, and there seems to be a general assumption that benchmarks and KPIs are the same thing. But they are different.
- Benchmarks are reference points that you use to compare your performance against the performance of others. These benchmarks can compare processes, products or operations, and the comparisons can be against other parts of the business, external companies (such as competitors) or industry best practices. Benchmarking is commonly used to compare customer satisfaction, costs and quality.
- KPIs, on the other hand, are decision-making and monitoring tools, used to track performance in relation to strategic goals. In other words, KPIs chart whether an individual, project, team, business unit or entire company is on track to achieve its objectives. KPIs are a bit like an early warning system, flagging up where things might be heading off-course and where action might be needed.
So, when you use KPIs, you’re comparing progress in relation to a specific goal. And when you use benchmarks, you’re comparing against others. You can use benchmarking to put your own KPIs into context and to set targets for your KPIs.
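To make the distinction concrete, here is a toy illustration with invented numbers: the same revenue figure is judged once against an internal target (the KPI view) and once against an external reference point (the benchmark view).

```python
# Toy illustration (all figures invented): a KPI tracks progress toward
# your own goal; a benchmark compares you against an external reference.
quarterly_revenue = 1_200_000   # actual result
revenue_target = 1_500_000      # strategic goal -> KPI comparison
industry_median = 1_100_000     # external reference -> benchmark comparison

kpi_progress = quarterly_revenue / revenue_target
benchmark_gap = (quarterly_revenue - industry_median) / industry_median

print(f"KPI: {kpi_progress:.0%} of target")            # 80% of target
print(f"Benchmark: {benchmark_gap:+.0%} vs. industry")  # +9% vs. industry
```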
Both KPIs and benchmarks are used to identify opportunities for improving performance, which may be where the confusion arises.
Exploring the different types of benchmarks
Broadly speaking, benchmarks break down into two core categories: internal and external. Internal benchmarking compares performance, processes and practices against other parts of the business (e.g. different teams, business units, groups or even individuals). For example, benchmarks could be used to compare processes in one retail store with those in another store in the same chain.
External benchmarking, sometimes described as competitive benchmarking, compares business performance against other companies. Often these external companies are peers or competitors, but that's not always the case; for example, you can use benchmarking to compare performance, processes and practices across different industries.
Three ways to use benchmarking
Benchmarking, whether internal or external, is used in three key ways. They are:
- Process benchmarking. This is all about better understanding your processes, comparing performance against internal and external benchmarks, and finding ways to optimise and improve your processes. The idea is that, by understanding how top performers complete a process, you can find ways to make your own processes more efficient, faster and more effective.
- Strategic benchmarking. This compares strategies, business approaches and business models in order to strengthen your own strategic planning and determine your strategic priorities. The idea is to understand what strategies underpin successful companies (or teams or business units) and then compare these strategies with your own to identify ways you can be more competitive.
- Performance benchmarking. This involves collecting information on how well you’re doing in terms of outcomes (which could mean anything from revenue growth to customer satisfaction) and comparing these outcomes internally or externally. This can also refer to functional performance benchmarking, such as benchmarking the performance of the HR team (using metrics like employee net promoter score or staff engagement surveys) or the marketing team (measuring net promoter score or brand awareness, for instance).
Why you might want to consider benchmarking in your organisation
Each of these ways of benchmarking has one key goal in mind: to identify gaps in performance and uncover opportunities to improve, whether that means making processes more efficient, reducing costs, increasing profits or boosting customer satisfaction. Ultimately, what drives companies to benchmark is the need (or desire) for improvement.
So whether you want to simply compare your internal performance, catch up to a competitor, better understand and track your peers, or become a market-leader in your industry, benchmarking can be an incredibly useful tool.
However, benchmarking is not a magic bullet for improving performance – it’s a part of the solution, not the complete solution. The complete solution requires you to set clear strategic goals, identify your critical business questions, design KPIs that help you answer those questions and track performance against your goals, and compare performance using benchmarking.
I certainly wouldn’t advise a company to focus all their attention on benchmarking at the expense of tailored, carefully designed KPIs. But, when viewed as part of the complete performance management picture, benchmarking provides a useful way to glean valuable performance-boosting insights.
Where to go from here
If you would like to know more about KPIs and performance management, check out my articles on:
Or browse the KPI Library to find the metrics that matter most to you.
Source: https://bernardmarr.com/the-different-types-of-benchmarking-examples-and-easy-explanations/
When people talk about IT training, they usually think about courses – either instructor-led training (ILT) or some combination of ILT and/or online self-paced courses. Although courses continue to be an important part of IT training, training today should consist of more types, or modalities, of training.
Here is a list of some of the most common IT training modalities available to learners today:
- Live instructor-led training (in classroom and/or virtual)
- Elearning courses
- Books (print and/or online)
- Audio (podcasts and/or audiobooks)
- Mentoring (in-person and/or virtual)
- Practice environments (on premises or virtual)
- Assessments and practice exams
- Resource files (PDFs, PPTs, Docs, etc.)
- Social learning (discussion groups, community sites, etc.)
Providing a diverse set of learning modalities for IT makes sense for many reasons.
1. Adapting to individual learning styles
Not everyone likes to learn in the same way. Some people prefer going to offsite classroom training, while others prefer to take training at their own pace from the comfort of their office or home. Some may prefer to learn from videos, while others prefer to learn from books. Providing access to multiple ways of learning will help assure that everyone can learn the skills they need in the way that feels most comfortable for them.
2. Reinforcing learning
The best way to learn is to study, and then have reinforcement of what you are learning. For instance, before sending someone to an ILT course, you might want to have them take some introductory eLearning courses or read a book so they can get the most value out of the expensive ILT class. Another example is to create a learning program that consists of short videos, sections of books, eLearning courses, assessments, and access to a practice environment where the learner can practice the skills they are learning. The various learning types all work together to reinforce the learning.
3. Real-world practice of skills
Courses, videos, books, mentors and other modalities are all great, but sooner or later the learner is going to have to put the skills they are learning to the test. Providing access to a virtual practice environment where the learner can practice writing code, configuring hardware, or honing the skills needed to pass a certification exam can really solidify learning and prove that the learner has indeed acquired the required skills.
4. Creating a scalable learning environment
Most IT departments do not have an unlimited training budget. Sending people out to offsite instructor-led training can be very expensive. For the cost of one multi-day boot camp for one person, you could provide many people access to a one-year subscription to unlimited eLearning.
Examples of learning modalities at work
Let’s take a look at a few examples of how providing multiple learning modalities can help your IT organization:
- If your company is hacked, you need to know how to stop it. Having someone take an hour-long course to figure out what to do doesn’t make sense. What your team needs is quick access to a short video or a section of a book that describes how to stop the specific attack. Or perhaps access to a mentor who has faced a problem like this before and who can provide a few quick tips on what can be done to help stop the attack.
- If someone needs to learn a new programming language, a few 5-minute videos or a few sections of a book will not be enough. They will probably need at least one course (or maybe a series of courses), supported by in-depth books, practice coding environments, access to source code samples, etc.
- If someone needs a quick refresher on a skill, they don’t want to retake an entire class or read a whole book. A few targeted videos and a section or two of a book may be all they need to get back up to speed.
Finding the right providers of multiple learning modalities for IT
Providing access to a range of learning modalities for IT can be challenging, but it is well worth the effort.
Most IT training vendors provide one, two, or at most a handful of learning modalities. Unless you shop carefully, you may end up purchasing from many vendors to get the mix of modalities best suited to your IT teams' training needs, which can create a disjointed user experience with multiple instructional styles, formats, navigation schemes, etc. However, a few vendors can provide a wider range of modalities to help you establish a baseline of training; you can then look at adding one or two other vendors (for instance, an ILT provider) to address niche needs.
When you talk to vendors, ask them point blank how they can support your training needs across multiple modalities. If their answer is, “we just provide video-based training” or “we only provide ILT training,” broaden your net to include vendors who offer more modalities.
As you assemble the learning modalities for your teams, you may be challenged to find a good way to provide access to all the content. As part of your vendor evaluation process, ask them how their content works in today’s modern Learner Experience Platforms that have been optimized to support a range of learning modalities from many sources. They may have a platform that could easily address the needs of your IT teams, or you may have to look at a separate platform that will support all your learning.
Source: https://www.cio.com/article/228808/the-value-of-multiple-modalities-in-it-training.html
A bottleneck describes the scarcity of resources such as manpower, machinery, (partial) products, or manufacturing materials. A bottleneck worsens or delays the process or its performance, increasing the cycle time or even bringing the process to a complete standstill. Thus, a bottleneck usually holds significant potential for optimization. Bottleneck sources can be found and eliminated by process discovery and root cause analyses.
Where can bottlenecks occur?
In principle, bottlenecks can occur in any process at any time. The occurrence of bottlenecks depends, among other things, on planning factors, such as capacity or the failure of required deliveries. In general, a distinction is made between organizational, automatic, procurement-related, and financial bottlenecks.
Organizational bottlenecks occur for instance when a process step can only be executed by one person. If this is the case, the person may be overloaded, and their work may accumulate, delaying or even halting further process implementation. The same applies if the person responsible is absent and no replacement personnel is available. The principle is the same for machines. However, organizational bottlenecks can also be caused by the process itself. For example, in a production process with very long transport times for intermediate products, bottlenecks quickly arise because the continuation of the subsequent production process depends on the delivery.
Bottlenecks in procurement occur when there is not enough stock of a material to be procured for all planned process runs. At an automobile manufacturer, for example, there must always be enough windscreens available for a smooth and bottleneck-free run of the car manufacturing process. As soon as no more windscreens are available, the process comes to a standstill.
Financial bottlenecks describe the scarcity or even lack of monetary resources. Financial bottlenecks can cause organizational or procurement-related bottlenecks because invoices cannot be paid or required manufacturing materials cannot be procured.
How can bottlenecks be eliminated?
In general, bottlenecks can be prevented by detailed capacity planning. This includes capacity planning for the necessary production equipment and employees, as well as the calculation of default risks. Organizational bottlenecks can be identified by estimating or measuring employee or machine utilization. If the employee responsible or the machine carrying out the work is at full capacity, they should be relieved, for example by hiring additional employees for an area or by purchasing additional machines. If the bottleneck lies in the process itself, a company will have to restructure; in this case, process mining analysis can be a great help in quickly identifying bottlenecks, their likely causes, and more advantageous process variants.
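As a hedged illustration of this utilization-based approach, the sketch below (all station names and hours are invented) flags the process step whose workload sits closest to its capacity:

```python
# Toy utilization check: the station whose required hours come closest to
# its available capacity is the likeliest organizational bottleneck.
stations = {
    # station: (required_hours_per_week, available_hours_per_week)
    "cutting":  (70, 80),
    "welding":  (78, 80),  # ~98% utilized -> bottleneck candidate
    "painting": (55, 80),
    "assembly": (64, 80),
}

utilization = {name: req / avail for name, (req, avail) in stations.items()}
bottleneck = max(utilization, key=utilization.get)

for name, rate in sorted(utilization.items(), key=lambda kv: -kv[1]):
    marker = "  <-- bottleneck" if name == bottleneck else ""
    print(f"{name:10s} {rate:5.0%}{marker}")
```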
To eliminate bottlenecks in procurement, materials can be sourced from additional manufacturers. In addition, systems should be introduced that signal early (depending on the procurement lead time) when existing reserves are running low. If such a signal were given, for example by lights at a certain production step, new materials must be supplied. Ideally, the system permanently checks stock levels so that there is never a shortage of resources; if the required raw materials become too scarce, it automatically reorders them from the manufacturer. In this way, future bottlenecks can be avoided through automation.
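A minimal sketch of such an automatic reorder signal might look like the following. The reorder-point formula (expected usage during the supplier lead time plus a safety buffer) is a standard inventory heuristic, and every figure here is invented:

```python
# Reorder-point check: reorder when stock on hand can no longer cover
# expected consumption during the supplier lead time plus a safety buffer.
DAILY_USAGE = 120       # windscreens consumed per day (invented)
LEAD_TIME_DAYS = 7      # supplier delivery time (invented)
SAFETY_STOCK = 200      # buffer against demand spikes (invented)

REORDER_POINT = DAILY_USAGE * LEAD_TIME_DAYS + SAFETY_STOCK  # 1,040 units

def needs_reorder(stock_on_hand: int) -> bool:
    """Signal the purchasing system once reserves dip below the threshold."""
    return stock_on_hand <= REORDER_POINT

print(needs_reorder(950))    # True  -> trigger an automatic purchase order
print(needs_reorder(2500))   # False -> reserves are still sufficient
```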
Source: https://appian.com/process-mining/bottleneck.html
A new website bot from the Centers for Disease Control and Prevention (CDC) can help members of the public decide what type of medical care to seek if they are exhibiting symptoms of the COVID-19 coronavirus.
The “coronavirus self-checker,” named Clara, considers symptoms and other risk factors through a conversational online assessment to help individuals decide the “right level” of clinical care. Launched through Microsoft’s Azure platform, it considers factors such as location, age, and physical symptoms.
In a March 20 blog post, Microsoft explains that Clara utilizes the company’s existing Healthcare Bot service. The service relies on artificial intelligence to help “frontline organizations” respond to COVID-19 inquiries and free up healthcare professionals for other critical care.
“The need to screen patients with any number of cold or flu-like symptoms … is a bottleneck that threatens to overwhelm health systems coping with the crisis,” Microsoft wrote in the post, but services such as the CDC chatbot help overcome that hurdle. In addition to risk assessment, the bot offers three other services: clinical triage based on CDC protocols, answers to frequently asked questions, and worldwide COVID-19 metrics.
CDC did not immediately respond to a MeriTalk request for comment.
Source: https://origin.meritalk.com/articles/new-cdc-bot-helps-self-assess-covid-19-symptoms/
A laptop would just be a cold piece of aluminum with a flat battery if you don’t have a power socket at hand. It’s hard to get any work done when you’re peppered with pop-ups and warning messages when the battery power gets low. So here are some tips you can use to prolong the life of your precious laptop battery.
Some truths about your laptop battery
Batteries in many devices nowadays are lithium-based — either lithium-ion or lithium-polymer — so users must take note of the following guidelines for their proper maintenance:
- They can’t be overcharged, even though you leave your battery plugged in for a long period of time. When the battery hits 100%, it’ll stop charging.
- Leaving your battery completely drained will damage it.
- Batteries have limited lifespans. So no matter what you do, yours will age from the very first time you charge it. This is because as time passes, the ions will no longer be able to flow efficiently from the anode to the cathode, thereby reducing its capacity.
What else can degrade your battery
Besides being naturally prone to deterioration, your battery can degrade due to higher-than-normal voltages, which happens when you keep it fully charged at all times. Even though a modern laptop battery cannot be overcharged, keeping it constantly at 100% adds a stress factor that will harm it.
Both high temperatures (above 95°F) and low temperatures (between 32°F and 41°F) can also reduce battery capacity and damage its components. The same goes for storing a battery for long periods of time, which can lead to a state of extreme discharge. Another factor is physical damage: remember that batteries are made up of sensitive materials, and a physical collision can damage them.
How to prolong your battery life
Now that you know some facts about your laptop battery, it’s time to learn how to delay its demise:
- Never leave your battery completely drained.
- Don’t expose your battery to extremely high or low temperatures.
- If possible, charge your battery at a lower voltage.
- If you need to use your laptop for a long period of time while plugged into a power source, it's better to remove the battery, because a plugged-in laptop generates more heat, which will damage the battery.
- When you need to store your battery for a few weeks, you should recharge your battery to 40% and remove it from your laptop for storage.
These are just a few tips on extending the life of your hardware. There are many more ways you can maximize your hardware efficiency and extend its longevity. Call our experts today to find out more!
Source: https://www.dimension.irissol.com/blog/2017-12-tips-and-tricks-to-prolong-laptop-battery-life/
In a globally interconnected world, in which supply chains extend across both countries and continents, it only makes sense to consider the potential risks to those supply chains from a massive cyber attack. What happens, for example, if malicious threat actors decide to launch a cyber attack against the maritime ports of the Asia-Pacific region, which is home to 9 of the world’s top 10 container ports? Insurance company Lloyd’s of London, in partnership with the University of Cambridge Centre for Risk Studies and the Cyber Risk Management (CyRiM) project at Singapore’s Nanyang Technological University, has simulated such a theoretical attack, and projected that the cost of cyber attack could reach $110 billion in a worst-case scenario.
Cyber catastrophes vs. natural catastrophes
To put that figure into context, $110 billion is approximately one-half of the insured losses due to natural catastrophes in 2018. Last year, 394 different natural catastrophes around the world resulted in over $225 billion in insured losses. A single cyber attack has the potential to wreak a similar amount of damage, says Lloyd’s of London, which also points to the high rates of underinsurance in the Asia-Pacific region. The logic is clear: maritime port cities in the Asia-Pacific region need to take the threat of a cyber catastrophe just as seriously as they would take the risk of a natural catastrophe (such as a deadly typhoon).
And yet, the region is dangerously underinsured for such a reality. Lloyd's has crunched the numbers and found that 92% of all losses resulting from a cyber attack would not be insured. Using the $110 billion cost of cyber attack as a baseline, Lloyd's has calculated that the "insurance gap" in the region is close to $101 billion. (Obviously, Lloyd's of London would be one of the insurance giants that firms in the region might call on to address these high levels of underinsurance.)
The Shen cyber attack scenario
In coming up with the $110 billion cost of cyber attack, the researchers modeled three different scenarios – a base-case scenario, a best-case scenario and a worst-case scenario. The $110 billion cost of cyber attack represents an “extreme” worst-case scenario. The researchers referred to this as the Shen attack scenario.
In such a scenario, a software virus would infect 15 major ports across 5 different Asian markets: Singapore, South Korea, China, Japan and Malaysia. A virus affecting the computer systems of cargo transport ships would spread to the computer networks of these major port cities. The malware would scramble cargo database logs and cause all sorts of chaos. This would lead to major disruptions in not just the delivery of cargo to the ports, but also disruptions to the supply chains that are waiting to take all this valuable cargo and send it around the world.
Given the global nature of modern supply chains, a disruption in the all-important Asia-Pacific market would be felt around the world, as a sort of ripple effect. According to the researchers, Singapore’s transport sector would take the biggest hit, followed by the transport sector in South Korea. However, effects would be felt as far away as Europe and North America, further adding to the overall cost of cyber attack.
Calculating the cost of cyber attack
The researchers broke the cost of the attack down by region, by economic sector, and by stage along the supply chain. For obvious reasons, the transportation, aviation and aerospace sectors would take the biggest hit ($28.2 billion), followed by the manufacturing sector ($23.6 billion) and the retail sector ($18.5 billion). In terms of supply chain losses, the report estimates that port operators would be responsible for half of all insurance claims, followed by businesses along the supply chain (21%) and logistics and cargo handling companies (16%).
The longer any cargo gets tied up in the Asia-Pacific port cities, the worse will be the impact on each of these sectors. To model this risk, the researchers considered the “indirect” cost of cyber attack, as measured by losses in productivity and losses in bilateral trade. In practical terms, it means that a manufacturer urgently waiting for a supply of parts would experience a slowdown (or perhaps shutdown) in operations. And retailers waiting for finished manufactured goods would also be put into wait-and-see mode, as they might need to consider alternate supply sources. In Asia, these indirect economic losses would be close to $27 billion. In Europe, these indirect losses would be close to $623 million, while in North America, losses would be in the neighborhood of $266 million.
Preparing for cyber attack
Of course, the Shen attack scenario outlined in the report only represents an estimated cost of cyber attack. To this date, such a massive attack has never taken place. However, as Lloyd’s of London points out, there are two critical factors that must be taken into account: the world’s aging shipping infrastructure (which is more vulnerable to attack than ever before) and the global complex supply chain (which vastly complicates the impact of any cyber attack).
Also, with more technology and more automation comes more risk. The increasing application of technology comes with both costs and benefits. Seafaring ships from a hundred years ago did not have to deal with computer systems onboard, but today’s ships do. And major shipping management companies now invest in expensive port management software. That makes them a potential weak point that is vulnerable to hackers.
There are a variety of steps that companies can take to prepare for a cyber attack. One of these, of course, is to purchase an insurance policy that insulates them from the cost of a cyber attack. (Since Lloyd's of London co-wrote the report, you can presume this is the outcome it prefers.) Another step, though, is to invest in new technologies and tools designed to protect assets from the risk of cyber attack in the first place.
Cyber risk, as Lloyd’s of London correctly points out, is a “critical and complex challenge.” As a result, those on the front lines of a potential cyber attack – such as port operators and logistics firms – should be taking steps now to protect themselves from a destructive cyber attack in the future.
Source: https://www.cpomagazine.com/cyber-security/cost-of-cyber-attack-on-asia-pacific-ports-could-reach-110-billion/
Australia, the summer of 2020 – China allegedly performs a calculated attack on primary Australian government resources. In response to this unprecedented provocation, the Australian prime minister does not send fighter jets. No bombs or missiles are launched. The people called to arms are not trained in hand-to-hand combat, but every one of them can tilt the balance in historic, decades-long conflicts: hackers.
It is no secret that in the 21st century, cyber threats are often as dangerous as bombs. A well-planned attack could shut down a city and cause massive financial losses, injuries, and even deaths. In the APAC region, digital threats reveal increasing diplomatic destabilization. Only several months ago, violent incidents on the Ladakh region border between China and India reportedly led to Chinese DDoS attacks on Indian sites. Similar incidents allegedly occurred in disputes between India and Nepal, North Korea, and Pakistan. Cyber violence could be the result of armed conflict, or it could very well lead to one. In the next years, we will see them playing a crucial part in conflicts already in place, as well as future points of friction. But why, exactly, did cyber-attacks become such a go-to modus operandi for countries and nations in recent years?
Both the best option and the last resort
“Cyber activity offers governments unique advantages over traditional warfare,” says Evan Davidson, VP of Asia Pacific Japan at SentinelOne. “From a contemporary point of view, the digital front is far superior to the historic battlefields of the 20th century. Using cyber-based reconnaissance, governments can collect valuable information faster, from the safety of their own country. The lives of their agents are at far less risk, too. The possibilities are incredibly varied, from military to trade secrets and intellectual property. China, India, North Korea, and other military superpowers have been employing those methods for years.”
“What pushed cyber warfare over the edge was how hard traditional intelligence and stealth activities were becoming,” adds Zohar Rozenberg, Chief Security Officer at Elron. “There are cameras everywhere these days – nothing you do goes unnoticed. The prevalence of biometric identification and face recognition renders the use of fake identities nearly impossible. Consider the assassination of Mahmoud al-Mabhouh in 2010 – allegedly done by one of the most prestigious covert organizations in the world. The agents were detected almost instantly, with basic security footage and rudimentary police technology. Ten years later, any teenager with a smartphone could expose international operations with one tap. In that sense, cyber-attacks carry a smaller risk for militaries.”
The real threat is not military warfare
The number of cyberattacks on governments is steadily growing. In Australia, the number of cybersecurity incidents reported to the ASD tripled from 2011 to 2014. Today, digital war fronts are still a playground for developed, wealthy countries with vast resources and advanced technologies. Proven statistics from countries such as China, Russia, and North Korea are not available, for obvious reasons, but based on reports they are well known for making offensive cyber activities part of their military strategy. This trend is already shifting in the APAC region and worldwide as more countries develop similar capabilities. However, the biggest unpredictable threats are posed not by governments but by the real underdogs.
“Digital attacks are unique in that private entities and organizations can carry them,” says Rozenberg. “All you really need is a computer and a network connection. This creates new subsections of threats that are hard to track and prevent. Guerilla groups, semi-formal organizations, or even individual aggressors can now cause real harm to anyone they have a grudge against.”
Davidson agrees, adding: “It may be someone with a score to settle or a political or ethnic agenda. However, in most cases, the motives are economical. Countries, companies, and individuals alike work to gain a financial advantage over their enemies or competitors. In the future, we are likely to see a great deal of offensive activity coming not from militaries, but from business and financial entities.
“This means that any person or company may be at risk. The good news? This is a risk you can mitigate. With sound security infrastructure and air-tight solutions, you can give your assets 360-degree protection against all attack threats.”
Source: https://www.cpomagazine.com/cyber-security/the-third-world-war-may-already-be-happening-online/
What is a zero-day?
- May 23, 2019
- Posted by: Kerry Tomlinson, Archer News
We answer your security question, “What is a zero-day?”
Here’s a hint: ‘zero’ is how much notice you get that this attack is coming.
Doug locks his house to keep burglars out.
He knows they can jimmy his front door or break a window, so he installed a security alarm and cameras.
But he doesn’t know that there’s a hidden door in his basement, behind an old dresser.
And that’s how the burglar gets in.
That is like a zero-day in the digital world, a surprise security hole that people now have to patch — often, when it’s too late.
Many Secret Doors
Unlike Doug’s basement, your digital house is a catacomb of unexplored doors and tunnels.
And cyber crooks are busy unlocking and infiltrating.
Luckily, researchers are busy too, trying to find the zero-days before the bad guys do.
Last week, reports said attackers used a zero-day flaw in the messaging service WhatsApp to spy on people.
The attackers discovered that all they had to do was make a call and they could put spyware on your phone.
In another case, attackers could use a zero-day flaw in TP-Link smart home gear to take over your smart home from the inside out, according to the researcher's report.
On a larger scale, attackers used a zero-day flaw to get into the safety system at an industrial plant in the Middle East in 2017, giving them the opportunity to cause damage at the plant.
A number of companies fix the flaws right away when they learn about zero-days in their products, but some do not, leaving you vulnerable.
What can you do?
Keep up good security habits so they’re less likely to affect you.
If you make your passwords strong and use two-factor authentication, you’re less likely to get hurt.
Use a password manager to keep track of those long, strong passwords.
And before you buy a smart device, do research to see how the maker responds to zero-days.
Do they patch them up, or do they leave you hanging?
For example, WhatsApp said it fixed the spy vulnerability and asked people to update their apps.
But the researcher who found the TP-Link security hole said TP-Link did not respond when he told them about the problem.
Source: https://archerint.com/what-is-a-zero-day/
Unfortunately, cybercriminals are continuing their rampage across America with no signs of slowing down. In fact, the FBI’s Internet Crime Complaint Center (IC3) received a record number of complaints throughout 2021. According to their official 2021 report, the IC3 logged 847,376 complaints from the American public, and calculated losses exceeding $6.9 billion. Ransomware, business e-mail compromise (BEC) schemes, and the criminal use of cryptocurrency rank as the top incidents reported.
Of course, the exponential increase of ransomware attacks is a very worrying trend for US business owners and the American public at large. Just in the past five years, 2.76 million complaints have been filed with the IC3 and $18.7 billion has been lost to cybercriminals.
Source: FBI Internet Crime Report 2021
So, what can companies do to protect themselves against this remarkable threat? Encryption is just one of the many safeguards a business should use in the face of inevitable ransomware attacks. Although no single solution can completely protect you from an attack, data encryption is a key component to creating a comprehensive data protection strategy. Data encryption software and ransomware data recovery services are critical to developing security protocols that prevent malicious parties from taking control of your sensitive data.
But how secure is encryption? How do cyber criminals use encryption against you? And how can ransomware protection as a service (RPaaS™) provide the comprehensive protection you need?
Can Ransomware Attack Encrypted Files?
In short, yes. Ransomware can technically attack anything, at any time. However, encrypting sensitive files helps prevent attackers from gaining access to that information. It also prevents an attacker from using your information, should your files fall into their hands during a breach. So, how does encryption work to protect your data? Here’s a simple breakdown:
- Application Whitelisting: Choosing a data encryption solution that includes application whitelisting blocks malware from entering your database by specifying which files and software applications are allowed to perform specific tasks.
- Access Control: In addition to application whitelisting, access control allows you to define who can access what information and limits what each user can do once that information is accessed.
- Encryption Keys: Encryption keys are created with algorithms that allow them to scramble and unscramble your data and files. Once your information is encrypted using the above methods, an encryption key is applied, making the information useless to any cybercriminal.
Lastly, we should note that encryption alone cannot secure your data. Any business or individual should follow all data security best practices in order to maintain a strong defense against cybercriminals.
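As a minimal illustration of the encryption-key step (a sketch, not a production setup: real deployments pair this with key management, access control, and whitelisting as described above), here is how encrypted data becomes useless to a thief without the key, using Python's widely used cryptography package:

```python
# Symmetric (Fernet, AES-based) encryption from the "cryptography" package:
# without the key, exfiltrated ciphertext is just opaque bytes.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store this in a key management
fernet = Fernet(key)          # system, never alongside the encrypted data

plaintext = b"customer-records.csv contents..."
ciphertext = fernet.encrypt(plaintext)   # what an attacker would steal
print(ciphertext[:40])                   # unreadable without the key

assert fernet.decrypt(ciphertext) == plaintext  # recovery needs the key
```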
How Do Cybercriminals Use Encryption?
Unfortunately, encryption is a two-way street, and cybercriminals can use it to their advantage. So, how does ransomware work? Cybercriminals often use encryption to conceal stolen information from their victims and to launch attacks. These attackers often use encrypted digital channels to infect their victims' machines. These channels include:
- Compromised sites
- Phishing pages
- Malvertising attempts
Once the criminal has control of their victim’s machine, they will then encrypt the stolen files and data, and hold that information for ransom. But what can be done once this happens?
Can Ransomware Encrypted Files Be Recovered?
Sadly, some forms of ransomware are undecryptable, making that information extremely difficult to recover. Using a ransomware data recovery or decryption tool, however, most ransomware-encrypted files can be recovered. The first step in this process is to identify the type of ransomware that has hold of your data; you can do this by comparing your ransom note to other examples on the internet. After you have identified the type of ransomware affecting your device, search for a decryption tool compatible with it. A word of caution, though: some sites claiming to offer free data recovery or decryption tools are actually malicious and can infect your computer.
Ransomware Protection as a Service: A Comprehensive Solution
Handling a ransomware attack and retrieving your information is a very tedious, difficult, and often distressing undertaking. That’s why we created the industry’s first and only holistic approach to ransomware threats—Ransomware Protection as a Service™ (RPaaS™). InterVision’s RPaaS solution delivers end-to-end cyber attack protection with:
- Layering hardened endpoint protection and SIEM
- Immutable backups
- Secured cloud recovery
- Support from certified security and disaster recovery experts who monitor and respond 24x7x365
Each part of our solution works together to prevent attackers from accessing your information, and fully restore your data and operations in the rare event of a breach. Attacks will happen, and InterVision is here to make sure you’re prepared. Learn more about how our RPaaS™ solution can protect you here.
Source: https://intervision.com/blog-does-encryption-prevent-ransomware/
What is a credit score?
A credit score is a number that depicts a consumer’s creditworthiness. It is the most important measure of a person’s financial health and is therefore, a widely used indicator worldwide.
Essentially, the better a person’s credit score is, the easier it is for him/her to get credit from various financial institutions. It also determines the interest rate available at the time of borrowing. Therefore, maintaining a healthy credit score really eases the path to financial freedom for an individual.
With an attempt to improve one’s credit score, this blog discusses simple yet effective methods of doing so.
Credit Bureaus and Credit Rating Models:
It is generally assumed that there is just one standard credit score, but this is not the case. Different countries use different credit scoring models to gauge creditworthiness. Even though the fundamentals governing these credit scores are the same, the relative weights of those fundamentals and the scales used differentiate one credit score from another.
Organizations called credit bureaus are responsible for compiling the credit reports and credit scores that lenders eventually use. In the United States, the three bureaus of national significance are Equifax, Experian and TransUnion.
These bureaus use various credit scoring models, of which the most commonly used and widely accepted is the FICO Score, developed by Fair Isaac Corporation as far back as 1956. It was, however, only made available to consumers in 1989. The FICO Score is a three-digit number ranging from 300 to 850. Typically, a score in the range of 670 to 739 is considered good, helping secure loans easily and at suitable interest rates.
Another widely used scoring model is the VantageScore, created by the three major credit bureaus in 2006 as an alternative to the FICO Score to better address changes in behavioral trends and advances in data collection. The most current versions of VantageScore have the same range as the FICO Score (300 to 850).
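As a small worked example, here is a sketch that maps a score to the commonly published FICO bands. The article above only pins down the "good" band (670 to 739), so treat the remaining cut-offs as illustrative.

```python
# Maps a FICO-style score (300-850) to a rating band. Only the "good" band
# is stated in the text above; the other cut-offs follow commonly published
# FICO ranges and should be treated as illustrative.
def score_band(score: int) -> str:
    if not 300 <= score <= 850:
        raise ValueError("FICO scores range from 300 to 850")
    if score >= 800:
        return "exceptional"
    if score >= 740:
        return "very good"
    if score >= 670:
        return "good"
    if score >= 580:
        return "fair"
    return "poor"

print(score_band(705))  # good
```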
Factors determining credit score:
Regardless of the credit bureau and the credit rating systems used by these bureaus, there are certain intrinsic factors that determine the credit score of an individual consumer. These factors need to be understood in order to understand the various methods that can be employed to improve one’s credit score.
- Payment history: This considers whether past payments were made on time.
- Amounts owed: This measures how much debt an individual is carrying relative to his/her credit limits.
- Length of credit history: This measures the duration for which the individual has been handling credit.
- New credit: This measures how frequently the individual applies for new credit.
- Credit mix: This measures a consumer's ability to handle different types of credit, including credit cards, loans, etc.
5 Ways to improve credit score:
To improve one's credit rating, one can systematically follow the methods below. Each seeks to improve one of the factors listed above, giving a consumer an edge over other borrowers and a better credit experience over the long term.
Build a Credit File:
The first and most obvious way of improving a credit score is to build a credit file early. It is essential for a consumer to open credit accounts from an early age in order to build a reliable credit profile over time.
Opening accounts in one’s name today is what lays down a good track record as a borrower in the future, and it is the first step towards building a good credit score as well.
Once an account is opened with a lending institution, the institution reports it to all the major credit bureaus.
(Suggested reading: Types of trading)
Make Timely Payments:
Being disciplined and punctual with the repayment of outstanding debt is one of the most important methods of steadily improving a credit rating and maintaining a good score.
As can be understood intuitively, a lending institution wants its payments made on time in order to maintain a healthy balance sheet. No institution wants delays in its payments, since it loses out on the time value of money, that is, the value the money would have generated had it been returned on time.
Because late payments inconvenience the lending institution, the borrower who delays payment suffers monetary penalties along with a downgrade of their credit rating over time.
An individual looking for a decent credit score needs to put measures in place to remind them to make timely EMI payments and form a habit of doing so. These measures can include notification systems or automatic payment systems.
Payments that are at least 30 days late can usually be reported to the credit bureaus and can end up hurting a consumer’s credit scores.
Keep Track and Catch Up on Past-Due Accounts:
Another seemingly obvious yet effective method of reaching a good credit score is to keep punctual checks on past-due accounts. These could include past credit cards or loans, including house loans, vehicle loans, etc.
While a late payment can remain on a person’s credit profile for as long as seven years, it accrues late payment charges over time and puts a dent in the individual’s credit history and, correspondingly, their credit score.
So, if for some reason payments have been delayed, the reason should be conveyed to the lending institution at the earliest, and the combined payment, including the initial amount and the late fee, should be cleared as soon as possible.
Additionally, old credit cards should be reviewed regularly, and a card should be discontinued only if its bills can no longer be paid, after settling the remaining dues. It is common practice for people to switch between credit cards and then discard the previous ones. While this might seem enticing in the short term, it has a detrimental effect on the credit rating over a longer period and, therefore, on the long-term financial goals of an individual.
(Must read: Basic principles of trading)
Customize Credit Limit:
A consumer’s credit utilization rate, or credit utilization ratio, is the ratio of the revolving credit currently in use to the total revolving credit available to the consumer.
The credit utilization rate has a significant impact on a person’s credit score. In general, the lower the credit utilization ratio, the better the credit score.
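For example, here is a minimal sketch of how the ratio is computed, using made-up balances and limits:

balances = [450, 1200, 350]    # hypothetical balances on three cards
limits = [2000, 5000, 3000]    # the corresponding credit limits

utilization = sum(balances) / sum(limits)    # 2000 / 10000 = 0.20
print(f"Credit utilization: {utilization:.0%}")    # -> 20%, below the commonly cited 30% guideline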
Utilizing credit to the fullest has a negative impact on the credit score, since it fails to provide the lending institution a buffer and increases the amount of loss if the borrower defaults on his/her payments.
A good practice is to customize credit limits based on expenses after discussing with the lender.
Limit new credit applications and avoid taking on too much debt at a time:
The number of accounts opened in a particular period of time and the number of loans taken out in a fixed period should be kept to a minimum to improve a credit score.
An individual should repay one loan before taking another if keeping a good credit score is a priority. This is primarily because taking multiple loans at once makes it evident that the person is in an unforgiving cycle of insufficient funds, hinting at a continuation of the behaviour in the future as well. As a result, his/her credit score may fall.
Also, each loan application leads to a credit inquiry, and several inquiries together can have a compounding negative effect on credit scores. The inquiry aspect is often overlooked when constantly applying for loans.
(Recommended blog: Commonly used technical indicators)
The Bottom Line
This blog discusses the 5 key ways to improve an individual’s credit rating. Almost all of the 5 methods discussed are intuitive in nature and do not require any sort of deep financial understanding.
The key aspect, however, remains the implementation of these methods. Reaching, and more importantly maintaining, a decent credit score is a slow and steady process. It does not happen overnight and, therefore, the sooner the journey begins, the better it is.
The journey of improving credit scores involves a consistent approach rather than a smart one and can, therefore, easily be adopted by anyone.
(Also read: Option trading strategies for beginners)
It needs to be understood that lending institutions need creditworthy borrowers as much as borrowers need trustworthy lending institutions for their financial requirements. The process therefore works both ways and sometimes takes longer than expected, but success is guaranteed with small steps and consistent efforts.
|
<urn:uuid:65ee7c9d-3b3a-44f7-b87f-03cc772303b8>
|
CC-MAIN-2022-40
|
https://www.analyticssteps.com/blogs/5-ways-improve-credit-score
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00658.warc.gz
|
en
| 0.951869 | 1,825 | 3.21875 | 3 |
The following is adapted from Fire Doesn’t Innovate.
For as long as people have had possessions, other people have tried to steal them.
Up until recently, if someone wanted to steal something from you, they had to physically take it from you. In business, theft meant someone stole boxes off your company’s delivery truck while the driver was distracted, or an insider embezzled money by altering financial records, or a cashier stuffed cash in their pocket while no one was looking.
Times have changed. Criminals don’t need access to your delivery trucks to steal from you anymore. They can now steal your digital assets over the Internet. Digital assets have completely changed the nature of possession and theft as we know it.
Imagine someone breaks into your building and steals a physical file out of your office, which is full of bank statements and other sensitive information. Assuming you don’t have any copies, you’ve now lost that information. However, if someone steals a digital copy of your sensitive data, you still have your data, but someone else does too.
What that means is that rather than being stolen once, your data can be stolen many times. Once someone steals a copy of your data, that copy can be copied repeatedly with all of the fidelity of the original. Instead of being a photocopy of a photocopy, which becomes blurry and difficult to read, a copied digital asset is an exact duplicate.
This is a completely new paradigm of theft. Instead of being stolen once, your assets can be stolen and traded an infinite number of times. If we’re wondering what kind of digital assets cybercriminals might be after, here’s a list of popular targets:
- System administrator accounts
- Cash and investments
- Payroll data
- Credit card data
- Electronic health records
- Unpublished financial results
- Business intelligence
- Business strategy
- E-commerce systems
- Industrial control systems (such as air heating/cooling systems)
- Building video surveillance systems
What Happens to Your Digital Assets?
Once your sensitive information is stolen, it’s often traded on the Dark Web, which is a portion of the Internet that is unreachable through conventional means such as a search engine or common hyperlinks. You have to take deliberate actions to get to it.
Imagine a dark alley where illicit goods and services are traded away from the prying eyes of authorities. That’s the physical version of the Dark Web. Unlike the dark alley, the Dark Web is scaled globally and has millions of participants.
To access the Dark Web, a person must not only enter a specific web address manually, but also use a special piece of technology.
The same way you use a browser, such as Google Chrome or Firefox, to access the Internet, people use a special browser to access the Dark Web. The TOR Browser uses the TOR network, which is short for The Onion Router network.
Interestingly, TOR was originally a creation of the US government, the same people who brought us the Internet. They made TOR as a means to place confidential information on the Internet while still making it restricted to the public.
However innocuous its original intentions, the TOR protocol has been hijacked for more sinister purposes. Anyone can access the TOR network, meaning anyone can access the back alley of the Internet: the Dark Web. People sell everything on the Dark Web, from stolen credit card information and company payroll information to weapons, drugs, gambling, and illicit personal services such as murder for hire.
Don’t Be Scared – Be Prepared
Most companies are woefully unprepared for a cyberattack. Executives have no way of knowing when a foreign government or lone cybercriminal will release a cyberweapon.
Therefore, you have to prepare for the possibility of attack the same way you prepare your business for other unexpected events, such as hurricanes and earthquakes.
You can’t predict when a natural disaster will strike, nor should you live in fear of their occurrence. Your best practice is to be proactive in creating a cyber risk management program so that when disaster strikes, your company is still in business.
The US government recognizes that the Internet is becoming increasingly dangerous for everyone who uses it due to the activities of organized crime and foreign nation-states. They also recognize that they can’t be everywhere to protect everyone, so they created the NIST Framework that any person or business can use to be more prepared for cyberattacks.
The NIST Framework is a great starting place if your business has never thought seriously about mitigating the threat of cyberattacks. Even though this problem speaks to technological issues, don’t forget that cybersecurity is a business issue.
To learn more about the NIST Framework, visit: nist.gov/cyberframework
For more advice on securing your digital assets, you can find Fire Doesn’t Innovate on Amazon.
Kip Boyle is founder and CEO of Cyber Risk Opportunities, whose mission is to enable executives to become more proficient cyber risk managers. His customers have included the U.S. Federal Reserve Bank, Boeing, Visa, Intuit, Mitsubishi, DuPont, and many others. A cybersecurity expert since 1992, he was previously the director of wide area network security for the Air Force’s F-22 Raptor program and a senior consultant for Stanford Research Institute (SRI).
|
<urn:uuid:8221de26-1de6-4d50-b184-bfb49f99b2e6>
|
CC-MAIN-2022-40
|
https://www.cyberriskopportunities.com/dont-let-your-companys-digital-assets-end-up-on-the-dark-web/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00658.warc.gz
|
en
| 0.946596 | 1,114 | 2.578125 | 3 |
Common channel signaling (CCS) is signaling in which a group of voice-and-data channels share a separate channel that is used only for control signals. This arrangement is an alternative to channel associated signaling (CAS), in which control signals, such as those for synchronizing and bounding frames, are carried in the same channels as voice and data signals.
For example, in the public switched telephone network (PSTN) one channel of a communications link is typically used for the sole purpose of carrying signaling for establishment and tear down of telephone calls. The remaining channels are used entirely for the transmission of voice data. In most cases, a single 64kbit/s channel is sufficient to handle the call setup and call clear-down traffic for numerous voice and data channels.
The logical alternative to CCS is channel-associated signaling (CAS), in which each bearer channel has a signaling channel dedicated to it.
|
<urn:uuid:7829cde3-a803-404d-9bc8-92ec53260938>
|
CC-MAIN-2022-40
|
https://www.dialogic.com/glossary/common-channel-signaling-ccs
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00658.warc.gz
|
en
| 0.952042 | 187 | 2.765625 | 3 |
In my previous post I explained what MPLS is and how it works from a high level perspective. In this post I will explain MPLS label operations and how labeled packets are processed in MPLS networks.
When a labeled packet is received the label value at the top of the stack is examined to determine two things:
- The next hop and the exit interface to which the packet is to be forwarded.
- The operation to be performed by the LSR on the label stack before forwarding the packet.
Listed below are the operations performed by the LSR on the MPLS label stack of the packet:
Push operation: adds a new label to the IP packet or to the MPLS label stack of the packet. The push operation is commonly done by the ingress router except in some traffic engineering scenarios.
Swap operation: the topmost label is swapped for another one before the packet is switched to the next downstream LSR. This is commonly done by intermediate LSRs in the provider network.
Pop operation: removes the topmost label from the label stack to prepare the packet for its final destination. This is commonly done by the egress router, or by the router preceding the egress router as Penultimate Hop Popping (PHP).
Penultimate hop popping is an operation performed by the LSR immediately preceding the egress Label Edge Router (LER). It removes the topmost label of the MPLS packet to spare the LER the overhead of a double lookup (one label lookup plus one IP lookup).
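As an illustration, the three operations amount to simple manipulations of a stack. The sketch below is a toy Python model only; real labels are 20-bit values carried in a 32-bit shim header, not arbitrary integers:

label_stack = []    # top of stack = end of the list

def push(label):
    label_stack.append(label)    # ingress LER adds a label

def swap(label):
    label_stack[-1] = label      # intermediate LSR rewrites the topmost label

def pop():
    return label_stack.pop()     # egress LER (or the PHP router) removes the topmost label

push(100)    # packet enters the MPLS domain at the ingress LER
swap(200)    # a core LSR swaps the label per its forwarding table
pop()        # penultimate hop popping before the egress LER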
Have a look at the MPLS special Labels for more information about MPLS labels.
|
<urn:uuid:01fd984f-a031-44b1-9dab-d8c8b7737a07>
|
CC-MAIN-2022-40
|
https://www.networkers-online.com/blog/2010/03/mpls-label/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00658.warc.gz
|
en
| 0.932099 | 334 | 2.984375 | 3 |
A human firewall is a group of people in an organization who protect the computer system from cyber threats. This post will discuss what a human firewall means in cyber security and how to build one to ensure your cyber security.
What is a Firewall?
Before discussing what a human firewall means in cyber security, we’ll give you a basic idea of what a firewall is.
Generally, a firewall is a cyber security device designed to monitor incoming and outgoing traffic based on a predefined set of rules. The main aim of a firewall is to create a barrier between a trusted internal network and an untrusted external network in order to ensure data security.
A firewall is a first-line defense against cyber threats such as malware and viruses, and it can reduce cyber-attacks on your computer network.
Typically, firewalls come as software, hardware, or a combination of both. The best practice is to deploy both types in order to achieve the maximum possible protection. Each type of firewall serves the same purpose, but they differ technically in their functionality.
Examples of the most popular Next-Generation Firewalls are Cisco, Fortinet, Barracuda, Sophos, and Juniper. If you are on a home network then you can use Windows Firewall.
Tips to turn on Windows Firewall:
- Start→ Control Panel→ System and Security→ Windows Firewall.
- Click the Turn Windows Firewall On or off link in the left pane of the window.
- Select the Turn on Windows Firewall radio button for one or both of the network locations.
- Click OK.
Human Firewall Meaning in Security
What does “human firewall” mean? A human firewall is not a single person; it is a group of people in an organization who act as a human layer of protection. Human firewalls can identify the vulnerabilities of a system; they are educated, aware of cyber security, and familiar with social engineering attacks. As a result, they can protect the system and help ensure the cyber security of an organization.
We know that a firewall is a security device that protects the system from cyber-attacks. The human firewall does exactly the same, but with employees acting as the firewall. A human firewall ensures that data has not been breached or compromised.
The key point of the human firewall is as follows:
- A group of cyber awareness people.
- Capability to identify the weakness of a system.
- Knows the cyber threats such as phishing, and malware.
- Strengthen the technical skill of the human firewall.
- Stay updated about the latest security threats.
Basically, a human firewall protects against different types of cyber security threats such as phishing emails, malware, and phone scams. Most cyber-attackers send phishing emails to employees because the emails look legitimate and seem to come from a reputable organization. The attackers also send malicious code or files as email attachments; when such a file is downloaded, malware may be installed on the computer without the employee’s knowledge.
Human firewalls should always be aware of phishing emails (such as email phishing, spear phishing, and whaling attacks) and should know how to avoid them. If the employees of an organization are well trained and aware of these types of cyber threats, they will take the necessary action to stop them.
Why is the Human Firewall Important?
An organization very much needs a human firewall, because it secures the computer systems and network. Human firewalls can protect data from cyber-attacks and ensure that data will not be lost. Employees play a significant role in securing business operations because they have permission to access sensitive data.
They have the ability to accurately identify the security risks or weaknesses of an application and can report them to higher authorities or the security team to be solved. As a result, the business runs smoothly, and the chance of the business being brought down by cyber-attacks decreases.
When a new system is installed, there may be security holes or vulnerabilities, so there is a chance of a zero-day attack. In that case, the employees act as a firewall to detect the security holes and take the necessary action to stop cyber hackers.
Finally, it is urgent to arrange a cyber-awareness program and train all employees on how to access sensitive data safely and ensure cyber security. An educated human firewall helps strengthen your security system.
How to Build an Excellent Human Firewall?
An educated and strong human firewall is the first line of defense to protect a system. It is observed that many organizations spend huge amounts of money on cyber security tools but pay little attention to their employees’ security awareness.
There are different ways that data can be breached; human error is one of the major reasons. Is there a human firewall in your organization? If not, then immediately train your employees and prepare a security policy to strengthen your human firewall.
Here, we’ve suggested some important tips on how to build a strong human firewall.
- Create a Cyber Security Policy and Team
- Educate Your Employees
- Cyber Awareness Program
- Keep Human Firewall Engaged
- Use Cyber Security Tools
- Keep Updated about the Latest Security Threats
- Reward and Incentive to Human Firewall
1. Create a Cyber Security Policy and Team
The first step is to create a strong cyber-security policy to build a strong and successful human firewall in your organization. The security policy may include clear instructions on who can access a system, responsibilities of authorized users, data access levels, system recovery, and protection.
The policies should cover topics related to cyber threats and security, such as email security, social engineering attacks, password policies, and phishing scams. All the employees should follow the instructions of the security policy.
You have to prepare a cyber-security team; the team members should have the proper skills and experience to protect your organization from cyber-attacks.
2. Educate Your Employees
We all know that “education is the backbone of a nation.” Education is thus the key factor and foundation of the human firewall. If your organization handles sensitive data, then you must ensure that your employees are educated. Employees must have the proper skills to detect potential cyber-attacks.
3. Cyber Awareness Program
This is another important tip for developing a strong human firewall: arrange a cyber-security awareness program to empower your employees. The training program may include how to identify the weaknesses of a system, how to detect cyber threats, and how to handle or protect against cyber-attacks.
The awareness program should cover security threats such as phishing scams, social engineering attacks, device security, password security, and physical security. The training program should be organized on a regular basis and keep employees aware of the latest security threats.
4. Keep Human Firewall Engaged
An interesting way to keep employees engaged is to use real-world examples of previous attacks on a business, or simulated real-time cyber-attacks such as phishing tests. Phishing attacks are the most common attacks in the cyber world. A phishing test helps check whether employees are properly trained and aware, and whether they know how to protect themselves from phishing attacks.
5. Use Cyber Security Tools
If you have no cyber security tools in your data center or server room, then you should immediately purchase and install them. There are different types of cyber security tools, such as antivirus software, encryption tools, network monitoring devices, and web vulnerability scanning tools. After completing security training, human firewalls use these tools to fight against cyber threats, and they should be experts in using them.
6. Keep Updated about the Latest Security Threats
Cyber attackers are technically skilled and quick to adopt new technology. They keep changing their attack techniques to compromise your data. That’s why it is essential to keep employees updated about these techniques.
Human firewalls should be aware of the latest cyber security threats and phishing emails. Employees can get updates from online forums, newsletters, and websites, which strengthens the human firewall.
It is also suggested to create a strong cyber security culture in which regular security updates and malware alerts are shared and phishing tests are conducted, helping everyone stay updated.
7. Reward and Incentive to Human Firewall
This is the last step in building a strong human firewall. You can encourage and reward your employees for successful completion of training, cyber awareness, and participation in the human firewall to protect your system.
The incentive doesn’t have to be a huge amount of money; it can be part of the monthly salary, or it can be prizes or other awards. This encourages employees to stay committed and do a good job as a human firewall protecting the system.
Threats to the Human Firewall
A human firewall can be undermined by different types of cyber threats, such as phishing attacks and malware, as well as by human error and untrained employees.
Phishing is a type of social engineering attack that can exploit the human firewall to gain sensitive information. A phishing attack can happen in different ways, such as by sending an email to the victim. A phishing email may contain a malicious URL link, and if you click on the link you are redirected to a malicious website.
Learn more about Phishing Attacks.
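To illustrate the kind of automated check that can supplement, but never replace, employee awareness, here is a deliberately naive Python sketch that flags a few common phishing-URL red flags; the keyword list and thresholds are arbitrary examples:

import re

BAIT_WORDS = ("login", "verify", "update", "secure")    # arbitrary example keywords

def looks_suspicious(url: str) -> bool:
    host = url.split("//")[-1].split("/")[0]
    ip_host = re.match(r"\d{1,3}(\.\d{1,3}){3}$", host) is not None    # raw IP instead of a domain
    many_subdomains = host.count(".") > 3                              # unusually deep subdomains
    baited = any(word in url.lower() for word in BAIT_WORDS)           # credential-bait words
    return ip_host or many_subdomains or baited

print(looks_suspicious("http://192.168.10.5/secure-login"))    # True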
Malware is malicious and harmful software that has been designed to damage, disable, or gain unlawful access to a computer system without your consent. The malicious software can be installed through fraudulent email attachments, URLs, USB devices, social media platforms, and vulnerable websites.
The main cyber threat to human firewalls is human error. Phishing attacks and social engineering attacks are common attacks used by cyber hackers to exploit human error. Human error stems from a lack of cyber awareness, insufficient training, ignorance of phishing attacks, and carelessness.
Finally, a strong human firewall is a cyber-aware group of employees within an organization who can identify threats as well as stop data breaches. If you have no human firewall, you may start to build one today, although building a successful human firewall is not easy.
However, you have to select the right employees, train them on cyber security, and keep engaging them in security testing to create a strong human firewall. The steps we have mentioned will help you get started. In this post, we’ve discussed what a human firewall means, its elements, and the threats to it; we hope the article is helpful for you.
|
<urn:uuid:1efa9aa1-fd70-4eba-a9a7-848a9de1eb40>
|
CC-MAIN-2022-40
|
https://cyberthreatportal.com/human-firewall-meaning/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00658.warc.gz
|
en
| 0.93046 | 2,233 | 3.359375 | 3 |
STANFORD, Calif. — Hot Chips is an appropriate term for a conference on future trends in microprocessors. On a rare hot day here in Memorial Hall on the campus of Stanford University, even the air conditioners fail to counterbalance the heat from notebooks that adorn practically every lap in the auditorium.
Inside, the talk among engineers and computer scientists is around multi-core and all things multi-core. Intel and AMD have shifted their strategy from clocks to cores, and every demonstration, from graphics cards to research projects, showed off multi-core efforts as well.
The problem is that while the hardware engineers have made a monumental effort to build the multi-core machines, the applications have not come. That’s because parallel programming is a complicated science, one that’s driving even the impressive collection of PhDs at this show up a wall.
“A lot of it is compiler science that needs to be updated to make programming [multithreaded applications] easier, and it will happen,” Peter Glaskowsky, technology analyst for Envisioneering, told internetnews.com. “Multi-core is really good at a narrow class of applications. A lot of people are doing a lot of work so multi-core will benefit many kinds of applications.”
But just throwing cores at the problem won’t help without careful design, said Erik Lindholm, an Nvidia engineer and veteran of Silicon Graphics in his keynote speech. Lindholm was discussing the scalar design of Nvidia’s most recent video chip, the G80, which is found in the 8800 line of cards.
“You can’t build infinitely wider hardware, your scalability goes down,” he said. There must be balance between workload units. In the case of a video card, that means balancing the pixel processors, vertex engines and triangle animation. “You don’t want to emphasize one part of the shader and stall out another. That will cause bubbles in the pipeline.”
Nvidia (Quote) discussed its Compute Unified Device Architecture, or CUDA, a technology for writing applications in the C language (define) that utilize the computation power of the G80. The company has introduced a line of computers under the Tesla brand name.
The Tesla products are designed to aid in heavy computation projects, especially floating-point calculations, in science and medicine. The G80 can handle up to 12,288 threads and has 128 thread cores. CUDA is designed to address the threading problem by allowing a programmer to write multi-threaded applications with just a few lines of C code.
AMD followed with a demonstration of its HD 2900 video card, but stuck to promoting it as a graphics processor. “To us, whether you are playing video or doing 3D, it’s a form of decoding and decompression… so our view of the graphics chip is it’s a decoder and decompressor,” said Mike Mantor, a Fellow at AMD (Quote).
Intel (Quote) showed off its 80-core prototype, which was designed to be a network on a chip with teraflop performance, and running at under 100 watts. The caveat to this prototype is that it’s not compatible with x86 systems. Right now, it remains a lab experiment.
The chip uses a tile design for the cores, in an eight-by-ten grid. Each tile has a router connecting the core to an on-chip network that links all the cores together, rather than making them go through the frontside bus like its Core 2 and Xeon processors. Due to its advanced sleep technology, Intel estimates a two- to five-fold reduction in power leakage.
The many-core speeches continued with Madhu Saravana Sibi Govindan of the University of Texas at Austin, who discussed UT’s own multi-core project, TRIPS (The Tera-op, Reliable, Intelligently adaptive Processing System).
TRIPS uses a design known as EDGE, Explicit Data Graph Execution, which executes a stream of individual instructions as a block. Processors today function by executing instructions one at a time, very fast. EDGE attempts to run as many instructions as possible in one block.
TRIPS can execute up to 16 instructions per cycle, whereas the Intel Core 2 processor can only do 4. Because of its large blocks, a 366MHz prototype was able to flatten a Pentium 4 in some benchmarks, while being flattened in others. At this point, the processor and code for it are still in the development stages, and Govindan said maximum performance required hand coding, a skill not many people have acquired.
|
<urn:uuid:102cbb34-b4d7-4719-9fa3-79ebb625cd6b>
|
CC-MAIN-2022-40
|
https://www.datamation.com/applications/multi-core-the-cool-factor-at-hot-chips-conference/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00058.warc.gz
|
en
| 0.947 | 974 | 2.796875 | 3 |
If you’re in information security you’ve probably heard a lot about serialization bugs. They are becoming increasingly common, and I wanted to give a basic overview of how they work and why they’re an issue.
The parsing problem
So much of security comes down to parsing. It’s the primary reason we need input validation, and the reason that software like antivirus and network protocol analyzers can have so many security issues.
The job of a parser is to take input from somewhere else and run it through your own software. That should frighten you. It’s like a CDC employee using the ‘open and lick’ method to test petri dish samples.
Bottom line: If you’re going to parse something, you have to get intimate with it.
And that brings us to serialization.
Serialization is the process of capturing a data structure or an object’s state into a (serial) format that can be efficiently stored or transmitted for later consumption.
So you can take an object, capture its state, and then put it in memory, write it to disk, or send it over the network. Then at some point the object can be retrieved and consumed, restoring the object’s state.
A basic example of serialization might be to take the following array:
$array = array("a" => 1, "b" => 2, "c" => array("a" => 1, "b" => 2));
And to serialize it into this (the output of PHP's serialize() function):
a:3:{s:1:"a";i:1;s:1:"b";i:2;s:1:"c";a:2:{s:1:"a";i:1;s:1:"b";i:2;}}
At its core, serialization is a type of encoding.
So this brings us to the core issue: deserialization requires parsing.
In order to go from that serialized format to usable data, some software package needs to unpack that content, figure it out, and then consume it.
Unfortunately, this is precisely what parsers are so bad at. And doing it wrong can lead to all manner of security flaws, up to and including arbitrary code execution.
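Python's pickle module is a convenient way to see why. Here is a minimal sketch of the well-known unsafe-deserialization pattern, where simply loading attacker-controlled bytes executes a command (the echoed string is a harmless placeholder):

import os
import pickle

class Malicious:
    def __reduce__(self):
        # pickle records this callable and its arguments; loading the
        # payload calls os.system, i.e. arbitrary code execution.
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)    # the parser runs "echo pwned" just by deserializing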
- Parsing untrusted input is hard
- Serialization takes data and encodes it into opaque formats for transfer and storage
- To make use of that content, parsers must unpack and consume it
- It’s extremely hard to do this correctly, and if you do it wrong it could mean code execution
- Don’t deserialize untrusted data if you can avoid it
- If you can’t avoid it, just realize you’re asking your parsing software to lick some petri dishes labeled “SAMPLE UNKNOWN”, and explore your options for making it so you don’t have to do this anymore
This overall concept applies to most any language that uses serialization, but some languages (like Java) are in worse shape than others.
|
<urn:uuid:1d8185f4-c9bd-4ac1-bd55-dc16c6f892dc>
|
CC-MAIN-2022-40
|
https://danielmiessler.com/study/serialization-security-bugs-explained/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00058.warc.gz
|
en
| 0.891772 | 654 | 2.890625 | 3 |
A lot of problems can be solved without needing to modify the anneal time at all.
The anneal time is controlled by the annealing_time parameter.
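For example, here is a sketch assuming D-Wave's Ocean SDK and access to a QPU solver; the problem is a trivial two-variable Ising model:

from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_ising(
    h={"a": -1},            # linear bias on variable a
    J={("a", "b"): 1},      # coupling between a and b
    num_reads=100,
    annealing_time=100,     # in microseconds; the default is typically 20
)
print(sampleset.first.energy)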
But what are some scenarios that might benefit from longer anneal times? When do we want to change the anneal time?
The following are a few scenarios that might benefit from increasing, adjusting, or optimizing the anneal time.
Problems with a small minimum gap will benefit from longer anneal times.
This gap refers to the difference in energies between different output variable states, namely the smallest difference in energy between two states.
If you are trying to generate an output sample that matches some distribution (such as a Boltzmann distribution), the distribution may be better with longer anneal times.
Different lengths of anneal time produce different distributions in general.
For both Quantum and Classical Annealing, the optimal anneal time is not guaranteed to match the default anneal time, so it can be beneficial to optimize this value for all problems submitted.
Often for harder or more complex problems the optimal anneal time is longer.
Highly connected problems, or worded another way, problems with more quadratic terms also benefit from longer anneal times.
Problems with many and/or long chains will also benefit from a longer anneal time.
In addition, when anneal offsets are involved, longer anneal times can be beneficial.
Often with longer chains, they will "freeze out" (choose and stick with a value) early in the anneal cycle.
The longer the chain the earlier the freeze out, so the later we would want to offset them in the anneal cycle.
Finding lower energy states can also benefit from longer anneal times.
When we compare spending the same total time on longer anneals versus a higher number of reads of the same problem, longer anneals are more effective at finding lower energies than taking more samples.
This is partly due to the overhead involved in writing problems to, and reading results from the QPU.
|
<urn:uuid:38e4c8c1-2278-4e60-8f40-662311713522>
|
CC-MAIN-2022-40
|
https://support.dwavesys.com/hc/en-us/articles/360046653474-When-Would-Results-Benefit-from-Having-a-Longer-Anneal-Time-
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00058.warc.gz
|
en
| 0.906547 | 445 | 2.5625 | 3 |
Recent decades have witnessed a rapid growth in technological advancement. From raising budget-tight efficiency to rendering smart sensing technology, IT industries not only contest for the top spot but also play a vital role in transforming the world as we perceive it. Artificial Intelligence (AI) is not an unusual term nowadays, but the importance bestowed upon it is somewhat undernourished. Coupling the technology with other recent technological advancements, AI can be optimized at even higher levels. Big data is another growing area whose full potential is still unknown. So far, IT has deduced numerous benefits of big data interplay, but those seem to be just a fraction of the lucrative repertoire big data has in its lap.
A new strategy, where Big Data is employed in AI, turns out to be a total game changer. Best in its class, Big Data, which uses customer and organization generated information to help firms make better decisions concerning efficiency and cost-effectiveness, meets one of the best technological feats that humankind has achieved—AI, and we can all guess the possible results.
AI can perform such complex tasks involving sensory recognition and decision-making that ordinarily require human intelligence. The advent of robotics has further introduced an autonomy that requires no human intervention in the implementation of those decisions. Such a technology, when paired with Big Data, can rise to unforeseen immensities that we cannot presently articulate. However, some of the primary outcomes of this merging are as follows:
Soaring Computational power
With continually emerging modern processors, millions of bits of information can be processed in a second or less. Additionally, graphics processors also contribute exponentially to the rising CPS (calculations per second) rate of processors. With the help of Big Data analytics, the processing of big volumes of data, and the rendering of rules for machine learning, on which AI will operate, is possible in real time.
Cost Effective and Highly Reliable Memory Devices
Memory and storage are the essential components of any computing machine, and their health determines the overall strength of the computer. Efficient storage and quick retrieval of data are critical for a device to work smartly, even more so for AI.
Memory devices such as Dynamic RAMs and flash memories are increasingly in demand for they make use of information merely for processing and not for storage. Data, thus, doesn’t become centralized in one computer but is instead accessed from the cloud itself. With the aid of Big Data, memories of more precise knowledge could be built, which will inevitably result in better surface realities. Additionally, the ready cloud which indulges into this large-scale computation is used to produce the AI knowledge space. With the better memory of information, indeed, higher AI learning will be imparted along with reduced costs.
Machine Learning From Non-Artificial Data
Big Data is proven to be a source of genuine business interaction. Big data accumulated for analytics provides a better grounding for prospects of actions and planning of the organizations. Earlier, AI was used to deduce learning from the samples fed in the storage of the machine, but with Big Data analytics it is now possible to provide machine learning with “real” data, which helps AI perform better and more accurately.
Improved Recognition Algorithms
With technological advancements, it has become possible to program AI machines in such a way that they can make sense of what we say to them almost as if they were humans. However, humans can produce an infinite set of sentences through combinations based upon underlying linguistic and perceptive analysis. Big Data is also capable of empowering AI in the same way as it can form algorithms that the human brain possesses. The voluminous data renders a broad base for building algorithmic analysis, which in turn enhances the quality of AI perception. Alexa, HomePod, Google Home, and other virtual assistants are good (if not the best) examples of improved recognition in AI.
Promoting Open-Source Programming Languages
In the past, due to cloud unavailability (and thereby unavailable Big Data), AI data models could use only simple programming languages. These scripting languages, such as Python or Ruby, were excellent for statistical data analysis, but with the help of Big Data, additional programming tools for data can be utilized.
With the introduction of new technological developments such as Big Data, the scope and future of AI have been soaring into new dimensions. By merging Big Data analytics and AI, we can create a highly efficient, reliable, and dependable AI-defined infrastructure.
|
<urn:uuid:9eab4807-4341-415f-aa7a-6c2f9529efe1>
|
CC-MAIN-2022-40
|
https://www.idexcel.com/blog/tag/big-data-and-machine-learning/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00058.warc.gz
|
en
| 0.934396 | 941 | 3.296875 | 3 |
Who Do You Trust?
I was reading an article today about blockchain in 5G and it got me thinking about how this type of blockchain system would actually work. We have written in the past about a new type of blockchain system that is optimized for systems at scale such as the IoT. This post is not about that type of explanation. Instead, the focus in this post is about the inherent complexity that is represented by the multitude of parties involved in a given 5G network. The article I read describes numerous use cases between companies, consumers, government and providers and how all of these parties have their own requirements and needs for any given transaction. The author correctly states that only a blockchain approach – although I will state that traditional blockchain will fail – can possibly handle these requirements. The question, however, is who is the Point of Authority (POA) for these transactions? Sure a business will take precedence over a consumer but how about between two businesses? Or a business and a provider? Who wins this battle between different government agencies – especially when at state and federal levels? When it comes down to trust, who wins?
The Nonobvious Choice
I would assert that the devices themselves become a distributed POA. I know, I know – devices?!? Hear me out. The reality is that modern devices are smart and not the dumb terminals from the 1990s. Properly protected and verified, these devices can become autonomous points of authority and handle any disagreements between any parties. At the end of the day, all transactions can be reduced to device communications. Those communications can be held as atomic units of any transaction and, collectively, can authenticate and even authorize any given transaction of record. This is, of course, the entire premise of the blockchain. It is interesting that the foundational definition of blockchain empowers device-based POA but the principals in charge do not use it to this end. Despite blockchain-based transactions, different parties still vie for control over the authority of those transactions. At the scale of a 5G network – wherein billions of transactions occur every second – this contention simply cannot survive. I will go further and suggest that the system that controls the authenticity of devices has to be separate from the system making the recordings of transactions but that is a discussion for another blog post. For now, I think it is clear that the only real option for POA has to be the smart devices involved in all of these transactions and that an independent standard has to be supported that empowers these devices accordingly.
|
<urn:uuid:c2ae1bd9-65a7-4ac4-a69a-59fdb6a32c7b>
|
CC-MAIN-2022-40
|
https://bearsystems.com/2019/04/01/the-blockchain-poa/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00058.warc.gz
|
en
| 0.961857 | 507 | 2.515625 | 3 |
There is no denying the statement that almost all businesses face the risks of cyber threats such as data breaches, malvertising, ransomware, and phishing attacks. Some studies show that startups and small-sized firms are particularly susceptible. Research in 2019 showed that approximately 43% of cyber-attacks are targeted at startups or small corporations. This is why IT security has become the biggest concern for many business owners.
Why are small businesses common targets for intruders? Experts say that the lack of security measures, resources, and improper planning can lead to the risk of cyber attacks that cause business loss and affect online credibility. It is crucial to have all the security controls in place to prevent your confidential business data from any authorized access.
If you want to ensure your IT security, consider how a managed service provider can help you take the following precautionary measures:
- Train Your Employees: Employees often make mistakes they’re unaware of because they lack basic knowledge of IT security. Employee education is the most important thing you can do to reduce the risk of data theft. Phishing emails invite employees to click malicious links and attachments that install a virus on your system. Ransomware can hold your computer hostage until you pay the ransom. A managed service provider can help you train your employees in basic IT security and advise them to never open suspicious attachments or files, to confirm the legitimacy of an information source before entering any personal details, and to avoid malicious attachments. It’s often said that the security of your business is only as strong as its weakest link.
- Use Unique & Strong Password Protection: Cybercriminals use several ways to obtain your passwords and enter your network. It is important to educate your employees about using unique, strong, and long passwords for all their accounts and networks. Advise them to use complex passwords with a variety of letters, numbers, and symbols (see the short sketch after this list). Passwords should be different for the accounts they use to access business files and documents.
- Protect Confidential Information: Whether its employees’ personally identifiable information (PII), business sales secrets, files related to your business model, financial data, or any other sensitive information, it is imperative to secure your confidential information. If these details go in the wrong hands, they can ruin your business, online reputation and customers. You should make sure that the management team keeps all the paper files and data storage devices at a safe place when not in use.
- Keep Your Operating System And Software Updated: Keeping your operating system and software updated with the latest version can add an extra layer of protection to your system and network. You should enable the firewall of your operating system or purchase any reliable firewall software. A managed service provider can help keep your Wi-Fi network encrypted and secured. When it comes to the work from home policy, make sure that the VPN (Virtual Private Network) is configured so that your employees can work remotely without any hassle.
Also read, COVID-19 Pandemic: 7 Pro Tips for Working from Home.
- Two-Factor Verification: This is one of the best ways to ensure that only you are accessing your account. In addition to your password, you will need another device, such as a mobile phone, on which a verification code is generated. This makes for a safe login, as you enter a code generated on your own device. Two-factor authentication is gaining popularity, with both Microsoft and Google providing mobile apps through which you can implement this highly secure authentication method.
- Data Monitoring: A managed service provider can keep an eye on what data is shared with third parties and ensure that no data is shared with people who are not associated with your business. Once your business data is breached, it might be hard to recover and can cost you a lot of the hard-earned money you have built up through your efforts, time, and dedication. If you or your team members share any confidential data within or outside the organization, make sure that the data is shared only with reliable parties.
- Avail Managed IT Services: Availing managed IT services is one of the best ways to monitor your systems and network proactively, resolve IT issues, and streamline all the IT operations with a great level of efficiency and expertise as compared to other solutions. Managed IT service providers know each and every aspect of IT security that benefits your business in many ways. You can get benefits from improved employee efficiency, reduced operational cost, and business growth.
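As promised above, here is a minimal Python sketch of the kind of rough password-strength check such a policy might encode; the length threshold and scoring bands are arbitrary examples, not a standard:

import re

def password_strength(pw: str) -> str:
    # One point each for length and for every character class present.
    score = sum([
        len(pw) >= 12,
        bool(re.search(r"[a-z]", pw)),
        bool(re.search(r"[A-Z]", pw)),
        bool(re.search(r"\d", pw)),
        bool(re.search(r"[^A-Za-z0-9]", pw)),
    ])
    return ("weak", "weak", "fair", "fair", "good", "strong")[score]

print(password_strength("Tr1cky-Passphrase!"))    # -> strong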
If you are seeking the best managed IT company in Texas, then it is time to choose CTG Tech. They are known for offering the best support to companies concerned about protecting their networks and systems from unauthorized access.
|
<urn:uuid:7b9f4843-a486-425a-8a5d-9d79855794ac>
|
CC-MAIN-2022-40
|
https://www.ctgmanagedit.com/secure-your-business-data-from-unauthorized-access/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00058.warc.gz
|
en
| 0.951709 | 957 | 2.515625 | 3 |
Containerization approaches bring advantages to the operation and maintenance of systems across physical compute resources. In the enterprise IT world, containers are leveraged to decouple computational workloads from the computing substrate on which they run. This allows, for example, compute hardware to be treated more as a utility, allowing deployment of multiple workloads across racks, scaling the hardware resources, such as processors, memory, and storage, as necessary to handle the workloads.
Multiplexing software loads across fixed hardware resources allows for more efficient use of the hardware investment and more robustness against hardware faults. It also enables easier maintenance and evolution of the software workloads themselves by allowing schemes where a centralized container provisioning or configuration can be updated and then pushed out to the execution environment. Containerization technologies as applied to traditional enterprise IT have been a key enabler of the modern cloud.
Typically, a container can be thought of as a lightweight virtual machine. A full virtual machine is capable of complete emulation of a target hardware layer on a host machine, including the CPU instruction set, peripheral set, etc. Virtual machines offer high portability but incur significant overhead due to simulating every aspect of the target machine within the host machine. This practically requires the host machine to be overspecified compared to the target machine being emulated. In many cases, such a level of emulation is not necessary.
Hypervisor-based virtualization requires fewer host resources than a full virtual machine. A hypervisor provides each execution environment a private view of the underlying hardware but is most often bound to the underlying host machine architecture, so it does gain some additional efficiency by constraining the hardware architecture to that of the host machine. In Industrial Internet of Things (IIoT) applications, the level of abstraction and isolation provided by full virtual machines or hypervisors is often not necessary.
Containers are not full virtual machines but, instead, operate under the constraints and architecture of the host machine. As such, containers are able to interface to the CPU architecture and low-level operating system (kernel) of a host machine, directly sharing the hardware and kernel resources of the host machine.
Containers depend on the low-level operating system of the host machine, but can encapsulate and provide portions of the higher-layer operating system (userspace). This allows an application within the container to be built and run against a private, fixed set of versioned operating system resources.
Most system administrators or UNIX application developers are probably familiar with the concept of “dependency hell” — making all of the system resources available in order for an application to run. It can often be a tricky and tedious exercise to maintain multiple application dependencies across all applications that are provisioned to run on the same server. Containers allow each application to bundle a controlled set of dependencies with the application so that these applications can independently have stable execution environments, partitioned and isolated from other containerized applications on the same server. Even application updates are often packaged and deployed as container updates for convenience. Thus, containers provide strong partitioning between application components on a target machine.
Since containers execute within the context of a container engine, it allows enhanced security policies and constraints to be imposed on an application by constraining the container engine itself. In a Linux hosted environment, for example, using mechanisms like 'cgroups,' process space isolation, file system controls, and kernel-level mandatory access controls, the container engine can be forcibly constrained to operate under those controls — e.g., to limit memory usage, CPU usage, access to specific parts of a file system, access to network resources, or to allow only certain a priori-approved subsets of kernel operations.
By applying those constraints through the mechanism of the container engine, such security controls are imposed, even if the enclosed application is unaware or uncooperative to participate in those controls. This is consistent with modern IT security best practices.
Containers, similar to applications, can be signed and authenticated such that the content is distributed to a compute node and can be authenticated and validated under strong cryptography by the container engine.
Modern containerization systems also include or interoperate with orchestration systems. Orchestration systems provide the means to dispatch containers to host machines and to determine which containers are to be dispatched to which hosts. Additionally, most orchestration systems allow applying configurations to parameterize containers and support management metrics/dashboards to monitor a system. When it comes to coordinating the deployment, provisioning, and operation of containers at scale, the capabilities provided by orchestration systems are necessary.
Containerization Approaches and Benefits
In terms of constructing and maintaining containers, some systems have more capabilities and features than others. A container can always be constructed by hand, but there are often tools and materials within an open source ecosystem that can aid the effort. A modern system will usually allow a container to be derived from a composition/library of reference containers. These libraries promote reuse, leverage the ecosystem, and allow for rapid development and deployment of a container.
Broadly, containerization schemes decouple the challenge of provisioning an application and its execution environment in a controlled manner to effectively utilize underlying hardware compute resources. Containers bring benefits of partitioning, security, and orchestration. The approach is cheaper than full virtual machines and still results in duplication of operating system/user space components.
Leveraging Containerization Approaches for the IIoT
Although containerization technologies have been primarily developed for traditional enterprise IT, there are clear parallels and advantages to adopt similar schemes for the IIoT.
One thing to consider is the type of IIoT host machine on which the container shall be deployed, which often entails use case, future-proofing, and ROI considerations. In some cases, this may be a high-value installation warranting highly capable compute resources at the edge node, similar to the servers deployed in an enterprise data center. In other cases, the requirements may justify a lower-cost and lesser-capable machine to be allocated at that edge node. In a fully instrumented IIoT deployment, there will likely be different tiers of assets that are associated with different classes of edge hardware. How to economically enable each class of assets at the associated scale can quickly become an important driver in the selection of edge node hardware and architecture.
Another thing to consider is how to leverage the partitioning properties of a container, i.e., sandboxing. Is there a single, monolithic container deployed at the edge that contains all of the application functionality? Or is it preferred to get a better and more robust posture by isolating application components into separate spaces/separate containers?
For example, by partitioning edge functionality among different containers, each container can be granted only the privileges it needs. An application component whose job is to periodically read, assess, and report alarms could be granted read-only access to an edge asset. An application intended to perform a software upgrade on the edge asset would need broader privileges, and different role-based security can be applied to it.
This architecture maps to a layered security approach in which permissions, strongly enforced and mapped to roles, can be orthogonally constrained around separate applications hosted on the same edge node. Further, separating application components leads to more robust implementations in which the behavior (or misbehavior) of one application does not directly affect another. It also makes it easy to add incremental enhancements to the edge device.
Interaction of application components with each other is an additional consideration. Since the applications are separated, an inter-process communication (IPC) or remote procedure call (RPC) scheme is needed for them to interact within the edge node. Such IPC/RPC schemes should also be authenticated and controlled so that only approved interactions are allowed. Note that typical containerization schemes do not provide these mechanisms out of the box.
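One lightweight pattern — shown here as a hedged sketch using only the Python standard library; the operation names and the way the shared key is provisioned are assumptions — is to sign each RPC payload with a shared secret so the receiving container can reject unapproved callers:

```python
import hmac, hashlib, json

SHARED_KEY = b"provisioned-out-of-band"  # e.g., injected by the orchestrator

def sign_request(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "mac": mac}

def verify_request(message: dict):
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    if hmac.compare_digest(expected, message["mac"]):
        return json.loads(message["body"])
    return None  # reject unauthenticated callers

# Caller container:
msg = sign_request({"op": "read_alarms", "asset": "pump-7"})
# Receiving container:
request = verify_request(msg)
print(request)
```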
Security features of containerization schemes are consistent with modern operating system design and modern security best practices. Imposing OS-level controls and policies limits, by design, the potential impact of a security breach on the system. The mechanisms for validating and authenticating the application components that run at the edge are likewise consistent with a modern security posture.
Orchestration schemes have clear value in the IIoT. A scheme for managing a fleet of IIoT edge nodes in a controlled, centralized manner — to manage, version, maintain, and push containerized application components to the edge — is essential.
Unlike in a traditional IT environment, one challenge here is grouping and coordinating the containers targeted at specific edge devices. Container workloads must be mapped to concrete, physical edge devices, since those devices are directly tied to field assets. The orchestration system cannot simply select any available hardware to run a container; it must be flexible enough to target specific edge nodes with ease.
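In Kubernetes terms — a hedged sketch; the label keys, node names, and image are illustrative assumptions — this is typically done by labeling each edge node and pinning the workload with a node selector. Here the manifest is built as a Python dict and rendered to YAML:

```python
import yaml  # pip install pyyaml

# Pin the alarm-reader workload to one concrete edge device by its labels.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "alarm-reader"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "alarm-reader"}},
        "template": {
            "metadata": {"labels": {"app": "alarm-reader"}},
            "spec": {
                # Assumes nodes were labeled beforehand, e.g.:
                #   kubectl label node edge-17 site=plant-a asset=pump-7
                "nodeSelector": {"site": "plant-a", "asset": "pump-7"},
                "containers": [{
                    "name": "alarm-reader",
                    "image": "example/alarm-reader:1.0",  # illustrative
                }],
            },
        },
    },
}
print(yaml.safe_dump(deployment, sort_keys=False))
```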
Orchestration schemes alone may not be sufficient to manage an IIoT system, as additional host-system resources must be managed and provisioned (network interfaces, VPNs, security credentials, cellular modems, etc.). These resources are usually managed directly by the host operating system and simply made available to the container. Traditional IIoT platforms encapsulate this function under device management, where the management of containers and applications hosted on the device may be one subset of unified device management.
Selection of an open source or closed source container engine also needs consideration, as there may be dependencies on third parties to maintain it. Ongoing support for third-party technologies, customization of applications within a container, evolving capabilities, and integration with different protocol stacks and clouds are other factors to weigh.
|
<urn:uuid:2fdfc3e3-4a60-4393-ac6f-b34a799a3450>
|
CC-MAIN-2022-40
|
https://www.missioncriticalmagazine.com/articles/92943-containerization-approaches-for-the-industrial-internet-of-things
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00058.warc.gz
|
en
| 0.914617 | 2,005 | 3.265625 | 3 |
Despite the relentless urging of the data quality gurus, not all data issues can be prevented or managed proactively. Data volumes are growing exponentially, the variety of data is becoming more diverse, and the ability to ensure quality diminishes as organizations fuse data from uncontrolled sources. Although data correction is not necessarily the preferred choice, many data sets will remain unusable unless data standardization and cleansing methods are applied.
This tutorial focuses on three fundamental algorithmic techniques used for data quality and cleansing. Data standardization transforms data values into their recognized standard forms. Identity resolution employs both deterministic and probabilistic methods to determine that two records refer to the same entity. Record linkage uses standardization and identity resolution to link sets of records so that the desired values can be selected for updating, cleansing, or correction. (A minimal sketch of how the three techniques fit together appears after the topic list below.)
Attendees will learn about:
- Using data standards
- How standardization works
- Deterministic identity resolution
- Probabilistic identity resolution
- Record linkage
- Data quality and master data management
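As a hedged, toy illustration of the three techniques working together — the standardization table, the field weights, and the 0.8 threshold are all illustrative assumptions, not part of the tutorial itself:

```python
from difflib import SequenceMatcher

# 1. Standardization: map common variants to a recognized standard form.
STANDARD_FORMS = {"st": "street", "st.": "street", "rd": "road", "bob": "robert"}

def standardize(field: str) -> str:
    return " ".join(STANDARD_FORMS.get(t, t) for t in field.lower().split())

# 2. Identity resolution, deterministic and probabilistic:
def deterministic_match(a: dict, b: dict) -> bool:
    # Exact match on a trusted identifier.
    return a.get("tax_id") is not None and a.get("tax_id") == b.get("tax_id")

def probabilistic_score(a: dict, b: dict) -> float:
    # Weighted string similarity across fields (weights are illustrative).
    weights = {"name": 0.6, "address": 0.4}
    return sum(
        w * SequenceMatcher(None, standardize(a[f]), standardize(b[f])).ratio()
        for f, w in weights.items()
    )

# 3. Record linkage: link records that resolve to the same entity.
r1 = {"name": "Bob Smith", "address": "12 Main St", "tax_id": None}
r2 = {"name": "Robert Smith", "address": "12 Main Street", "tax_id": None}
linked = deterministic_match(r1, r2) or probabilistic_score(r1, r2) > 0.8
print(linked)  # True: the records standardize to the same entity
```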
|
<urn:uuid:93b1d630-a5ca-4360-99ee-654af3408e8f>
|
CC-MAIN-2022-40
|
http://knowledge-integrity.com/blog2/training-2/data-quality-tools-standardization-identity-resolution-and-record-linkage-for-data-quality/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00058.warc.gz
|
en
| 0.896477 | 221 | 2.921875 | 3 |
Data breaches are devastating, but they require some effort and expertise for hackers to execute. On the other hand, data leaks are just as dangerous but can happen much more easily because they occur primarily due to human error. The effects of a data leak can be far-reaching, including financial loss, reputation damage, steep fines from regulatory authorities, and even criminal prosecution.
Here’s how you can prevent data leaks within your organization.
Step 1 – Conduct Data Inventory and Classification
If you don’t have a complete picture of all your data, it is difficult to tell what information needs to be protected. You may also be compelled to inventory your data by regulations such as the California Consumer Protection Act (CCPA) or General Data Protection Regulation (GDPR).
The best place to start your data inventory is to assess where your data comes from. Familiar sources of data may include:
- Internal business systems such as point-of-sale or accounting systems
- Third-party systems such as electronic data interchange (EDI)
- Cloud-hosted and cloud storage systems
- Internet of Things (IoT) devices such as smartphones, cameras, and sensors
- External data sources like public information, geolocation, and maps
It’s generally worth assigning a project manager to oversee this process. You can also recruit supervisors from each department who report to the project manager. The supervisors will act as the contact people to discuss their departments’ data.
Next, identify the specific type of data your company collects, stores, and transmits. Common types of data in this case include:
- Personally identifiable information (PII) – PII includes first and last names, email addresses, business or home addresses, bank account and credit card numbers, medical records, taxpayer identification numbers (TIN), and social security numbers. The information may also extend to date of birth, age, gender, phone numbers, and license numbers.
- Intellectual property – This information includes sensitive and proprietary business data, including human resource records, product designs, financial records, internal reports, and internal correspondence.
- Customer information – This data may include purchasing history, verification codes, shipping addresses, and phone numbers.
Finally, you can classify your business data based on its type and sensitivity. Typical data classification categories include:
- Public data – Public data is generally the lowest data classification. It includes information that is available to the public. This type of data may consist of press releases, job descriptions, and other data that can be freely used and redistributed without ramifications.
- Internal data – As the name suggests, internal data is meant strictly for internal company employees and personnel. Examples here include business plans and internal-only memos.
- Confidential data – Confidential data refers to data that requires specific clearance or authorization to access. Such information may include mergers and acquisitions documents, cardholder data, and Social Security numbers.
- Restricted data – Restricted data refers to data that may result in adverse effects if exposed. These consequences may include reputation damage, legal fines, or criminal charges. Examples of restricted data may include financial data, patient health data, and proprietary information.
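To make the inventory step concrete, here is a hedged toy sketch of scanning text for a few PII patterns. Real data discovery tools are far more thorough; these regexes are illustrative only and will produce false positives and misses:

```python
import re

# Toy detectors -- real PII discovery tools use far more robust patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_pii(text: str) -> dict:
    hits = {name: rx.findall(text) for name, rx in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan_for_pii(sample))  # flags the email address and the SSN
```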
Step 2 – Implement Access Control
Data inventory and classification make it easy to determine who should have access to which data. After all, you want to ensure that the right people have access to the correct data. You can start by assessing the access levels and controls in place currently, which makes it easier to identify flaws in the current process.
You can refer to your applicable regulations to determine the type of access control you need to implement. For instance, you'd need to restrict access to personal financial information under the Payment Card Industry Data Security Standard (PCI-DSS). Similarly, the Health Insurance Portability and Accountability Act (HIPAA) mandates access to personal health information on a need-to-know basis.
It is also good to automate account provisioning, de-provisioning, and password management. This strategy will help reduce the workload for employees tasked with account administration. Similarly, segregating access using roles also helps with access control. For instance, only developers and their direct managers should access the developer environment.
Other possibilities for improving access control include:
- Auditing all access
- Controlling remote access
- Monitoring access patterns for unusual activity
- Applying the principle of least access
- Centralizing access management
- Creating and defending chokepoints
- Updating access control rules regularly
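A hedged sketch of the role-based, least-privilege idea in code — the roles and permission strings here are invented for illustration:

```python
# Minimal RBAC sketch: each role carries only the permissions it needs.
ROLE_PERMISSIONS = {
    "analyst":     {"read:reports"},
    "developer":   {"read:dev-env", "write:dev-env"},
    "dev_manager": {"read:dev-env", "write:dev-env", "read:reports"},
}

def is_allowed(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "write:dev-env")
assert not is_allowed("analyst", "write:dev-env")  # least privilege holds
```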
Step 3 – Encrypt Your Data
Besides categorizing data and controlling who has access to it, data encryption adds an extra layer of security. Data encryption refers to converting information into ciphertext, a kind of encoding that requires a unique decryption key to convert it back into readable form. You can encrypt data during storage or transmission.
There are many data encryption solutions to help you protect data at scale. These tools use advanced encryption algorithms to encrypt data. They also help set access policies, manage keys and passwords, and deploy encryption.
Some of the features to prioritize when searching for an encryption solution include:
- Strong Encryption Standards – Advanced Encryption Standard (AES) with a 256-bit key is currently the strongest widely available encryption. It is also the industry standard.
- Encrypt Data in Transit – A good solution should use transport layer security (TLS) to encrypt data during transmission.
- Encrypt Data at Rest – Choose a tool that can encrypt data regardless of where it is stored, including employee workstations, databases, file servers, and the cloud.
- Key Management – This feature is critical for encryption management. Choose software that quickly generates encryption keys, gets them to the right hands, and destroys the keys in case of revoked access. The tool should also be able to back up the keys.
- Granular Control – A good encryption tool should allow you to encrypt specific data without requiring you to encrypt all data stores regardless of their information.
- Consistent Encryption – Insist that your solution keeps sensitive information encrypted even when it is modified, emailed, or copied.
You’ll also need to identify which data to encrypt. Again, you can refer to the relevant compliance standards for guidance. Furthermore, consider the worst-case scenario if a specific data set is compromised. This approach should clarify which data sets to prioritize for encryption.
Finally, don’t forget to encrypt your backups. This step is vital for covering all your bases. You may also be subjected to fines or criminal prosecution if your unencrypted data is leaked, subject to relevant compliance regulations and standards.
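As a toy illustration of encrypting data at rest, here is a hedged sketch using the widely used `cryptography` package. Note that Fernet uses AES-128-CBC with an HMAC rather than AES-256, so treat this as an illustration of the workflow, not a recommendation; key storage and rotation are out of scope here:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it in a key management system, not in code.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"cardholder data: 4111 1111 1111 1111")
print(ciphertext)                   # opaque token, safe to store at rest

plaintext = f.decrypt(ciphertext)   # requires the same key
print(plaintext)
```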
Step 4 – Secure All Endpoints
An endpoint refers to any remote point that can communicate with your business. This includes devices such as computers, laptops, and smartphones. So again, you can start by taking a complete inventory of all your endpoints.
Furnishing employee endpoints with antivirus software, automated application updates, and multi-factor authentication is a good start. Next, it's worth deleting from endpoint devices any unused or unnecessary employee or customer data discovered during the inventory.
Then, it’s worth enrolling the help of advanced endpoint protection software. For instance, the software may automatically use threat intelligence to detect and stop potential attacks.
Step 5 – Don’t Forget Third Parties
Third-party data leaks can be just as likely and devastating as internal leaks. Fortunately, there is much that you can do to avoid or minimize the possibility of a third-party data leak.
The first line of defense is thoroughly vetting any third parties you work with. Automated security questionnaires and external attack surface assessments are great ways to determine your vendors’ security posture.
It is equally prudent to have a centralized communication platform for your third-party vendors. Security teams and vendors can openly discuss any security gaps or concerns. This line of communication should also extend to the remediation phase. Vendors should report the steps to resolve security concerns within a specific time frame, such as 72 hours.
It is also worth incorporating cyber risk into vendor contracts. For instance, you may require that any third-party vendors that handle credit card information must maintain a security rating above 900. This strategy is perfect for holding vendors accountable for your organization’s data leak prevention program.
Lastly, monitor your vendors consistently for security risks. Vendors’ security controls are bound to change over time, and you need to be aware of their security posture throughout the contract term. For instance, keeping track of the vendor’s security rating over time is a good indicator of their commitment to cyber security.
Step 6 – Conduct Employee Security Awareness Training
Most data leaks happen as a result of human error. So all the data leak prevention measures won’t help if your employees put your data at risk. Employee security awareness training complements the efforts mentioned above perfectly. This training should include all employees connected to your company network.
Some of the topics to cover during security awareness training include:
- Physical security
- Desktop security
- Password security
- Wireless networks
- Information security
- Social engineering
- Removable media
- Incidence response
- Browser security
- Mobile security
- Business email compromise
Admittedly, it is challenging to create a cybersecurity course in-house from scratch. But there are plenty of companies that provide comprehensive training. Some great examples here include Curricula and Mimecast. Finally, remember that practical security awareness training is continuous.
Common Problems When Attempting to Prevent Data Leaks
Protecting company data should be a priority for any company. The steps we’ve outlined are perfect for getting you started in the right direction. However, some challenges are bound to come up during implementation. Understanding these challenges is crucial for helping you develop a robust strategy right out of the gate.
Below are some common challenges you’re likely to face when embarking on data leak prevention.
Lack of Executive Buy-In
Preventing data leaks within an organization sounds good on paper. But the process can be exhausting for everyone involved. You'll likely need to purchase new and expensive software. Implementing new data protection policies also requires a shift in the company culture. Furthermore, there'll probably be at least some disruption to workflow.
Securing executive buy-in can be challenging given these circumstances. There’s always the temptation to maintain the status quo, especially if your company hasn’t yet been the victim of a significant data breach.
The best strategy is to arm yourself with numbers and results. C-suites are always interested in metrics like financials and brand image. Show them how implementing your Data Loss Prevention (DLP) program will guarantee compliance, boost reputation, and save costs on potential breaches.
Similarly, highlight how your new access management policies will make data transfer to the right people more efficient and improve productivity within the workforce.
It is also necessary to quantify the risk of maintaining the status quo, especially when scaling the organization. Explain to the C-suite the actual cost of data leaks, with examples from similar organizations.
Finally, recruit the employees responsible for day-to-day workflows, such as mid-level managers, to your cause. It is much easier to get executive buy-in when the higher-ups see your peers already understand the value of your proposed changes.
Government Regulation and Legal Requirements
Data privacy concerns have led to stringent government and legal regulations such as the California Consumer Privacy Act (CCPA) and General Data Protection Regulation (GDPR). These regulations are well-meaning but present some challenges in how organizations handle data.
These regulations change often, can be difficult to understand, and are challenging to comply with when sending data across global networks. However, security tools with built-in compliance features help overcome these challenges.
It is also worth having a dedicated data controller or data protection officer (DPO). This individual is responsible for compliance with data protection principles. This position is crucial in large organizations that process large amounts of personal data.
Exponential Data Growth
Data continues to grow exponentially, which can put a strain on even the best data privacy policies. This growth can also increase the risk of data leaks and the costs associated with preventing the said leaks. The key to this challenge is efficient data management.
Cloud-based data archiving is a practical first step to managing data growth. This process involves identifying inactive data and moving it to separate long-term storage. This data may be retrieved at any time for business reasons but is separated from day-to-day information. Fortunately, data archiving systems can help to automate this process.
You can also take advantage of cloud-based storage consolidation. Here, the cloud service provider is responsible for consolidating, managing, and maintaining your data in the cloud. This frees up your IT department, reduces administrative complexity, and offers centralized and managed storage.
Lastly, take advantage of technology to store only the needed data. For example, various tools can help eliminate duplicate or irrelevant data.
Controlling the Cost of Data Leaks
Sometimes information may be leaked regardless of your best efforts. Therefore, it helps to have contingency measures against any eventuality.
Automated incident response is a reasonable precaution in case of a breach or leak. SIEM tools that incorporate artificial intelligence can help automate responses such as identifying potential leaks and alerting incident response and remediation teams.
Lastly, a cyber insurance protection policy can come in handy in the unfortunate event of a significant leak. It is a reactive approach, but it can help mitigate the financial impact of data leaks.
The policy may cover hiring forensic experts or acquiring additional resources to contain the incident. The payout can also help cover business losses such as repairing reputation damage or legal expenses.
|
<urn:uuid:f3e6a24f-1b92-43ca-a187-49aa4dc910ca>
|
CC-MAIN-2022-40
|
https://nira.com/how-to-prevent-data-leaks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00058.warc.gz
|
en
| 0.916559 | 2,845 | 2.640625 | 3 |
It’s no exaggeration to say that most people in the US with a computer connected to the internet have been affected by a cybersecurity incident in the last few years. Data loss costs businesses and individuals time, resources, and money, and the problem isn’t going anywhere. Hackers are constantly looking for new and sophisticated ways to steal data, and cyber crime is now a global problem.
The good news is that it’s possible to stay safe and protected online. Here are 7 top ways to protect your business and personal data.
You should always choose the most complex or unusual password you can–this makes it harder for a hacker, or password-stealing programs, to access sensitive data. Use a mixture of capital letters, numbers, and symbols where possible.
Password managers help you generate strong, obscure passwords for different accounts and devices. This is especially helpful for businesses entrusted with sensitive client data.
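A hedged sketch of generating a strong password with Python's standard library — the length and alphabet here are illustrative choices:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Characters drawn from a cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```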
Passwords are helpful, but two-factor authentication is even better. Once you input a password, you receive a code by text or by email. You then input this code to prove you are who you say you are. Two-factor authentication is a valuable extra layer of security, especially when you’re protecting business and client data.
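Under the hood, many authenticator apps use time-based one-time passwords (TOTP). A hedged sketch with the `pyotp` package — in a real enrollment the secret would be shared once via a QR code, not printed:

```python
import pyotp  # pip install pyotp

secret = pyotp.random_base32()   # shared once during enrollment
totp = pyotp.TOTP(secret)

code = totp.now()                # 6-digit code that rotates every 30 seconds
print(code)
print(totp.verify(code))         # True within the validity window
```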
Chances are you carry a portable device, such as a cellphone, everywhere. Although modern cell phones have impressive built-in security features, they’re not impenetrable. With more people working from home and using cell phones, iPads, and laptops to conduct business, end-user devices can present complex security challenges.
A good start is to carefully choose the apps you install and limit the access they have to your data. Keep all devices current with the latest software updates.
Encrypting data makes it much harder for hackers or unauthorized parties to access privileged information. When you encrypt data, you translate it into an unreadable code, and only someone with a decryption key can unscramble it. Disk encryption is another layer of security for data stored on a disk, including USB drives. Disk encryption software is readily available.
Whether you’re accessing the free WiFi in a coffee shop or using a public computer, unsecured public networks put your data privacy at risk. You can use a virtual private network, or VPN, to hide your personal information so that hackers can’t see your activity. You can also use incognito or private browsing modes to hide your browsing history.
Make sure you use a trustworthy VPN service provider: if you’re unsure, ask a reliable IT specialist for advice.
You can’t protect yourself or your business against cyberthreats if you don’t know what threats are out there. While hackers evolve all the time, it’s still possible to stay ahead of them. Keep up-to-date on the current cybersecurity risks and trends, and understand how they affect your business.
Your employees are vulnerable to hackers, too. Provide training to your employees on cybercrime and data protection so they don’t run into trouble online.
It’s not easy to stay on top of the latest data protection and privacy issues. That’s why it’s a great idea to hire IT specialists to help you devise a solution for your business needs.
Managed IT service providers, for example, offer all the benefits of an in-house IT team without the associated costs, and they’ll ensure your systems and data are secure. They’ll also assess your IT strategy and check to make sure it’s strong enough. IT service providers let you focus on running your business without fretting over cybersecurity.
Don’t miss out on the latest news from Entech. Submit your e-mail to subscribe to our monthly e-mail list.
|
<urn:uuid:d1468cb6-76e1-4acb-bbca-fad5455c77e4>
|
CC-MAIN-2022-40
|
https://www.entechus.com/blogs/7-best-practices-to-protect-your-data-online
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00058.warc.gz
|
en
| 0.903536 | 785 | 2.625 | 3 |
With so many of us relying on the internet in ways we simply haven’t before, it follows that a safer internet is more important than ever before too.
June marks Internet Safety Month, a time where we can look back at the past year and realize that the internet was more than just a coping mechanism during the pandemic, it evolved into a survival tool.
Our research published earlier this year showed how. It found that we relied heavily on the internet for our banking, personal finance, shopping, and even healthcare—not to mention the ways we worked, studied, and kept in touch with each other online during the pandemic. For millions of families globally, the internet was their connection to the rest of the world.
None of that would have been possible without a safer internet that we can trust. The truth is, part of creating a safer internet rests with us—the people who use it. When we take steps to protect ourselves and our families, we end up helping protect others as well. How we act online, how we secure our data and devices, how we take responsibility for our children, all of it affects others.
Here are just a few ways you can indeed make a safer internet for your family, and by extension, safer for others too:
1. Protect all your devices from hacks, attacks, and viruses
Start with the basics: get strong protection for your computers and laptops. And that means more than basic antivirus. Using a comprehensive suite of security software like McAfee® Total Protection can help defend your entire family from the latest threats and malware, make it safer to browse, help steer you clear of potential fraud, and look out for your privacy too.
Protecting your smartphones and tablets is a must nowadays as well. We’re using them to send money with payment apps. We’re doing our banking on them. And we’re using them as a “universal remote control” to do things like set the alarm, turn our lights on and off and even see who’s at the front door. Whether you’re an Android owner or iOS owner, get security software installed on your smartphones and tablets so you can protect all the things they access and control.
Another thing that comprehensive security software can do is create and store unique passwords for all your accounts and automatically use them as you surf, shop, and bank. Further, it can keep those passwords safe—unlike when they’re stored in an unprotected file on your computer, which can be subject to a hack or data loss—or sticky notes that can simply get lost.
2. Check your child’s credit (and yours too)
With stories of data breaches and identity theft making the news on a regular basis, there’s plenty of focus on the things we can do to protect ourselves from identity theft. However, children can be targets of identity theft as well. The reason is, they’re high-value targets for hackers. Their credit reports are clean, and it’s often years before parents become aware that their child’s identity was stolen, such as when the child enters adulthood and rents an apartment or applies for their first credit card.
One way you can spot and even prevent identity theft is by checking your child’s credit report. Doing so will uncover any inconsistencies or outright instances of fraud and put you on the path to set them straight. In the U.S., you can do this for free once a year. Just drop by the FTC website for details on your free credit report. And while you’re at it, you can go and do the same for yourself.
You can take your protection a step further by freezing your child’s credit. A freeze will prevent access to your child’s report and thus prevent any illicit activity. In the U.S., you’ll need to create a separate freeze with each of the three major credit reporting agencies (Equifax, Experian, and TransUnion). It’s free to do so, yet you’ll have to do a little legwork to prove that you’re indeed the child’s parent or guardian.
3. Smartphone safety for kids
Smartphone safety for kids is a blog topic in itself. Several topics, actually—such as when it’s the “right” time to get a child their first smartphone, how they can stay safe while using them, placing limits on their screen time, and so on.
Taking it from square one, make sure that all your smartphones are protected like we called out above—whether it’s yours or your child’s. From there, there are eight easy steps you can take to hack-proof your family’s smartphones, such as juicing up your passwords, making sure the apps on them are safe and setting your smartphone to automatic updates.
If you’re on the fence about getting your child their first smartphone, you’re certainly not alone. So many parents are drawn to the idea of being able to get in touch with their children easily, and even track their whereabouts, yet they’re concerned that a smartphone is indeed too much phone for younger children. They simply don’t want to expose their children to the broader internet just yet.
The good news is that there are plenty of smartphone alternatives for kids. Streamlined flip phones are still a fine option for parents and kids, as are cellular walkie-talkies and new lines of devices designed specifically with kids in mind.
And if you’re ready to make the jump, check out our tips for keeping your child safe when you purchase their first smartphone. From basic security and parental controls to keeping tabs on your child’s activity and your role in keeping them safe, this primer makes for good reading, and good sharing with other parents too, when you get serious about making that purchase.
4. Know the signs of cyberbullying
Cyberbullying is another broad and in-depth topic that we cover in our blogs quite often, and for good reason. Data from the Cyberbullying Research Center shows that an average of more than 27% of kids have experienced cyberbullying over the past 13 years. In 2019, that figure was as high as 36.5%. Without question, it’s a problem.
What exactly is cyberbullying? Stopbullying.gov defines it as:
“Cyberbullying is bullying that takes place over digital devices like cell phones, computers, and tablets. Cyberbullying can occur through SMS, Text, and apps, or online in social media, forums, or gaming where people can view, participate in, or share content. Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation.”
Part of the solution is knowing how to spot cyberbullying and likewise taking steps to minimize its impact if you see it happening to your child or someone else’s. The important thing is to act before serious damage sets in or even a criminal act can occur.
The painful truth is that someone's child is doing the bullying—and what could be more painful than finding out that child is yours? If you suspect this is happening, or have seen evidence that it is, act right away. Our article "Could Your Child (Gulp) Be the One Cyberbullying?" outlines ten steps you can take right away.
If you’ve taken steps to solve a situation involving cyberbullying and nothing has worked, know there are cyberbullying resources that can help. Likewise, don’t hesitate to contact your child’s school for assistance. Many schools have policies in place that address cyberbullying amongst their students, whether the activity occurred on campus or off.
5. Internet ethics
With all the emphasis on technology, it’s easy to forget that behind every attack on the internet, there’s a person. A safer internet relies on how we treat each other and how we carry ourselves on the internet (which can be quite different from how we carry ourselves in face-to-face interactions).
With that, National Internet Safety Month presents a fine opportunity to pause and consider how we're acting online. Verywell Family put together an article on internet etiquette for kids, which covers everything from the online version of "The Golden Rule" to ways you can steer clear of rudeness and drama.
Granted, we can’t control the behavior of others. Despite your best efforts, you or your children may find themselves targeted by poor or hurtful behavior online. For guidance on how to handle those situations, check out our article on internet trolls and how to handle them. There’s great advice in there for everyone in the family.
Internet safety begins with us
If we didn’t know it already, the past year proved that a safer internet isn’t a “nice to have.” It’s vital—a trusted resource we can’t do without. Take time this month to consider your part in that, what you can do to make your corner of the internet safer and a thriving place that everyone can enjoy.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
|
<urn:uuid:00dbb3ce-bb85-447c-9c37-9a0e222e5e02>
|
CC-MAIN-2022-40
|
https://www.mcafee.com/blogs/consumer/consumer-cyber-awareness/a-safer-internet-for-you-your-family-and-others-too/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00058.warc.gz
|
en
| 0.942995 | 1,965 | 2.765625 | 3 |
Table of Contents
- What is ransomware?
- Ransomware trends
- Ransomware prevention
- Ransomware detection
- Ransomware simulation
- Ransomware security terms
- How NetSPI can help
What is ransomware?
In this section, you learn what is ransomware, how it fuels criminal activity, how ransomware works, and how to stop ransomware.
Ransomware, a definition
Ransomware is a set of malware technologies, hacking techniques, and social engineering tactics that cybercriminals use to cause harm, breach data, and render data unusable. Ransomware adversaries hold the data hostage until a victim pays the ransom. Increasingly, they also threaten to leak stolen data.
Ransomware is a business model for cybercriminals. Victims pay ransomware adversaries for decryption keys through cryptocurrency, such as Bitcoin. Many victims pay a second ransom to get assurance that the threat actor won’t release stolen data.
How does ransomware fuel criminal activity?
Figure 1: The ransomware economic lifecycle fuels more criminal activity.
Ransomware fuels a criminal economy through five steps:
Step 1: Cybercriminals execute ransomware attacks.
Step 2: Attackers make money when they collect a ransom.
Step 3: Ransoms fund the purchase of new exploits, lists of vulnerable networks, and ransomware-as-a-service toolkits.
Step 4: Attackers use malware and exploits off-the-shelf or customize the tools to create ransomware variants and new techniques.
Step 5: Ransomware developers engage with attack partners who use the tools and techniques to perform the attacks.
How does ransomware work?
A ransomware attack follows a series of steps called a kill chain. Most ransomware attacks follow a variation of this ransomware kill chain: gain access, escalate privileges, target data, exfiltrate data, remove recovery capabilities, deploy ransomware, and get paid.
Tip: Attackers have options, so every ransomware attack is different. Defenders must prepare to detect and block the many choices available to a ransomware attacker, not a single path.
Figure 2: A ransomware kill chain traces the seven steps in a ransomware attack: access, escalate, target, exfiltrate, remove, deploy, and get paid.
Do antivirus and endpoint detection and response (EDR) tools stop ransomware?
Only about 20% of the tactics, techniques, and procedures (TTPs) used by ransomware attackers are identified out of the box by antivirus (AV), endpoint detection and response (EDR), and security information and event management (SIEM) tools. Because AV, EDR, and SIEM vendors focus on limiting false positives, many true positives are missed in Windows, Linux, and mainframe environments.
How to stop ransomware
Every step in the ransomware kill chain is an opportunity for defenders to detect and stop a ransomware attack—but you don’t need to achieve 100% detection at every step. Instead, if you can detect one or more malicious events present in most kill chains before the attackers meet their objective, then you can prevent ransomware attacks.
Tip: Detecting a ransomware attack earlier in the kill chain delivers more value, so prioritize detective controls that enable detection when an attacker accesses systems, escalates privileges, or targets data.
In this section, learn how ransomware attackers gain access, escalate privileges, target data, steal data, and deploy ransomware as well as the average ransomware payment.
Source: IST Ransomware Task Force Report
How do ransomware attackers gain access?
Ransomware attackers get into a network in many ways:
- Social engineering. Users unintentionally download and execute ransomware via malicious emails, PDFs, drive-by downloads, malvertising, forced downloads, and browser exploits.
- Unpatched exploits. Most ransomware attackers use exploits that have been around for years. An attacker can easily scan the internet for websites that haven’t patched a vulnerability for which the attacker has an exploit.
- Ransomware-as-a-Service (RaaS). Malicious software developers provide ready-made malware to criminal groups who already have access to environments or the ability break in.
- Logins without multi-factor authentication. Without MFA to stop them, attackers gain access to the same powerful tools used daily by IT administrators who manage corporate networks and IT resources. Administrators who access IT management interfaces—e.g., terminal services, virtual private networks (VPNs), and remote desktops—often use weak passwords and do not require MFA. Attackers guess the passwords easily, find them in open source code repositories, or collect them via phishing.
How do ransomware attackers escalate privileges?
Ransomware attackers work to exploit bugs, design flaws, and configuration oversights in an operating system or application to gain access to protected databases, file shares, and business sensitive data. They often use Server Message Block (SMB) exploits, weak passwords, and insecure Active Directory configurations to gain more privileges on systems and those of trusted partners.
Ransomware attackers may go after a subsidiary or service provider with weaker security controls and then ride the third-party trust relationship into your environment. Or vice versa: your organization may be used to spread ransomware to your customers and partners.
What data and resources do attackers want?
Ransomware attackers search the network and systems for valuable data and resources to target, such as:
- Non-public information
- Regulated data, such as personal healthcare data (HIPAA) and payment card information (PCI)
- Operational technologies in manufacturing, industrial control systems (ICS), and other critical infrastructure
- Hardware and software supply chains
- Cyber insurance policies that reveal the maximum payout
To find resources to target, ransomware attackers may follow a workflow like this:
- Perform Active Directory reconnaissance for all domain computers, SQL Server databases, and server message block shares.
- Attempt access to file and SQL servers with privileged accounts.
- Search for sensitive data patterns across file servers and SQL Server databases.
How and why do ransomware attackers exfiltrate data?
In addition to encrypting data and holding it hostage, ransomware attackers also upload valuable data to other systems on the internet. This enables the attacker to extort more money in exchange for a promise not to leak the exfiltrated data.
Rather than stealthily copying the data, ransomware attackers may upload it quickly to an attacker-controlled server, for example via FTP tunneled over SSH. They may exfiltrate the data in one large file or in parts using common protocols such as Server Message Block (SMB), Secure Shell (SSH), File Transfer Protocol (FTP), and HTTP/HTTPS.
Tip: You may be able to detect ransomware exfiltration by monitoring for large file uploads, excess bandwidth usage, or data loss prevention (DLP) via alternate protocols.
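A hedged toy sketch of one such signal — watching for sustained outbound byte-rate spikes with `psutil`. The threshold and interval are illustrative; production tooling would baseline per host and per protocol rather than use a fixed number:

```python
import time
import psutil  # pip install psutil

THRESHOLD_BYTES_PER_SEC = 50_000_000  # illustrative: ~50 MB/s sustained upload

last = psutil.net_io_counters().bytes_sent
while True:
    time.sleep(10)
    now = psutil.net_io_counters().bytes_sent
    rate = (now - last) / 10  # average outbound bytes/sec over the window
    if rate > THRESHOLD_BYTES_PER_SEC:
        print(f"ALERT: sustained upload at {rate:,.0f} B/s -- possible exfiltration")
    last = now
```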
How do attackers deploy ransomware?
Once they have encrypted and uploaded the data, many ransomware attackers remove the victim’s ability to recover independently. Often, ransomware families follow steps like this to deploy ransomware:
- Verify correct platform, language, and time zone.
- Disable or bypass detective security controls.
- Hunt and destroy or encrypt backups hosted in local and cloud networks as well as virtual machine snapshots.
- Target IT management systems that an administrator could use to recover from ransomware.
- Search for targeted file types, generate a unique set of encryption keys, and encrypt the target files, often with custom libraries.
- Remove system restore capabilities by killing processes and services, removing restore points, deleting volume shadow copies, and overwriting master boot records on local workstations and servers.
- Propagate the ransomware using worm-like self-propagation to network shares via server message block (SMB).
- Remove their own files, scripts, and tools.
- Leave payment instructions.
How much do ransomware victims pay?
Average ransomware payouts are on the rise as attackers target bigger companies, specific sectors, and markets with deeper pockets. About 1 in 4 victims pay the ransom. Some can’t afford not to pay, and some are covered by cyber insurance. To date, the largest known ransom payment is $70 million.
The Ultimate Guide to Ransomware Attacks
Learn about ransomware trends, how ransomware works, and how to prevent and detect a ransomware attack.Get the Guide
In this section, learn about ransomware preparedness resources and leverage our ransomware prevention checklist.
What ransomware preparedness resources are available?
Several toolkits provide guidance to help organizations prepare for and become more resilient to ransomware:
- Ransomware Response Checklist (CISA)
- Ransomware Guide, which includes ransomware prevention best practices (CISA)
- Ransomware Tips (CISA)
- Preparing for a Cyber Incident (US Secret Service)
- Ransomware Protection and Response (NIST)
Checklist: How to prevent ransomware attacks
The best ransomware protection is prevention. Invest in security and ransomware prevention to protect sensitive data and avoid paying a ransom and downtime. The following checklist of ransomware prevention best practices can help you to minimize the risk of ransomware:
Reduce the attack surface presented by internet-facing systems, applications, and clouds. This requires an asset inventory. In general, the fewer assets you have exposed to the internet the better, so if it doesn’t need to be out there, remove it, and bring it inside your virtual private network (VPN).
Enable multi-factor authentication. Inventory all management interfaces of internet-facing assets—e.g., email, remote desktops, and Citrix—and secure them with MFA.
Make your vulnerability management program a priority, including asset management, configuration management, patch management, application management, Active Directory management, and cloud management.
Segment and isolate sensitive systems, applications, data, and privileges to slow down or block threat actors. Isolate privileges between user levels. Isolate administrative management platforms to prevent ransomware attackers from using these tools.
Protect your backup systems. Does backup protect against ransomware? In some cases, but ransomware can infect NAS (network attached storage). That’s why off-site backups are critically important for recovery. In some cases, cloud storage is safe from ransomware, but it needs to be isolated, too. Be sure to segment and isolate access to your backup management interfaces.
Protect and validate recovery capabilities. Test your ability to restore from backups. In addition to making sure they are functional, consider the costs and time required to restore from backups. Have an incident response plan in place.
Tip: Replicated data will replicate ransomware. Immutable offsite backups are required to restore point-in-time systems.
Tip: Tabletop exercises are a good start but not enough. Don’t assume that technical security controls will work as expected. Test them.
Should I get a ransomware cyber insurance policy?
Many organizations have used cyber insurance to recover from ransomware attacks. Because ransomware insurance losses have increased, however, common ransomware scenarios may now be excluded. The insurance company may require you to manage your risk and follow ransomware prevention and mitigation best practices before they issue a cyber insurance policy. Read the fine print of any ransomware policy to understand your coverage.
Defenders only need to detect one malicious event to recognize a ransomware attack in progress, quarantine the attacker, and prevent damage. In this section, learn about the detective control lifecycle and how to detect ransomware.
Checklist: How to detect a ransomware attack
Successful ransomware detection requires research from which data is fed into a detective control lifecycle with the following phases:
- Measure and track key performance indicators (KPIs) for detective control baselines to identify gaps and improve performance in data source logging, detection, blocking, alerting, and response.
- Identify high-impact and common tactics, techniques, and procedures based on current ransomware trends and historical data. TTPs are found in corporate annual reports, CISA, threat intelligence feeds, user groups such as Financial Services Information Sharing and Analysis Center (FS-ISAC), offensive security trends, and MITRE ATT&CK groups and software.
- Understand trending ransomware families to identify data sources and artifacts associated with the TTPs in your environment. Maintain this list over time. Know what artifacts are left behind by each ransomware family and its known bad behavior. At a minimum, learn about the following ransomware families:
|Maze|Sodinokibi|Ryuk|Netwalker|SamSam|
|Related to ChaCha|Related to REvil and Sodin|Related to Hermes|A fileless ransomware|Related to Samas and SamsamCrypt|
- Map your current detective controls coverage of the identified ransomware behaviors. Ensure data sources are available to provide your security operations teams and partners with enough information to develop detections for common malicious behavior, such as file modification events, registry modification events, process creation events, image load events, network connection events, Windows endpoint security event logs, command line event logs, PowerShell event logs, NetFlow/PCAP (packet capture) data, and security event data from third-party software and devices.
- Develop new detections that work for your environment, based on data sources and the known bad behavior of ransomware families, while excluding the known good behavior of your users. Ensure detections cover common defensive evasion techniques.
- Test new detections to determine fidelity, block, alert, and response levels. Ensure that alert levels trigger an effective response for high-risk behavior associated with high-fidelity detections.
- Deploy new detections with a rollback plan.
- Monitor, monitor, monitor. Deploy or configure monitoring for high-risk command execution related to scheduled tasks, service manipulation, and living-off-the-land binaries (LOLBins). Monitor for the deletion of shadow copies and for SafeBoot modifications and similar tampering with restore capabilities. Monitor for high CPU utilization on individual systems and across the network. Ensure security tool tampering logs are enabled and forwarded to the SIEM.
- Repeat, because trending ransomware families and your environment will change.
- Continuously evolve and grow your detective control capabilities.
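As a hedged, minimal example of the detection step above — flagging command lines associated with shadow-copy deletion and recovery tampering. The patterns are a small, illustrative subset of real detection content, not a complete rule set:

```python
import re

# Illustrative subset of command lines commonly seen before ransomware deployment.
SUSPICIOUS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.I),
    re.compile(r"wmic\s+shadowcopy\s+delete", re.I),
    re.compile(r"bcdedit.*\bsafeboot\b", re.I),
    re.compile(r"bcdedit.*recoveryenabled\s+no", re.I),
    re.compile(r"wbadmin\s+delete\s+catalog", re.I),
]

def triage(command_line: str) -> bool:
    return any(rx.search(command_line) for rx in SUSPICIOUS)

for line in [
    "vssadmin.exe Delete Shadows /All /Quiet",
    "bcdedit /set {default} recoveryenabled No",
    "notepad.exe report.txt",
]:
    print(triage(line), line)  # True, True, False
```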
How to Build and Validate Ransomware Attack Detections
Learn tips to make your organization more resilient to ransomware attacks.Watch the Webinar
In this section, learn about ransomware attack simulation.
What is ransomware attack simulation?
Ransomware attack simulation is a collaborative, live test with a ransomware simulation tech-enabled service like NetSPI’s and a member of your security operation center (SOC) team. During a ransomware simulation, we test your team’s visibility into your security controls and ability to detect real ransomware attack TTPs.
Deliverables include a baseline report of your detective controls, a robust inventory of your security controls, custom recommendations to improve your security posture, as well as access to NetSPI’s continuous AttackSim platform to track your progress over time.
Tip: A ransomware simulation may identify an opportunity to stop paying for a software tool, such as a redundant endpoint detection and response tool, and free up budget for more valuable security efforts.
What is a breach and attack simulation platform?
A breach and attack simulation platform is a collection of pre-built plays that align directly to tactics, techniques, and procedures observed in real-world ransomware attacks. In NetSPI’s AttackSim platform, a web application enables you to continuously orchestrate the delivery and running of ransomware plays in your environments even after the service engagement ends.
Can My Organization Effectively Detect a Ransomware Attack?
Get answers with NetSPI’s Breach and Attack Simulation services.Explore the Data Sheet
Ransomware security terms
In this section are security acronyms that you may encounter as you learn about ransomware.
- AC: Access Control
- APT: Advanced Persistent Threat
- ASR: Attack Surface Reduction
- AV: Antivirus
- C2: Command and Control
- CIA: Confidentiality, Integrity, and Availability
- CIRT: Computer Incident Response Team
- CISA: Cybersecurity and Infrastructure Security Agency
- CMDB: Configuration Management Database
- CSF: Cybersecurity Framework
- CSIR: Computer Security Incident Response
- CSP: Cloud Service Provider
- CVE: Common Vulnerabilities and Exposures
- DAST: Dynamic Application Security Testing
- EDR: Endpoint Detection and Response
- FS-ISAC: Financial Services Information Sharing and Analysis Center
- GRC: Governance, Risk, and Compliance
- HIDS: Host-based Intrusion Detection System
- IAM: Identity and Access Management
- ICS: Industrial Control Systems
- IDS: Intrusion Detection System
- IOC: Indicators of Compromise
- IOT: Internet of Things
- IPS: Intrusion Prevention System
- IT: Information Technology
- ITAM: Information Technology Asset Management
- ITSM: Information Technology Service Management
- MFA: Multi-Factor Authentication
- MSP: Managed Service Provider
- MTD: Maximum Tolerable Downtime
- NAC: Network Access Control
- NAS: Network Attached Storage
- NDR: Network Detection and Response
- NVD: National Vulnerability Database
- OSINT: Open-Source Intelligence
- OT: Operational Technology
- RaaS: Ransomware as a Service
- RBAC: Role-based Access Control
- RCE: Remote Code Execution
- RPO: Recovery Point Objective
- RTF: Ransomware Task Force
- RTO: Recovery Time Objective
- SAR: Suspicious Activity Report
- SAST: Static Application Security Testing
- SCA: Software Composition Analysis
- SIEM: Security Information and Event Management
- SEM: Security Event Management
- SI: System and Information Integrity
- SOAR: Security Orchestration, Automation, and Response
- TIP: Threat Intelligence Platform
- TTP: Tactics, Techniques, and Procedures
- TVM: Threat and Vulnerability Management
- VM: Vulnerability Management
- VPN: Virtual Private Network
- WAF: Web Application Firewall
How NetSPI can help
Reduce risk with ransomware attack simulation services
NetSPI’s ransomware attack simulation service raises the ransomware security awareness in your organization, measures ransomware prevention and detection controls, and provides prescriptive guidance to improve your ransomware security posture. NetSPI’s cybersecurity experts work with your team to evaluate your security controls against the tactics, techniques, and procedures (TTPs) used by real-world ransomware families. You can continue to use our AttackSim technology after the engagement to run custom ransomware exercises and develop and test your ransomware playbooks.
Contact us to learn more and get a quote.
|
<urn:uuid:c7ba3498-4dce-4528-971f-0e47889f96ac>
|
CC-MAIN-2022-40
|
https://www.netspi.com/ransomware/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00058.warc.gz
|
en
| 0.8614 | 3,947 | 3.296875 | 3 |
A botnet refers to a group of computers that have been hacked and placed under the control of a single controller, called a bot herder — all without the computer owners' knowledge. Attackers do this by planting a bot in each system and activating it when it suits their ends.
How they work
Bot herders target machines with broadband internet connections, such as those of home users, small universities, and small enterprises, which typically have limited resources and limited knowledge of how to protect their systems. These computers often run Windows without up-to-date patches. Computers are infected via an e-mail attachment or, more recently, over Internet Relay Chat (IRC). Once infected, the bot logs onto an IRC server to receive commands from the bot herder. Though firewalls, anti-spyware, and antivirus programs can stem the flow of attacks, ever more malware is being developed to evade detection.
Once a computer is commandeered, the bot herder can use it in a variety of ways: downloading adware that pays per install, sending spam to everyone in the owner's address book, harvesting confidential information through keylogging, or mounting a denial-of-service (DoS) attack against a selected website by flooding it with traffic and page requests, shutting it down until the attack is over. Because of the flexibility of IRC networks, computers from different countries can easily be connected and controlled through a single botnet. Botnets proliferate because the profit potential is great.
There are signs that a computer has become a zombie in a botnet — for example, it may exchange data with a server the user isn't accessing. Organizations intent on finding and shutting down botnets establish networks specifically designed to lure bot herders into the open: they allow an attacker to take control of a computer in their system and then trace the activity back to its source. They also reverse engineer bots and listen in on botnet conversations to find them. If a zombie is tracked down, it is reported and the details of the infection are logged for possible criminal investigation.
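A hedged toy sketch of that monitoring idea — listing established connections to the classic IRC ports with `psutil`. Real bots often use other ports and protocols, so this is illustrative only:

```python
import psutil  # pip install psutil

IRC_PORTS = {6660, 6661, 6662, 6663, 6664, 6665, 6666, 6667, 6697}

# Flag established connections to classic IRC ports -- a crude tripwire.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == "ESTABLISHED" and conn.raddr and conn.raddr.port in IRC_PORTS:
        print(f"Suspicious: pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")
```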
|
<urn:uuid:6bb36973-9f9f-468e-a74c-07e8917c8d31>
|
CC-MAIN-2022-40
|
https://www.it-security-blog.com/it-security-basics/how-botnets-work/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00058.warc.gz
|
en
| 0.938717 | 442 | 3.265625 | 3 |
Nadia and her developer team were working tirelessly on their company's new, groundbreaking web application that would disrupt the way the world exchanged currency. A few days later, Nadia received an alert that their source code was available on GitHub. "No way," she thought. Once Nadia went to the GitHub link, she couldn't believe it—their git repository was published for the world (and their competitors) to see and exploit to their heart's content. Upon further investigation, the company's incident response team found that the server hosting the git repository wasn't configured correctly, and the entire directory was visible online to attackers.
In 2018, a security researcher was able to access the entire source code for various businesses, including India’s largest telecom service provider, due to a git repository misconfiguration which left components publicly accessible. And according to another security researcher’s findings, 400,000 websites were unearthed with exposed .git folders.
Obtaining a target application's source code from a .git directory leak enables attackers to perform easier, in-depth reconnaissance. Attackers then run tools such as static code analyzers over the code to find vulnerabilities that assist in exploit creation against the target application.
Below, you will discover how threat actors can take control of your entire source code through unintentional Git leaks. We will also share common mitigation techniques and how our security researchers can assist you in verifying that your Git repositories are not inadvertently exposed.
Threat actors start by using tools to compile lists of target web application domains and subdomains, which they then check for git repository disclosures in the next step.
Publicly disclosed .git directories can be found using both automated and manual techniques. Brute-forcing tools enumerate directory keywords to locate exposed repositories.
Attackers can also locate the .git directory manually by typing https://example.com/.git or https://example.com/git/ in their browser and analyzing the response. If the response is a 404 error, the .git directory doesn’t exist on the server. If it’s a 403 forbidden response, it does exist. If there is no error and the .git directory tree displays, hackers can rejoice in the ease of access.
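A minimal sketch of automating that manual check follows, assuming the third-party requests library. Probing .git/HEAD is a common variant of the same idea, since that file normally begins with "ref:" and so gives a strong signal even when directory listing is disabled. Only run checks like this against hosts you own or are authorized to test.

```python
# Check whether a site exposes its git metadata over HTTP.
import requests

def git_exposure_check(base_url: str) -> str:
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        return f"error: {exc}"
    if resp.status_code == 200 and resp.text.startswith("ref:"):
        return "exposed (.git/HEAD is readable)"
    if resp.status_code == 403:
        return "directory present but access forbidden"
    if resp.status_code == 404:
        return "not found"
    return f"inconclusive (HTTP {resp.status_code})"

if __name__ == "__main__":
    print(git_exposure_check("https://example.com"))
```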
After the .git directories are located, bad actors download the directory contents and can reconstruct the project source code from there. If the full directory listing isn't visible, there are methods to rebuild each folder by requesting the files expected in a /.git directory one by one.
The extracted source code can then be used to search for exploitable vulnerabilities and sensitive information, such as hardcoded credentials, tokens, encryption keys, new or deprecated endpoints for further study, developer comments, and more information gold.
Git repository leaks can present a significant threat to your company’s reputation by showing that attackers can breach the confidentiality of your proprietary source code. Below are common ways to mitigate the risk of code exposure resulting from unintended git repository disclosures.
Primary risk mitigation techniques to protect against git repository leaks include:
- Blocking public access to hidden directories such as .git in your web server configuration
- Deploying only built artifacts to production servers rather than full repository checkouts
- Regularly auditing public-facing hosts for exposed version-control metadata
- Keeping credentials and other secrets out of source code so that any leak exposes less
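As a hedged sketch of the first item, an nginx configuration can refuse to serve version-control metadata outright (Apache and other servers have equivalent directives):

```nginx
# Respond as if version-control metadata does not exist at all.
location ~ /\.(git|svn|hg) {
    return 404;
}
```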
Git repository exposures can present serious risks to the integrity of your web and mobile applications and can hurt your reputability among consumers. Without proper server and git repository management, you put your applications, source code, and sensitive data at risk.
At Inspectiv, we are here to help protect your business from source code exposure, among other known and unknown attack scenarios. Inspectiv program managers work with both you and our security researchers to identify, verify, and validate security vulnerabilities in your web and mobile applications, such as source code exposure, which can lead to targeted attacks.
Inspectiv manages the entire bug-hunting process—there's no need to hire costly in-house security testers. Inspectiv provides you with actionable guidance to mitigate vulnerabilities and prevent potential security breaches and data theft.
Contact us to discover how our crowdsourced security platform can aid in protecting your company from ever-present threats and vulnerabilities to your online applications.
|
<urn:uuid:21347515-d443-4007-b2ab-4e192b75edc6>
|
CC-MAIN-2022-40
|
https://www.inspectiv.com/articles/source-code-reconstruction-from-git-directory-exposure
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00058.warc.gz
|
en
| 0.91858 | 826 | 2.578125 | 3 |
The MITRE Corporation operates US Government federally funded research and development centers (FFRDCs), and MITRE Engenuity is a foundation dedicated to using the research and technology developed there for the public good.
One of the services that MITRE Engenuity provides is MITRE ATT&CK evaluations. These exercises simulate attacks by major cyber threat actors based on the threat intelligence collected in the MITRE ATT&CK framework.
The MITRE ATT&CK framework is a tool to increase understanding of cyber threats and the cyberattack lifecycle by breaking this lifecycle into fourteen stages called Tactics. Each of these Tactics describes a particular objective that an attacker may need to achieve during an attack. Example Tactics include Initial Access, Privilege Escalation, and Lateral Movement.
Under each Tactic, MITRE ATT&CK describes the methods by which an attacker could accomplish that goal in Techniques and Sub-Techniques. Each Technique is a distinct method of achieving the goal, and each Technique can have zero or more Sub-Techniques based on whether there are multiple ways of carrying it out. For example, the Brute Force Technique under Credential Access has four Sub-Techniques (Password Guessing, Password Cracking, Password Spraying, and Credential Stuffing).
Each MITRE ATT&CK Technique and Sub-Technique has its own page describing how the attack works, affected platforms, detection mechanisms, and mitigations. It also includes a listing of the malware, tools, and threat actors known to use the Technique or Sub-Technique, which is based on threat intelligence data and vital for MITRE Engenuity.
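Because the ATT&CK knowledge base is also published as machine-readable STIX 2.x JSON in MITRE's public cti repository on GitHub, the Tactic and Technique structure can be explored programmatically. The sketch below assumes the requests library and the conventional path of the enterprise-attack bundle in that repository; it counts techniques per tactic.

```python
# Count ATT&CK techniques per tactic from the public STIX bundle.
import requests
from collections import Counter

# Conventional location of the enterprise bundle in the mitre/cti repo;
# the exact path is an assumption and may change between releases.
BUNDLE_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

def techniques_per_tactic():
    bundle = requests.get(BUNDLE_URL, timeout=30).json()
    counts = Counter()
    for obj in bundle.get("objects", []):
        # Techniques are modeled as STIX "attack-pattern" objects; their
        # kill_chain_phases field names the tactic(s) they fall under.
        if obj.get("type") == "attack-pattern" and not obj.get("revoked"):
            for phase in obj.get("kill_chain_phases", []):
                counts[phase.get("phase_name")] += 1
    return counts

if __name__ == "__main__":
    for tactic, n in techniques_per_tactic().most_common():
        print(f"{tactic}: {n}")
```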
The MITRE Engenuity ATT&CK Evaluations are intended to provide an independent third-party assessment of cybersecurity vendors’ products and their ability to protect against cyber threats. Using the MITRE ATT&CK framework as a guide, MITRE Engenuity can perform a structured and comprehensive evaluation of whether a product can detect or prevent a particular type of attack.
MITRE Engenuity does not provide rankings, scores, or ratings of the products that they analyze. Their objective is to highlight the differences in approach that various cybersecurity vendors take to cyber threat detection and prevention, and whether those approaches effectively protect against cyber threats.
The MITRE ATT&CK framework includes a Procedures section in each Technique or Sub-Technique page that describes the tools, malware, and threat actors known to use that particular method. Each of these entities also has its own page that provides a description of it and a complete listing of the Techniques and Sub-Techniques that they have been observed to use in the wild.
MITRE Engenuity’s annual evaluations are structured around these collections of known Techniques employed by threat actors. Each year, MITRE Engenuity selects two advanced persistent threat (APT) groups and emulates their tactics and techniques based on the MITRE ATT&CK framework. This provides a realistic evaluation of the solution’s ability to detect and protect against the attacks by the simulated APTs.
Unbiased, realistic assessments of the effectiveness of cybersecurity solutions are difficult to perform. Cyberattacks are complex and the realism of a simulation can be undermined by even small mistakes.
The MITRE Engenuity Evaluations are invaluable because they provide a third-party simulation of security solutions using extremely realistic attacks. The MITRE Engenuity simulations are built using the information contained within the MITRE ATT&CK framework, which describes the attack chains commonly used by different threat actors.
Each MITRE ATT&CK Engenuity simulation only covers the tactics and techniques used by a few threat actors. However, there is often overlap between groups (such as the use of phishing for initial access), and each annual evaluation focuses on different threat groups. This combination means that a high score in the ATT&CK evaluations demonstrates strong protection against real-world threats, and consistent high scores across multiple evaluations show extremely high-performance and comprehensive cyber threat protection.
The 2021 MITRE Engenuity ATT&CK Evaluations focused on the Carbanak and FIN7 APTs. Both of these groups use the same Carbanak malware in their attacks but appear to be distinct groups with different targets and techniques. The evaluation included tests for 65 MITRE ATT&CK Techniques across 11 Tactics, including 12 techniques across 7 tactics that were in scope for the Linux portion of the Round 3 Carbanak evaluation.
Check Point Harmony Endpoint achieved a leading result in this evaluation, detecting 100% of the unique techniques simulated during the exercise. For 96% of these unique techniques, Harmony Endpoint also achieved the highest detection level of the twenty-nine solutions evaluated by MITRE Engenuity.
MITRE ATT&CK Engenuity’s evaluations provide an independent third-party attestation of the effectiveness of Check Point Harmony Endpoint at protecting against attacks by Carbanak, FIN7, and other APTs. To learn more about the MITRE ATT&CK Evaluations, check out this guide. You’re also welcome to learn more about the capabilities of Harmony Endpoint by signing up for a free demo.
|
<urn:uuid:1de4c137-a031-4496-a63b-a0986491aead>
|
CC-MAIN-2022-40
|
https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-mitre-attck-framework/mitre-engenuity-attck-evaluations/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00259.warc.gz
|
en
| 0.927442 | 1,075 | 2.609375 | 3 |
High Risk Data Protection Strategy
For years, security experts focused primarily on protecting their organization’s networks from malicious use. Sites like privacyrights.org have documented successful attacks against all sectors of commerce, government and education. Only recently have governments started to change the goals from securing networks and devices to protecting sensitive data. Data breach notification laws, cybersecurity insurance and government data protection requirements provide motivation to change existing security strategies. The emergence of cloud computing in its various forms forces companies to figure out ways to protect their high risk data. Before we start with data protection, we should note that cloud computing has forced us to assume the network is hostile. We cannot protect the “network” because we don’t know where its “borders” are located. Here’s a straightforward strategy that can serve as an example.
Finding the right workable encryption for your high-risk data depends on how the data travels within and outside your organization
• Create data management framework
Who are the owners of the data in your organization? Who has the final say if a business process wants to access and process your financial data? Typical data owners include the Chief Financial Officer, the Controller/Comptroller, a governance group consisting of members of business processes who handle financial data. Data stewards are usually the people who make the day to day decisions. Your organization should have this framework defined in policies and standards.
• Create data classification framework
Simple is better. We adopted Stanford University’s data classification definitions and reduced our data categories to 3 – High, Moderate and Low. Here’s our high risk definition:
Data and systems are classified as high risk if:
1. Protection of the data is required by law/regulation, and
2. Virginia Tech is required to self-report to the government and/or provide notice to the individual if the data is inappropriately accessed; or
3. The loss of confidentiality, integrity, or availability of the data or system could have a significant adverse impact on our mission, safety, finances, or reputation.
A clear and concise data classification framework provides the foundation for the next steps in your data protection strategy.
• Create Sensitive Data Search framework
Simply put, you have to find high risk data before you can protect it. Some examples of high risk data include spreadsheets that contain employee travel information, medical records, scanned purchase orders and strategic business dealings. How do you find this data? There are commercial tools that will search your systems for social security, passport, driver’s license, bank, debit and credit card account numbers. You should run these tools on all of your company owned assets. Start with the business processes that handle high risk data on a daily basis.
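As a toy illustration of what such search tools do under the hood, the sketch below scans text files for strings shaped like US Social Security numbers or 16-digit card numbers. Commercial products add validation (such as Luhn checks on card numbers), many more patterns, and support for binary and document formats; the file glob and regexes here are illustrative only.

```python
# Walk a directory tree and flag text that matches simple
# sensitive-data patterns.
import re
from pathlib import Path

PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_tree(root: str):
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                yield path, label, match.group()

if __name__ == "__main__":
    for path, label, value in scan_tree("."):
        print(f"{path}: {label}: {value}")
```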
• Create Sensitive Data Protection framework
Now that you’ve found where your high risks data, how do you protect it? The generic strategy is use an encryption system based on peer-reviewed mathematical algorithms. If a vendor or developer says they’re using a proprietary algorithm, run away. Selecting a workable encryption system is difficult. It’s relatively straightforward if your data is only passed around internally. The challenge is when your data travels outside of your organization. In this case, both sides have to use a common encryption solution. Do you use certificate based authentication or multi-factor authentication? As you can imagine, this can be quite challenging.
• Create Sensitive Data Breach framework
What happens once a high risk data breach is confirmed? Do you have processes for notifying affected people, paying for credit monitoring, preparing press statements for the media, setting aside funds to pay fines and judgments, and making cyber security insurance claims? Are there any processes not mentioned here that need to be? Do you have a governance committee?
Security awareness programs should be proactive as part of a "prevention" program. The last step of a generic incident response process is "follow up". Follow-up security awareness is technical: the technical staff learns what went wrong, how it was fixed and, more importantly, what steps to take to hopefully prevent another breach.
Hopefully, these general steps will help you develop your own strategy or help validate an existing strategy.
|
<urn:uuid:cb48b77f-7c88-44d3-9952-48587391d20e>
|
CC-MAIN-2022-40
|
https://identity-governance-and-administration.cioreview.com/cxoinsight/high-risk-data-protection-strategy-nid-30888-cid-180.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00259.warc.gz
|
en
| 0.923547 | 863 | 2.546875 | 3 |
By Karen Reed, Positive Health Wellness
We hear all the time about how technology is bad for us. Since the introduction of computers, we spend more time sitting at a desk than moving around at work. We have created this sedentary lifestyle that is causing havoc in our overall life.
What if I were to tell you that technology has produced benefits? Would you believe me if I said that technology is good for your health?
Most of you wouldn’t look at first. Well, you may be able to think of a couple of ways that the computer has helped, but you are still stuck on all the negatives that ‘experts’ have shared in the past.
The problem with the ‘experts’ is that they are only focused on the negatives. They haven’t looked at so many of the benefits.
So, that’s what we’ll do today. We’ll consider all the ways that technology improves our health. We’ll discuss just how it has boosted results in certain areas of healthcare and what it does for us daily.
Technology Is Everywhere in Medicine
Before we do move onto all the benefits, it’s worth discussing just how technology is used. It is found everywhere in medicine. Think about the x-ray machines, MRI scanners, and even the research equipment used daily.
There are people using it every day of the week to find cures to ailments, discover why diseases spread and creating ways to prevent the diseases.
There are individuals performing tasks far more accurately than they ever did before, with keyhole surgery now a popular option for some of the most routine medical needs.
And the technology isn’t just in the hospital. It’s used in your own doctor’s office and even at home. It’s used to prolong life and create a better quality of life for those on around the clock care.
The improvements don’t just lead to better physical health. They support better mental health, which in turn improves the physical health. Technology improves connections and relationships, offering support to everyone.
We can’t get rid of technology. If we did, we would suffer greatly. Here are just eight ways that technology is improving our health and our lives.
It Pushes Us to Do More Activity
Sure, technology has led to us sitting more. And sitting is the new smoking when it comes to health problems. However, technology has also helped to push us to do more activity.
We just must take the examples of the Fitbit, pedometers, and apps that track our steps. They all encourage us to meet our daily targets—setting personal targets to get us to walk more and meet the goals that we know are realistic to us.
While there is the goal to walk at least 10,000 steps a day, that just doesn’t seem realistic for many. The pedometers and smartphone apps give us more control.
The chances are that as we get closer to a goal, we’re going to work harder to achieve it. We see how we do daily and look for ways to improve the chances of meeting those goals. They don’t mean getting to the gym daily.
They just involve getting out and doing more. Some can involve doing home workouts and even walk on the spot to increase our step count.
There isn’t much that we need to do to set up these pieces of technology. Most of them involve some type of phone app or computer software just to sign up and create free accounts. We sync devices, and we get to go off and work our ways to being healthier and fitter.
The devices also come with different settings. Some are just designed to count your steps. They’re basic items to get you to do a little more throughout the day. Those who want to increase the amount of exercise they do and track their heart rate will be able to get more advanced options.
Some will have exercise modes, count stairs, count calories burned, and even monitor your sleep.
The aim for so many of these new devices isn’t just to improve your activity levels. They are there to improve your overall lifestyle. Devices are set to help you live a healthier and more fulfilling life, helping you monitor your sleep patterns and make sure you drink enough water throughout the day.
There’s more to them than just improving one element of your life and making sure your whole body and mind are working together to create a better quality of life.
These apps and devices can also monitor your weight loss efforts. They help you stick within a healthy BMI, so you focus on protecting your heart health.
You will feel better for it, knowing that you can keep yourself from accidentally going over your calorie target for the day or creeping over a certain weight. Of course, staying within a healthy weight range is essential to keeping yourself healthy overall.
These are all personal devices. There’s no major cost for them, with many of them available for less than $200. Some of the apps are completely free to download, so you don’t even need to spend a penny on technology to improve your health.
Better Ability for Communication Between Doctors and Patients
With technology being widely available, there are chances that everyone has some sort of access to doctor and health websites.
These sites can create chat boxes and instant messengers, where real doctors and nurses can monitor communications. When a patient comes on with a question, the doctors and nurses can provide factual answers and share their thoughts and advice.
Better ability to communicate is essential for keeping the health protected. It helps to keep the questions over information online to a minimum and reduces the number of people queuing up in the hospital with fears they are dying.
The people online can read the symptoms and share their beliefs based on them, helping to minimize worry.
Individuals who do need to seek medical help will be able to get to the hospital or their own doctor right away. They can take the transcript of the chats to aid with a discussion of the symptoms and working through the reasons for certain medical beliefs. They also have a better understanding of how doctors or nurses come to certain decisions.
Those who don’t need to seek immediate medical attention can reduce their anxiety over their health. This helps to improve the health since anxiety leads to stress and that leads to high blood pressure and other health problems!
People who avoid doctors fearing that they are wasting time can get confirmation that they need to get the help. That’s that fear of people thinking they are silly for their thought processes eliminated, so they have more confidence in discussing all their health problems with their doctor.
When chat boxes aren’t available, telephones have made it easier to communicate and talk to a genuine doctor or nurse. This is the case with many emergency medical phone numbers, who can then arrange out of hours’ appointment when the case is necessary.
Getting people seen immediately protects their health. It also helps to reduce a number of times they will need to visit a doctor and keep the waiting times down since the minor ailments are taken care of before they can turn into something major.
More Ability to Do Research into Problems
The internet has certainly opened the ability to research. We all tend to turn to Google, calling it Dr. Google at times.
The search engine allows you to input your symptoms or ask questions about a certain symptom to find out all the ailments that involve them/it. People can look through a list of other symptoms to determine the chances of suffering from certain ailments.
This is useful when it comes to determining whether to speak to a doctor. An individual can get the basic information and use it to decide whether their condition needs immediate attention.
They can also use that basic research to get onto the chat boxes to get the advice from real doctors and nurses, as mentioned above.
Those that already have a diagnosis can take to the internet to do their own research into it. This is especially the case for a condition that they haven’t heard of before or that could be hereditary. They want to find out future symptoms, especially if it is a condition that doesn’t have a form of treatment or cure.
Individuals can find out if there are natural remedies that they can try and talk to others with the same condition. They can follow blogs for people who have that same condition and are living with it.
They get to hear about success stories with treatments and learn about support groups in the area. This is especially important for conditions that are either terminal or that lead to a lower quality of living.
Those caring for people with certain conditions can also get some support and help. There will be support groups online for carers and advice for people who care for individuals 24/7. Suddenly, the world doesn’t seem as isolating, which can quickly help to improve the mental health.
People have more confidence in their abilities and find someone who can listen to vents or problems without judgment and with full understanding—friends are good, but they’re not always able to be the most supportive.
It is important to use the internet sparingly. Unfortunately, it can also have the opposite effect and make the health worse. You spend all this time researching conditions and fearing the worst, and you end up with problems with anxiety.
You can end up researching more than talking to a real doctor, hearing about the horror stories of other patients. It’s important to take a step away and look out for success stories and real doctors’ opinions to help balance out some of the negatives.
When you are on websites, you will also need to check where the information is coming from. Who writes it and is it checked by someone in the medical profession?
Does the person writing a personal blog really suffer from the same condition? People can write absolutely anything, and there is plenty of misinformation online.
There are reputable medical websites. They usually include links to official studies and reports to help you get all the medical information that you could need.
They will consider both pharmaceutical and herbal remedies to help you save money and put your health first. Check the reviews and reputation of any website before you start looking through the information and start trusting it!
There Are Devices That Keep the Body Working as It Should
Some devices are created purposely to help promote a healthy body. They are placed inside or outside to help keep the body working as it should.
There are also other types of treatments that cause reactions in the body to support organs and the overall health.
The pacemaker is just one that will come to mind for everyone. This is a device created for those who have heart problems.
The pacemaker helps to send electrical currents into the heart to prevent it from suffering from spasms. This little device is a lifesaver for so many people.
It keeps the heart pumping as it should, which will support the rest of the body.
This is one of those small devices that you will barely know that you have. It can be used on the young and the old to protect the heart and make sure it works exactly like it is supposed to.
In one episode of Grey’s Anatomy, a 16-year-old girl was fitted with a pacemaker to stop seizures, which turned out to be a side effect of a heart defect rather than epilepsy.
The small electrical device is battery-less and powered by the heart’s rhythms. Those without it would live shorter lives and must restrict the things they do, as there will always be the risk of the heart’s natural rhythm and beat getting out of sync.
Pacemakers aren’t the only devices that help to keep the body working as it should. Bypass machines also help to sustain organ health while waiting for treatments or transplants.
They are also used throughout surgeries to protect the health while undergoing some transplants and operations. For example, heart bypass machines are regularly used during some cardiac operations and for heart transplants. Without them, there is a higher risk of bleeding out and death on the operation table.
Bypass helps to change the flow of the blood. It isn’t just used for the heart and can be used for the kidneys for operations that involve the intestines, colon, and other organs around this area.
Bypass helps to keep the other organs working as they should while going through the operations to ensure a fully healthy life afterward.
There is now technology that keeps organs working while they are outside of the body. This helps to keep organs working while they are in the middle of transplants, which is exceptionally important when it comes to heart transplants.
This is another side of medicine that was touched on in Grey’s Anatomy. Cristina looked after a heart that was in a box—the technology kept the heart pumping until the time came to place it into the recipient’s body.
It is a very real side of medicine that is being adapted and improved. Without it, there would be people on the transplant list that would need to wait longer for a replacement. They could end up dying or others lower down on the list would lose out on transplants because they can’t be moved up.
The use of technology to keep organs alive outside of the body will also help to reduce the problem of long donor lists. While the donor organ may not be a match for anyone immediately or anyone within a hospital immediately, the organ can be kept alive while waiting for a recipient to become eligible.
But can't organs be used without being 'kept alive'? There is the use of ice, and organs are sent around countries without being kept alive by a machine.
However, there is a risk that the organs won’t work when they get into the recipient’s body. They have lost the blood flow during transition causing other problems. The technology eliminates that issue.
Without the advances in technology, there would certainly be people who are left without. The transplant list would grow longer, and people would remain on the lists until they die.
Better Treatment Options for Various Ailments and Diseases
It’s no secret that treatments have advanced in recent years to the point where some ailments are virtually unheard of. Vaccinations and various medical advances have completely eradicated the likes of smallpox and led to the point where polio is now less common and far more treatable.
Some of the advances have only come in the last few years, and are all due to technology. We’re able to do more research and test without the use of animals and humans. There are ways to create vaccinations and treatments without putting people at risk, increasing the chance of a better quality of life.
Just look at how HIV treatments have changed since the disease was noted in the early 1980s. It is now at a point where the virus doesn’t have the chance to develop into AIDS.
There are treatments for small and major ailments. Even cancer patients have better life expectancies than they would have done in earlier years.
There is the technology for earlier diagnosis and treatments to eradicate the cancerous cells. While not all is successful, there are certainly some positive steps—and that is all because of technology advancements.
Some of the treatments are to help keep the body working until a cure or transplant is possible. For example, dialysis is used by many patients waiting for organ transplants. Dialysis helps to remove the waste from the body when the kidneys will no longer do the work for them.
This is an intermediate treatment option to keep someone alive while they wait for a kidney transplant.
Others will be on other machines and treatments while they wait for a liver, heart, or other transplant. Technology has helped to prolong life, allowing them the time that they need. Some technology has even helped them live some sort of life outside of hospitals, rather than being hooked to machines.
There isn’t just a physical benefit to these treatments. The benefits have helped to support the mental health. Being stuck in a hospital bed forever is boring and depressing. Patients start to worry about the bills that are mounting up and the loss of time with their friends and family members.
When they are in a positive mindset, the patients are more likely to fight against the ailments that are keeping them tied down to machines. They are in a better state to accept transplants and focus on fighting infections and diseases.
Their positive mindsets help the treatments work, and this is all because of the technology advancements.
And we can’t forget about the ongoing research. This isn’t just about the treatment options but how the viruses work and adapt. While there are vaccinations and treatments available, there is always something new that comes out.
Viruses adapt to their environment to avoid being wiped out completely in some cases. They mix with other viruses or bacteria to create a far more superior virus.
Technology helps to assess when this happens. Scientists can locate the newly created viruses and get to work almost immediately on a cure.
There is the ability to transform some viruses into cures and help to create vaccines and treatments that have never been heard of before. It’s because of technology that the medical field can keep adapting.
Better technology has also made scans clearly. People can get better angles and catch problems early. Doctors can perform surgeries and use treatments that were never possible, simply because they could catch conditions before they advanced too far.
To top all this off, technology has opened the chance of developing organs and valves. While Grey’s Anatomy is just a TV show, it does rely on the current medical research and ideas.
There are studies into 3D printing organs and heart valves to help support the health and life of an individual. The printing would use a person’s own cells to reduce the risk of rejecting organs, improving life expectancy and treatment of conditions.
There is still a long way to go until all the research is finished. In fact, it will never be finished. However, technology is opening doors to improve health in ways that wouldn't have been imagined just 50 years ago.
Improved Prediction of Diagnosis and Life Expectancy
Ever wondered if you could get a disease later in life? Maybe you wonder if a current symptom is a sign that you could develop a condition. You could even wonder just how long you have left to live when you are diagnosed with a condition.
Technology has helped to improve the prediction process of a diagnosis. Doctors will have information all in one place and can see all the symptoms at the same time. They have formulas to work out averages of when a condition occurs.
You get this type of risk assessment, and doctors will be able to predict if you are more likely to suffer from a certain type of disease or ailment.
We just must look at the pre-diabetes checks. You may have been told that you are a pre-diabetic. This doesn’t mean that you currently have it but that you have a high risk of developing it if you continue in the way that you are going.
Before technology advances, you would have only found out about diabetes once you started suffering from it. There wouldn’t have been the warning signs to help you change your lifestyle to prevent it from occurring.
In some cases, you wouldn’t have even known that you have the side effects. You wouldn’t have known that you had a disease until is cause a serious medical issue and even death. Doctors didn’t have the ability to predict anything because it was so difficult to get all the information.
Technology has made it possible for information to be kept in one place, updated in real life. Once blood test results come back, they can be added directly to your file; a file that is visible by any doctor by looking up your own details.
Your family doctor has your hospital records, even if the records have nothing to do with a current ailment.
While looking at all this information, doctors can see similarities and warning signs earlier. They can see symptoms that crossover and lead to specific conditions—similarities that could have been overlooked due to loss of paperwork or not having all the information in one place.
At the same time as predicting a condition, doctors can use technology to work out how long you have to live. There are plenty of cases in history where individuals have lost out on events because they have been given a life expectancy that isn't right.
Doctors give people six months to live and then find out three years later that they are still alive and could have done some of the things they wanted. At the same time, people are given years to live, and then their health deteriorates within six months because the doctors got it wrong.
Technology has allowed for the creation of algorithms. Doctors can input certain figures and information into the algorithm to get the information that they need. There is more information stored about other patients with the same condition to help ensure that the algorithms got it right.
There is just far more accuracy to help with the life expectancy prediction because of technology.
With better prediction, people aren’t just living healthier and changing their overall lifestyle. Their mental health is supported. Patients find that they can act and are more interested in doing so.
Gene mapping has also become a technological advancement to help with the prediction of conditions. Patients no longer need to have early symptoms to make changes to their lives. Doctors can look at the genes to determine if they are at a risk of developing certain health conditions.
This has become popular for some of the most damaging conditions for the whole family. People want to know if they have the genes that put them at a higher risk of breast cancer or Alzheimer’s disease.
Angelina Jolie is just one celebrity that stands out when it comes to this technological advancement. She found out that she had a high risk of developing breast cancer and decided to take preventative measures to avoid it by having a mastectomy.
Many patients before her have had to wait until cancer has occurred and hoped there is a treatment, but she could prevent the heartbreak for her family and protect her health because of technology.
Cervical screening for women has improved thanks to technology. Researchers will see when cells are abnormal between tests to make sure that there are no earlier signs of cancer.
While the cells could be abnormal for other reasons, patients get the help they need immediately to avoid lifelong and potentially terminal diseases.
Faster and More Accurate Diagnosis of Conditions
While the prediction side of diagnosis is improved, technology also improves the accuracy of a diagnosis. Like before, doctors gather all the information in one place and will be able to keep an eye on results more closely.
They can also put together symptoms and signs sooner than before, meaning an earlier diagnosis for many people.
There have been many cases where doctors just haven’t had all the information. In some cases, the conditions are so rare that doctors haven’t even bothered considering them.
Instead, individuals are treated for conditions that are more common or more believed to have. The treatments do nothing, and by the time they are diagnosed with the right condition, there is nothing they can do.
People lose out on time with their family due to a lack of diagnosis or incorrect treatment. They lose out because the diagnosis has just taken too long—and not because of inaccuracy for the doctors.
In some cases, the technology hasn’t been fast enough to get blood work back. Technology has been too poor to assess all the symptoms, or the waiting list is too long, so patients lose out. Scans aren’t clear enough, so earlier symptoms aren’t picked up in time.
This slowness of diagnosis means that people don’t get the treatments soon enough. Their conditions advance and may become untreatable and terminal. This is the case with some cancers, as it takes so long to get the diagnosis that the treatments spread.
The accurate diagnosis means more accurate treatments. There are cases where treatments can make a condition worse if it is used in the wrong way or has been given the wrong diagnosis.
For example, some over the counter medications can make the chicken pox virus far more severe and cause hospital admission.
Technology Improves Recording of Information in Real Time
Many of the benefits mentioned above rely on an accurate and timely recording of information. There is no denying that recording of symptoms between doctors has led to issues of conditions not being diagnosed and the right treatment not being administered.
Before computers, doctors would write all the information on charts. They would document it through paperwork, and that paperwork would need to be sent to various doctors.
If you changed family doctor, there was a chance of the information going missing. If you went to see a different doctor in between visits, such as at the hospital or a locum, you ran the risk of the information not being sent to your regular doctor.
Some key symptoms are often missed. It’s up to the patient to remember to share the previous symptoms with their regular doctor to make sure all the information is up to date.
Blood work information and other paperwork would take time to be sent between practices. Individuals were left waiting for phone calls for diagnoses and treatments, and we've already looked at how that could lead to problems.
Technology has made it better for the recording of information. This initially started with the use of larger computers. While paperwork would still be used, the information could be typed up, and doctors around the country could get the information and any symptoms. They could make a more accurate diagnosis.
However, this didn’t help when it came to immediate diagnosis needs. There were also issues with inaccurate reporting. Some doctors wouldn’t do their paperwork or would miss out things that patients said.
Others who were trying to transcribe notes may not have been able to read handwriting, so elements were missing.
This is where technology continues to advance. Many doctors will now update the information on tablets and smart devices. They all have the software that allows for easier and far more accurate reporting. The information is updated in real time, meaning other doctors will be able to get the information later.
If you return, doctors will be able to see how often you are in. They will be able to look at previous visits quickly. This means they can spot any precursors or underlying problems, as we’ve already discussed above.
There is the ability for the software to alert doctors to a problem. Doctors may have set the wrong dosage for a medication, or there may be an issue with clashing medications. Doctors can stop themselves in their tracks and make sure your health is put first.
The better recording also helps with communication between doctors and patients. Patients feel like their health is being put first, so they are likely to be more forthcoming. They don’t feel ignored or like certain symptoms are being dismissed.
There Are Two Sides to Technology
Technology has helped to improve the health. It will continue to do this as there are more advancements made.
There is no denying that technology can be bad. We are at a point where we sit more because we don’t have the need to go outside anymore. Socializing is possible online, and recreation is often spent watching TV shows and movies.
‘Experts’ tend to focus on all these negatives of technology, without really focusing on the ways that technology is helping us.
While a lot of the advancements have meant that doctors have it easier, they have also helped us as patients. We will find it easier to get a more accurate diagnosis, and the treatments are more likely to work.
It’s easier to make changes to our lifestyle because technology has noted the warning signs that we are more likely to suffer from something if we stick to our current paths.
There are ways that technology helps us daily. Smartphone apps and small devices have led to the ability for us to track our health and any symptoms. Fitbit and pedometers track the steps that we take, encouraging us to do far more exercise than we usually would.
Food tracking apps help us to keep our calorie intake under control or boost the amount of water that we drink. Symptom trackers make it possible to keep an eye on potential health problems.
We can also get in touch with doctors and nurses much easier. We no longer must pay a fortune to see our family doctor and clog up the waiting room, feeling like we are wasting someone’s time.
Technology opens the doors to discuss symptoms online and get advice immediately. This could make all the difference in getting the treatment we need.
And it’s not just about the physical health. Technology opens the doors to getting the mental and social support when it comes to living with a condition or caring for someone. There are support forums online and places to go to do our own research.
We feel far more control in life and with our condition, and we can focus on holistic approaches. A better mental health will help to improve our physical health since we have a better chance of fighting infections.
Don’t just write technology off. This is something that really can improve our lives and our health. It just must be used in the right way.
|
<urn:uuid:a4d4dbc0-84c7-41a8-a177-12f866916976>
|
CC-MAIN-2022-40
|
https://americansecuritytoday.com/8-ways-technology-improving-health/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00259.warc.gz
|
en
| 0.967391 | 6,028 | 2.78125 | 3 |
The journey that software developers have been experiencing with their hardware is long and winding. Only a few years ago it was difficult to get the right kind of support for their development and deployment needs. Organizations needed to buy the right servers and networking equipment and deploy it in order for the applications to run correctly. With the advent of new platforms like those found in the cloud, the need for hardware procurement became much less important.
Thanks to public cloud providers, developers can purchase server resources, develop applications, and run them with almost no effort required to build out hardware infrastructure of their own. This ease has led to the focus of a software application changing from the capabilities of the infrastructure to the limits of the developer imagination. With the freedom of platform independence, the view becomes based more on the application itself.
This shift in viewpoint has led to applications becoming more modular and easier to deploy. Instead of thinking about software in terms of how many servers it takes or how much storage is needed, developers have started using microservices models to build scalable programs that can be used by any number of users or organizations of any size. If you need more capability, you just need to bring more modules online. These microservices containers make it simple to build code that can run anywhere.
No matter the location of the program code, security is still a major concern. With the amount of data being generated and analyzed in the modern application it becomes paramount that you must ensure that information is protected at all costs. Containerized workloads make it easier to deploy services but harder to track what is in use and how long it has been there. Proponents of these new software architectures will say that the ephemeral nature of microservices makes it harder for them to be hacked. However, that same nature also means it is harder to detect when those same services have been violated.
It's not just the data that attackers are looking for. The intellectual property of your organization is as valuable as the information it helps to generate. The race to create new software, applications, and even algorithms that make use of data collection and help you understand your user base is heated. If you have an edge on that market, what is to stop unscrupulous people from trying to obtain that IP through less-than-legal means? If attackers were able to nab that information from the cloud by capturing a container, would you even know what happened?
One of the biggest targets that attackers are looking to exploit is memory. If the application is doing any kind of work, it must do so in the system memory or store data in a CPU cache. If attackers are able to write tools that allow them to violate these shared areas, they can get access to anything being analyzed by the software. This means that these spaces need to be secured against other programs reading from areas not allocated to them. The nature of RAM and cache makes this difficult under the best of circumstances.
This exploitable hardware issue requires a hardware solution. How can you protect memory from other programs? How can you ensure a zero-trust architecture for a CPU cache? With containers relying on the infrastructure without any visibility into what’s going on, how can you solve this problem without modifying them and making the entire system less flexible or less scalable?
Intel SGX Secure Enclaves
It’s no surprise that Intel is one of the best when it comes to making hardware that meets the needs of developers looking to do advanced operations with software. The list of advancements that Intel has developed over the years to increase performance and provide utility to those that write software is long and impressive. Intel also realizes that securing workloads is the responsibility of every part of the IT infrastructure stack. That realization has led to Intel Software Guard Extensions (SGX) which solves for the problem of shared memory exploitation, thereby securing data while it is resident in memory. SGX allows for the creation of secure enclaves that are encrypted and unreadable to any programs that do not have the key to decrypt the enclave. This means that the execution of secure workloads truly has isolation from any other processes running in memory.
The first thing that might come to mind when you think about secure memory isolation of these workloads is protection against hackers. This is another angle I find exciting. For a number of organizations there is a significant challenge in sharing data, especially data related to customers. The easiest example of this is something like a patient record in a healthcare environment. This data is very important to the patient and very critical to the operation of the healthcare organization. However, most of these organizations don’t focus on the kind of analysis that could provide insights into the health and well-being of a patient. They rely on third parties to do that work. How can the organization ensure the security of the data as it is being analyzed?
With Intel SGX, you can not only secure data against exposure but you can verify access for third parties using the attestation capabilities of SGX. If you want to secure the amount of data that the outside party can access and set a limit on how long they are able to analyze it, you can do that. Set up an SGX enclave for them and when the window of time has expired all you need to do is revoke the key. The data is then secured and you can verify that no one else has access to it. Since you are the one that generates the key, you can also verify that the data was accessed at a specific time. This helps confirm the outside organization did the work they were supposed to do and provides positive confirmation for any audits done at a later date.
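To make that grant-and-revoke flow concrete, here is a toy model of the lifecycle in Python. To be clear, this is not the SGX SDK or a real enclave: the per-party key here is just a credential checked in software, whereas SGX enforces isolation in hardware. It only sketches the sequence the text describes (issue a key to a party, let them read while the key is valid, then revoke it), using the third-party "cryptography" package.

```python
# Toy model of time-limited, revocable access to a sealed dataset.
from cryptography.fernet import Fernet

class SealedDataset:
    def __init__(self, data: bytes):
        self._keys = {}                               # party name -> credential
        self._master = Fernet(Fernet.generate_key())  # seals the data at rest
        self._blob = self._master.encrypt(data)

    def grant(self, party: str) -> bytes:
        key = Fernet.generate_key()
        self._keys[party] = key
        return key

    def read(self, party: str, key: bytes) -> bytes:
        if self._keys.get(party) != key:
            raise PermissionError(f"{party}'s access has been revoked")
        return self._master.decrypt(self._blob)

    def revoke(self, party: str) -> None:
        self._keys.pop(party, None)

ds = SealedDataset(b"patient cohort records")
key = ds.grant("research-lab")
print(ds.read("research-lab", key))   # allowed while the key is valid
ds.revoke("research-lab")
# ds.read("research-lab", key) would now raise PermissionError
```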
This kind of compartmentalized access to data works in concert with the container development method. Because the services for the application are scalable, you can provide a number of SGX enclaves of varying security for access. You can protect some data with one enclave and require a separate enclave for more sensitive information. You can set keys for both and even create keys for specific users to prevent data leakage between them. Intel has already been doing this kind of work with medical researchers, enabling advanced AI algorithms to run on protected data in ways that will help advance medical science.
Bringing It All Together
Intel SGX is a hardware security solution that has significant potential to secure microservices and containerized workloads. The development of technology to create zero trust architecture for the most shared spaces on a system means that we can begin to think about data security in new ways. We can build isolation and access rights and verify that data is truly safe at every point of the journey from creation through use and eventual retirement or long-term storage. We can also provide access levels for the data to help further isolate those workloads to provide true “need to know” access and revoke it when necessary. With the work that Intel has provided and the effort they have put in with the Confidential Computing Consortium, there is a bright future for Intel SGX on the horizon for security professionals.
|
<urn:uuid:20301fb4-a2fe-4db5-8834-a9cc40fcb59b>
|
CC-MAIN-2022-40
|
https://gestaltit.com/tech-talks/tom/securing-modern-workloads-requires-secure-hardware-support/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00259.warc.gz
|
en
| 0.958554 | 1,408 | 2.53125 | 3 |
The primary goal of a smart city is to stimulate economic growth, improve city operations, and enhance the quality of life for citizens. A well-designed smart city is data-driven to streamline processes and create a safe, sustainable environment that meets the needs of the people who live and work within it. It addresses issues such as accessibility, healthcare, transportation, as well as minimizing waste and inconvenience.
Effective city planners and leaders understand that becoming a smart city is essential for attracting residents and businesses, and for fostering overall economic and social prosperity. And although every city has the capacity to drive transformation, the process varies from city to city based on factors such as size, population, and location. Although the path to a smart city is often ambiguous for many city leaders, there are some things that apply to cities nationwide.
Here are five things city leaders should keep in mind as they map out the road to a smarter city.
- First Things First
A long-term strategy requires a clear vision and understanding of the resources that will be required to ensure an effective, sustainable program. Before taking any action, municipal leaders should first assess the concerns and requirements of citizens and businesses. Leaders must ensure their initiatives align with the priorities of these stakeholders to solicit their support and help the program reach its full potential.
- Establish the Infrastructure
When it comes to smart cities, ‘infrastructure’ doesn’t just mean physical architecture anymore. Today’s foundational infrastructure includes all the layers necessary to implement smart initiatives. Cities need established infrastructure to support effective smart city transformation. This includes broadband systems, fiber optic cabling, fiber-less technologies, premium wireless infrastructure, and scalable systems. This infrastructure is critical for allowing the flood of connected devices to send and receive all the information they’re collecting in real-time, without interference.
- Invest in Technology
Digital innovation is progressing faster than ever, and cities that fail to adopt these technologies will lose out to more advanced metro areas when it comes to attracting businesses and residents. To accelerate digital transformation, city leaders should look to both internal teams and external technology partners and suppliers. Communications solutions and electrical infrastructure service providers such as Hylan take the guesswork out of establishing and optimizing smart city initiatives.
- Capitalize on 5G
Adopting next generation 5G wireless networks is a key factor in meeting the requirements of increasingly mobile residents and workforces. To relay massive amounts of data among connected devices and systems in near-real time will require 5G’s gigabit-per-second throughputs, extremely low latency, increase in base station capacity, and significantly improved quality of service (QoS), as compared to current 4G LTE networks. Relaying massive amounts of data among connected devices and systems will require 5G’s gigabit-per-second throughputs, which provides low latency, increased base station capacity, and significantly improved quality of service as compared to current 4G LTE networks.
- Leverage Multiple Data Points
Data is the driving force behind smart city technology, and cities need to ensure they are compiling, analyzing, and incorporating a wide range of data. Among the most common smart city applications are smart meters for utilities, intelligent street lighting and traffic signals, and Radio Frequency Identification (RFID) sensors that are embedded in pavement for monitoring road damage and traffic flow. Some cities are incorporating data into more advanced social initiatives such as public health. New York City, for example, analyzes how pollutants impact air quality in different neighborhoods using air quality monitors, mounted 10 to 12 feet off the ground on public light and utility poles.
Most city planners and leaders recognize that a combination of factors goes into making a smart city, and that it all begins with a well-developed infrastructure. Leaders with a deeper grasp of the necessary steps, the resources they'll require, and how to tie it all together are at the front of the pack on the road to building a smarter city.
Discover how Hylan helps city planners and municipal leaders make smart cities a reality by partnering with private companies and government agencies. Our teams build out the foundational infrastructure for smart cities to connect utilizing our decades of experience in both wireless and fiberless technologies.
|
<urn:uuid:ec804500-5e53-493b-905f-37037dd383d6>
|
CC-MAIN-2022-40
|
https://hylan.com/smart-leaders-building-smart-cities-five-things-they-need-to-know/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00259.warc.gz
|
en
| 0.937312 | 857 | 2.59375 | 3 |
The UK’s National Cyber Security Centre (NCSC) has partnered with Girlguiding South West England to run an interactive cyber security workshop for girls aged 12-14 under the auspices of its popular CyberFirst skills scheme.
Part of the NCSC’s drive to get girls and young women interested in cyber security and increase representation in the field, the event saw 100 Guides gather at the University of West England to take part in a range of activities such as website customisation, use of big data, digital forensics and cryptography.
All the tasks in the programme are tailored to children who are soon to select their GCSE choices, and are supposed to help them understand the variety of jobs and career paths that computer science could offer. The Guides explored a number of fictional scenarios to help get across the cyber security message, including – in a somewhat timely exercise – using digital forensics and open source intelligence to track down patient zero in an infectious disease outbreak.
“It’s great to see Guides from across the South West learning about the fascinating world of cyber security, enabling them to see how worthwhile and fulfilling a career in this field can be,” said Chris Ensor, NCSC deputy director for skills and growth. “We will continue to support and encourage the UK’s next generation of cyber professionals through our world-leading CyberFirst programme, helping to attract the most diverse minds.”
“We were delighted to be working with NCSC on our first CyberFirst activity day,” said Carole Pennington, chief commissioner for Girlguiding South West England.
“Part of the ethos of Girlguiding is that girls can do anything, and events like this are key to our members being able to try out a range of activities with experts in their field. These activity days form part of the Region Swebots programme [a local scheme created by Girlguiding South West England to encourage interest in science, technology, engineering and maths, or Stem, subjects].
“The most recent resource, On the net, was produced in collaboration with NCSC and has proved to be very popular with our members of all ages,” she said. “Awareness of cyber security is vital for all our members, and we hope that many more girls will have the opportunity to take part in activity days like these which provide a fun way of learning about the topic.”
Read more about security education
- Research by the SANS Institute finds that while parents are aware of cyber security, they don’t know enough to encourage their children into cyber roles.
- More than 80 schoolgirls spent a day learning about computer hackers and rocket science – Cyber Girls First hopes they will become the next generation of technologists.
- Security industry partners have launched an initiative aimed at raising individuals’ digital safety skills to enable them to protect themselves and their families from the most common cyber attacks.
Meanwhile, the NCSC’s new-look CyberFirst Girls competition hosted its regional contests on 8 February 2020 at 18 venues around the UK. Now in its fourth year, the popular competition attracted entries from over 12,000 girls at more than 520 schools up and down the country.
The event, targeted at girls aged 12 and 13, saw teams of up to four take on a series of codebreaking challenges set by the NCSC and other security experts. The winners of the local contests will now move forward to a grand final to be held in Wales, where they will face off in a bid to become national champions.
More information on the CyberFirst scheme for young people can be found on the NCSC’s website.
|
<urn:uuid:12f63b56-cf07-48ed-96ca-75d368196533>
|
CC-MAIN-2022-40
|
https://www.computerweekly.com/news/252478778/Girlguiding-hosts-interactive-cyber-security-workshop
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00259.warc.gz
|
en
| 0.95176 | 759 | 2.546875 | 3 |
Cybercriminals have launched a ransomware attack against the parliament of Bosnia and Herzegovina which, according to reports, has brought critical parliamentary activity to a complete standstill.
The website for the parliament has been rendered completely inoperable, while MPs have been told not to even turn on their computers. But the consequences of this attack are far greater than just digital downtime.
While these services are down, parliament workers are unable to perform their jobs, which will have a knock-on effect on other services and society.
This attack follows a string of major ransomware attacks on governments recently, with Albania, Montenegro and Costa Rica all coming under assault.
It is time that governments work to improve their defences against cybercrime, because they are very clearly one of today’s prime targets.
One of the best ways to achieve this is by implementing better control over network access.
We all know credentials offer criminals the keys to the digital kingdom. But if organisations encrypt access so that employees never know their own credentials, those credentials cannot be stolen or phished.
This closes important doors on attackers, and also gives government organisations back control over their data.
Information Security Buzz (aka ISBuzz News) is an independent resource that provides expert comments, analysis and opinion on the latest information security news and topics
|
<urn:uuid:ed78ad97-2ab2-44c1-b533-535af3dcd6f6>
|
CC-MAIN-2022-40
|
https://informationsecuritybuzz.com/expert-comments/bosnia-and-herzegovina-cyberattack/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00459.warc.gz
|
en
| 0.954023 | 294 | 2.515625 | 3 |
How much is your data worth to you? For victims of ransomware this is no longer a rhetorical question but painfully real. Ransomware locks or encrypts data on a device and then demands a ransom for the key to release it. It is a comparatively lucrative business for cybercriminals: while the majority of traditional attacks involve seizing data and then finding ways to cash that data in, with ransomware they can earn the money at once. Victims feel the effect of cybercrime very directly.
Our security analysts predicted ransomware would grow fast in 2016, and it seems that they were right. The first quarter of 2016 saw a huge spike in ransomware samples as the technique became fashionable among malware writers. These attacks are also expanding into new markets; one of the most recent, shockingly, took place against the healthcare industry, with a hacker stealing 9.2 million US medical records. However, ransomware still represents just a small share of the total number of malware samples we detect.
The massive publicity for ransomware is giving rise to a worrying misconception. There seems to be a belief that the IT security industry can’t stop ransomware. But this is wrong. First, the detection rates for ‘cryptors’ are as high as for any other type of malware. Modern security solutions can even detect unknown attacks by analysing the behaviour of an executed file. Second, the vast majority of these attacks rely on rather classic malware technology and are therefore easy to block. Only a very small number of samples have been found to be using more elaborate techniques in an attempt to avoid detection by security software. So from a security point of view, we can say that ransomware is not that different to other malicious software.
There are a number of reasons behind its growing popularity. As mentioned before its success lies in its very direct approach. As a criminal, you infect a machine and get money for disinfecting it. This is straightforward and doesn’t need much additional effort. With stolen credit card data you have to find a way to cash in, but with ransomware you just wait for the money to arrive.
The public awareness of ransomware can also be explained easily. Victims of ransomware attacks feel its effect far more directly and severely than they do those of other types of attacks. Your data is blocked, your device unusable, you feel totally helpless. This is very unpleasant for consumers and can cause immense hardship for organisations, for example, if a hospital is hit, as has happened several times recently. In such circumstances the motivation to pay the ransom can be very high. But we advise people and organisations not to do so, as decryption is in no way guaranteed.
For those infected, the situation is tough. The encryption algorithms are usually strong and it can be difficult or even impossible to get your data back. The ‘No Ransom’ project initiated by Kaspersky Lab and the Dutch Police collects decryption keys and is able to help many victims – although not all of them.
But it doesn’t have to reach this point. The malware’s infection vectors are classic: malicious advertising, malware planted in websites, and infected email attachments and social networks. Modern security technologies can protect users and businesses from that. These days, internet security software has technologies like exploit protection, URL filtering, emulators and cloud technologies which protect users from known and unknown threats. We advise users of our own products to turn on the System Watcher component and Kaspersky Security Network. This should be complemented by a mitigation strategy, including regular backups and software updates. Users should also be alert to the kind of things to look out for.
And what will the future bring? It is hard to speculate. We have seen cryptors on Android, OSX and Linux, so in theory ransomware can spread to different platforms and devices. But in the end the question is where do criminals expect to earn the most money? And currently Windows and Android are the most lucrative platforms.
|
<urn:uuid:7cefaf09-8bc5-41d0-b4dc-d89b6c6f4bef>
|
CC-MAIN-2022-40
|
https://informationsecuritybuzz.com/articles/rise-ransomware-time-stop-paying/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00459.warc.gz
|
en
| 0.952351 | 864 | 2.625 | 3 |
Installing NodeJS on a Raspberry Pi can be a bit tricky. Over the years, the ARM based processor has gone through several versions (ARMv6, ARMv7, and ARMv8), in which there are different flavors of NodeJS to each of these architectures.
Depending on the version you have, you will need to manually install NodeJS vs grabbing the packages via a traditional apt-get install nodejs.
Step 1: Validate what version of the ARM chipset you have
First let's find out what ARM version you have for your Raspberry Pi. To do that, execute the following command:
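uname -m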
You should receive something like: armv6l
Step 2: Find the latest package to download from nodeJS's website
Navigate to https://nodejs.org/en/download/ and scroll down to the latest Linux Binaries for ARM. Right click and copy the address of the build that matches your processor's architecture. For example, if you saw armv6l, you'd copy the download for ARMv6
Step 3: Download and install nodeJS
Within your SSH/console session on the Raspberry Pi, change to your local home directory and execute the following commands (substituting in the URL you copied in the previous step). For example:
cd ~
wget https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-armv6l.tar.xz
Next, extract the tarball (substituting in the name of the tarball you downloaded in the previous step) and change the directory to the extracted files
tar -xvf node-v8.11.3-linux-armv6l.tar.xz
cd node-v8.11.3-linux-armv6l
Next, remove a few files that aren't used and copy the remaining files to /usr/local (prefix the cp command with sudo if you aren't running as root, since /usr/local is usually root-owned)
rm CHANGELOG.md LICENSE README.md
cp -R * /usr/local/
Step 4: Validate the installation
You can validate that you have successfully installed NodeJS by running the following commands to return the version numbers for NodeJS and npm
node -v
npm -v
That's it! Have fun!
|
<urn:uuid:6b6417aa-cfca-42e0-bbdb-231717f0391a>
|
CC-MAIN-2022-40
|
https://jackstromberg.com/tag/nodejs/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00459.warc.gz
|
en
| 0.823789 | 487 | 2.546875 | 3 |
Engineers working on a chip-enabled soccer ball are optimistic about the technology being used at the FIFA (Federation Internationale de Football Association) World Cup soccer tournament in Germany next year.
“We’ve been testing the technology at the main soccer stadium in Nuremberg for some time and more recently in an under-17 FIFA tournament in Peru,” said Gunter Rohmer, director of performance-optimized systems at the Fraunhofer Institute for Integrated Circuits in Erlangen, Germany. “The technology has performed well, and we’re pretty optimistic that it will be used at the games in Germany next year.”
FIFA has shown interest in the technology — largely to help referees make crucial goal-line calls — but has yet to make a final decision. The radio-based tracking system could also be used to determine whether or not a ball has gone out of bounds, to compile statistics about individual players and more, said Rohmer, in an interview at the Systems IT exhibition and conference in Munich.
The chip-enabled soccer ball is being developed by German sportswear manufacturer Adidas-Salomon AG, software company Cairos Technologies AG and the Fraunhofer Institute.
The technology is based on an ASIC (application-specific integrated circuit) chip with an integrated transmitter to send data, according to Rohmer. The chip is suspended in the middle of the ball to survive acceleration and hard kicks via a system developed by Adidas. Rohmer was unable to provide information about the Adidas system.
Similar chips, but smaller and flatter, have been designed to insert into players’ shin guards, he said.
At the Nuremberg stadium, 12 antennas in light masts and other locations distributed around the arena collect data that is transmitted from the chips. The antennas are linked to a high-speed fiber optic ring, which routes data to a cluster of Linux-based servers.
The chips use the same 2.4GHz unlicensed frequency band used by Wi-Fi systems, according to Rohmer. “In our tests, we have noticed that although no Wi-Fi systems have interfered with our technology, our technology has caused some interference with Wi-Fi systems in isolated cases,” Rohmer said. “We are looking at ways to avoid any possible interference because we know that Wi-Fi will be used at the games.”
FIFA aims to test the technology later this year at another tournament in Japan before ultimately deciding whether or not to introduce it in all 12 stadiums in Germany selected to host next year’s World Cup.
“Even if the technology is very accurate, it’s not perfect — no technology is,” said Rohmer. “Our technology is meant to be an aide. Ultimately, the decision whether or not to call a goal will still be up to the referee.”
The Systems event runs through Friday.
By John Blau – IDG News Service (Dusseldorf Bureau)
|
<urn:uuid:58a94f0d-dbe0-461d-a513-b1fa5dbaaebc>
|
CC-MAIN-2022-40
|
https://www.cio.com/article/252363/consumer-technology-chip-enabled-ball-at-2006-world-cup.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00459.warc.gz
|
en
| 0.925045 | 668 | 2.515625 | 3 |
The tiny robots, a millionth of a millimeter in size, are programmed to move and build molecular cargo using a tiny robotic arm. Scientists from the University of Manchester have created the world's first 'molecular robot' that is capable of performing basic tasks, including building other molecules.
The robots operate by carrying out chemical reactions in special solutions, controlled and programmed by scientists to perform basic tasks.
Each individual robot is capable of manipulating a single molecule and is made up of just 150 carbon, hydrogen, oxygen and nitrogen atoms. To put that size into context, a billion of these robots piled on top of each other would still be the same size as a single grain of salt.
In the future, such robots could be used for medical purposes, advanced manufacturing processes, and even for building molecular factories and assembly lines.
Professor David Leigh, who led the research at the University's School of Chemistry, explains: "All matter is made up of atoms, and these are the basic building blocks that form molecules. Our robot is literally a molecular robot constructed of atoms, just like you can build a very simple robot out of Lego bricks. The robot then responds to a series of simple commands that are programmed with chemical inputs by a scientist."
It is similar to the way robots are used on a car assembly line. Those robots pick up a panel, position it, and rivet it in the correct way to build the bodywork of a car. Just like the robot in the factory, the molecular version can be programmed to position and rivet components in different ways to build different products, on a much smaller scale at the molecular level.
Miniaturization of machinery
The benefit of having machinery that is so small is that it massively reduces demand for materials, can accelerate and improve drug discovery, reduce power requirements, and rapidly increase the miniaturization of other products. The potential applications for molecular robots are therefore extremely varied and exciting.
Molecular robotics represents the ultimate in the miniaturization of machinery. The team's aim is to design and make the smallest machines possible, and this is just the start: within 10 to 20 years, molecular robots could begin to be used to build molecules and materials on assembly lines in molecular factories. Building and operating such a tiny machine is extremely complex, but the techniques used by the team are based on simple chemical processes.
The robots are assembled and operated using chemistry: the science of how atoms and molecules react with each other, and how larger molecules are constructed from smaller ones.
It is the same sort of process scientists use to make medicines and plastics from simple chemical building blocks. Once constructed, the nano-robots are operated by scientists adding chemical inputs that tell them what to do, just like a computer program.
|
<urn:uuid:96eec03f-b924-45ce-a7c6-efa2667c56ff>
|
CC-MAIN-2022-40
|
https://areflect.com/2017/09/21/molecular-robot-capable-of-building-molecules/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00459.warc.gz
|
en
| 0.919532 | 536 | 4.15625 | 4 |
Researchers poring over brain scans may soon have an easier time integrating that data with information about the genes and proteins that make brain cells tick.
A software vendor and a nonprofit group are teaming up to create NeuroCommons.org, a free, shared repository of data and other tools to speed research on brain function and disease.
Informatics company Teranode will provide an infrastructure and means to store disparate data in common formats. Science Commons, a project of the nonprofit corporation Creative Commons, will develop a community of users and experts, plus work to help create an intuitive interface to find and analyze content.
Science Commons is housed at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Teranode and Science Commons announced the partnership on Monday and plan to launch NeuroCommons.org in the second half of 2006.
There’s a real need for a shared platform in neurology, said John Wilbanks, executive director of Science Commons. Separate research foundations exist to fund different rare diseases, but they cannot share information without running afoul of technical and legal complications.
One hope is that researchers can gather preliminary evidence for their hypotheses using other researchers’ datasets. NeuroCommons.org should also allow researchers to readily compare proposed mechanisms about what, how, and when various genes and proteins interact.
Neurologists would use an interface much like a Web search engine, but instead of finding relevant Web sites, they would be able to find other researchers’ datasets and protocols, as well as working models of how genes, proteins and brain regions interact.
Even better, NeuroCommons.org could automate such tasks and analyze the results. Researchers would not need to spend days doing literature searches or hunting with several available databases for useful data, said Matthew Shanahan, CMO for Teranode. That’s especially important as the number of proteins and genes associated with diseases swells. “The thought that a scientist can do that manually efficiently doesn’t make sense; you really need the aid of software now.”
Teranode is responsible for figuring out how to get data from widely varying sources, from brain scans to gene chips, into a format that can be searched intuitively. Because XML is “insufficient to represent the complex data of life sciences,” another mark-up language, RDF (Resource Description Framework), is used to support what is being called the “semantic Web,” said Shanahan.
“All the Web can do is find a document for you and display it for you,” said Wilbanks. “The semantic web marks things up in a more concrete manner; it says that there are relationships.” For example, a scientist could search for peer-reviewed articles about a particular gene, data related to that gene, or models about how that gene might affect other genes and proteins.
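To make the idea concrete, here is a minimal sketch in Python using the rdflib library; the namespace, property names, and data are invented for illustration and are not NeuroCommons’ actual schema.

from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace, purely for illustration
NC = Namespace("http://example.org/neurocommons/")

g = Graph()

# State relationships, not just documents: a gene, a disease association,
# and a dataset that measures the gene's expression
g.add((NC.BDNF, RDF.type, NC.Gene))
g.add((NC.BDNF, NC.associatedWith, NC.Depression))
g.add((NC.dataset42, NC.measuresExpressionOf, NC.BDNF))
g.add((NC.dataset42, NC.label, Literal("Microarray study, hippocampus")))

# Find every statement that points at the gene, whatever its type
for subject, predicate, _ in g.triples((None, None, NC.BDNF)):
    print(subject, predicate)

Because the relationships are explicit, the same graph can answer questions about datasets, associations, or models without knowing in advance which documents mention the gene.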
Neurocommons.org is set up to be maintained by its community of users. Researchers will be able to annotate each others’ data.
Wilbanks hopes that, eventually, researchers will see contributing information to the semantic Web as part of their scientific duty, much like peer review. But he admits that it isn’t yet part of scientific culture. “It’s hard to get someone to take the time to say, ‘I’m going to make my data reusable by someone that doesn’t know me.’ ”
He believes scientists will be converted once NeuroCommons.org demonstrates that it can help ask new kinds of questions.
|
<urn:uuid:2ba387ed-2a21-4179-a89f-ef7c04493e44>
|
CC-MAIN-2022-40
|
https://www.cioinsight.com/case-studies/new-brain-trust-to-work-like-the-web/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00459.warc.gz
|
en
| 0.916964 | 765 | 2.921875 | 3 |
Data governance is a data management concept that addresses the risk versus value of data across an organization from ingestion to analysis. Data governance is policy-driven to manage regulatory compliance, data quality, and data access. Good data governance means getting the right data to the right people at the right time to drive faster time to insights.
Data governance policies and procedures practices are underpinned by intelligent technology. Examples of technology enablers to enforce said policies include metadata-driven data lineage tracking, automated masking of sensitive information, role-based access to information to ensure trusted data is available to trusted data consumers.
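As an illustration of how policy-driven masking and role-based access might look in code, here is a minimal Python sketch; the roles, field names, and masking rule are invented examples rather than any specific product's implementation.

import re

def mask_ssn(value: str) -> str:
    # Keep only the last four digits visible, e.g. "***-**-6789"
    return re.sub(r"\b\d{3}-\d{2}-(\d{4})\b", r"***-**-\1", value)

# Each role maps to the view of the record permitted by policy
ROLE_VIEWS = {
    "analyst": lambda rec: {**rec, "ssn": mask_ssn(rec["ssn"])},  # masked view
    "compliance": lambda rec: rec,  # trusted, audited role sees raw data
}

record = {"name": "J. Doe", "ssn": "123-45-6789"}
print(ROLE_VIEWS["analyst"](record))     # sensitive field masked
print(ROLE_VIEWS["compliance"](record))  # full record for trusted consumers

The point of the sketch is that the policy, not the individual consumer, decides which version of the data each role receives.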
Data governance is focused on mitigating risk while improving data accuracy. Example use cases include GDPR and CCPA compliance for data privacy, role-based access to information to foster collaboration and self-service for data consumers and masking sensitive financial or medical information. Good data governance is growing in importance as demand for multisource data and associated insights is growing.
Data governance is one of the tenets of a DataOps practice. DataOps (intelligent data operations) is a methodology: a technological and cultural change to improve your organization's use of data through better collaboration and automation. That means improved data trust and protection, shorter cycle time for your insight's delivery, and more cost-effective data management.
|
<urn:uuid:6c5e6b1e-0d0f-4b42-b08f-152598d23de1>
|
CC-MAIN-2022-40
|
https://www.hitachivantara.com/en-asean/solutions/data-management/what-is-data-governance.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00659.warc.gz
|
en
| 0.900101 | 308 | 2.765625 | 3 |
A network TAP (test access point) is a monitoring device that mirrors the traffic that is passing between network nodes. A TAP is a hardware device inserted at a specific point in the network to monitor specific data. As an essential part of the Gigamon Hawk Deep Observability Pipeline, network TAPs acquire traffic to provide the visibility required to secure, monitor and manage your enterprise's network infrastructure continuously and efficiently.
How Taps Work
A TAP monitors one network connection and generally comes with four ports: network A port, network B port, monitor A port and monitor B port. TAPs allow network traffic to flow through the device’s network A and B ports uninterrupted, while simultaneously copying the same data to the monitor A and B ports. The monitor ports can feed tools such as VoIP recording devices, network intrusion detection and prevention systems, network analytics, protocol analyzers, packet sniffers and traffic aggregation/packet broker systems.
|
<urn:uuid:4e259fd8-be27-4dcd-8a6f-c428f6b4cf20>
|
CC-MAIN-2022-40
|
https://www.gigamon.com/cn/products/access-traffic/network-taps/g-tap-m-series.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00659.warc.gz
|
en
| 0.911224 | 194 | 2.875 | 3 |
There’s no shortage of sustainable activity going in the data center space. Hyperscalers such as Amazon, Microsoft, and Google have made carbon-neutral commitments and have made significant investments in sourcing renewable energy for their facilities. And in 2021 alone, the likes of ChinData, MTN, and IBM have made similar pledges to become carbon neutral before 2040.
However, much of the conversation is still around operational sustainability and ensuring the facilities use as little power as possible, use green energy where they can, and have minimal or even carbon negative impacts on the local area through district heating initiatives and natural cooling.
But is enough thought being given to the environmental impact of the construction phase around data centers and the material impact of the construction materials they use? Data centers use huge amounts of concrete and steel, which are major sources of CO2, and as the sustainability gains from operational efficiencies dry up, firms will have to look to embodied carbon in the construction phase if they are serious about being climate neutral.
“Until now, the modern green building movement has largely focused on reducing operational energy – the energy used to heat, cool, and power buildings – which is easy to see and measure,” says Stacy Smedley, Chair and Executive Director of Building Transparency.
“While this effort has produced many successes, it’s not enough.”
Embodied carbon the next phase
Embodied carbon is the sum of all the greenhouse gas (GHG) emissions resulting from the mining, harvesting, processing, manufacturing, transportation and installation of building materials, and is a major source of carbon globally.
Cement and steel are the most carbon-intensive construction materials, two materials data centers use in abundance. On average, a ton of cement will produce 1.25 tons of CO2, largely from the roasted limestone and silica. As a result, buildings are responsible for around 40 percent of annual global greenhouse gas emissions and 40 percent of all raw material use.
Building Transparency says embodied carbon makes up half of a building’s total carbon emissions, but with data centers, the fact they are energy-intensive powered shells changes the equation slightly. The operational carbon impact of a data center can be more than twice the embodied carbon impact, according to Michael Riordan, Managing Director of Linesight UK, and focus has historically been on the operational side more than the embodied carbon aspect.
However, due to their large size and ever-increasing number, it’s not a topic that should be avoided just because there are more savings to be had in operations.
“At some point, most energy is going to come from wind, solar, and other renewable types of sources,” says Rob Ioanna, Principal at Syska Hennessy, “And when that happens, the conversation is starting to turn to embodied carbon because that's really going to be where we're going to have emissions reductions.”
Much of the construction considerations around sustainability still revolve around the impact of operations; how to cool the servers in the most energy-efficient way, what to do with excess heat IT hardware generates, whether the facility uses renewable power, or if there more sustainable options for backup than diesel generators.
These are important considerations, but IT hardware continues to evolve and become more efficient, and energy grids rely more on renewable energy, the sustainability gains and carbon reduction companies are looking to make will be harder to come by through operational efficiencies alone.
“As we start to reduce our operational emissions and the energy grid start to get cleaner, emissions of the materials that we're building with on our construction projects actually become a larger source of emissions,” says Smedley. “For some of these large data center owners that are already purchasing 100 percent green energy for some of their markets and projects, they'll already view themselves as carbon neutral on the energy side, and for those owners, the embodied carbon emissions are really what's left to tackle.”
The fact that data centers are often fairly standardized in their construction also means once low-carbon practices and standards have been established at one facility it should be easy to replicate across future facilities without too much heavy lift.
Slow progress in CO2 reduction
Linesight’s Riordan says that concrete often accounts for as much as 40 percent of a data center's construction, followed by fuel (~25 percent) and then steel – both reinforcement and structural – which can account for 10 percent of a project’s carbon footprint each. He adds that adopting low carbon approaches to new builds can result in 13 percent less carbon during construction, but repurposing a building saves a lot more.
Building a new facility creates eight times as much carbon as repurposing, so upgrading an old building can save 78 percent of the carbon emissions of construction. The Global Cement and Concrete Association has committed to zero emissions concrete by 2050. And while there will be no silver bullet to reach that goal, there are a number of startups and trends in the materials space looking to reduce the carbon impact of this core building material.
CarbonCure reduces the emissions of the concrete industry, by injecting waste CO2 into the mix. It hopes to remove 500 megatons of carbon dioxide annually from the concrete industry by 2030. Compass Data Centers are a CarbonCure customer, with CIO Nancy Novak saying the company estimates an average of 1,800 tons of CO2 per campus as a result. Amazon and Microsoft have also invested in the company. Novak tells DCD that Compass are also looking into other embodied carbon technologies for aggregate.
In terms of the spoil or ground that is raised during a data center’s construction, Novak says Compass will process it so it can be used for structural fill whenever possible, and will check if other projects in the vicinity of the site need clean fill before hauling it. If the spoils are unsuitable for structure, Compass will often make berms and natural landscapes to enhance security and add to the amount of green space on a campus. In 2018, AWS used 100,000 tons of spoil from its Stockholm data center to raise the altitude of the Vilsta ski resort by ten meters.
She adds that more offsite construction is key to reducing impact. “We need to be thinking in the mindset of manufacturing, where transportation and utilization of local materials as well as sustainable materials, is paramount.”
“Everything from advanced work packaging, prefabricated components and fully modularized rooms and buildings, needs to be more widely adopted and normalized in the construction industry.”
A number of architecture and design firms tell DCD that broadly we are still very early in the conversation around embodied carbon, but progress is being made slowly as awareness of the issues increases. There are clients that engage their sustainability teams really early on in the design process, says Todd Boucher, Principal & Founder, Leading Edge Design Group (LEDG), “and in other cases where the construction is more driven from that mission-critical sort of viewpoint, we find ourselves trying to weave in the sustainability discussion around how we could help improve the net environmental impact without an impact on reliability. But I don't think the conversation has extended far beyond efficiency into embedded carbon and sustainability. ”
There are increasing examples of companies looking to green materials; Digital Realty announced it was using ‘sustainable materials, including recycled concrete and steel’ in its 430,000 square foot (40,000 sqm), four-story expansion of its Santa Clara campus in California.
“Transportation from manufacturing yards to sites is a massive part of the carbon footprint on construction,” says Ashley Buckland, managing director at JB Associates. “Big companies are now looking at transportation and where materials or parts are coming from, and if they can source local materials or workers they will.”
“We’ve seen materials, such as carpets made from recycled bottles becoming more prevalent in recent years,” says Adrian Brewin, co-founder of Reid Brewin Architects, “and clients are more conscious of the origins of materials such as tiling – opting for local suppliers rather than exotic ones.”
There’s also a regular stream of news about eco-bricks being made from novel recycled or organic materials. Most recently, bricks made from mushrooms and sawdust were shown in London, but there are others made of everything from construction and demolition waste to loofahs or reused water bottles. However, no one DCD spoke to knew if some of these innovations were ready for prime time.
“Increasingly novel, innovative, recycled, and organic materials are likely to be used in different aspects of large-scale construction within the next [10-20] years,” says Brewin, “but the regulatory systems that are required to approve the industrial processes needed to produce such solutions, to the sheer volume we would need, will likely take tens of years to implement.”
And just because data centers could be built, or even 3D printed, from such novel materials doesn’t mean many firms will be willing to take the financial or resiliency risks to use them in construction.
“Owners adopt low-risk mindsets,” says LEDG’s Boucher. “If materials may not be proven enough yet, it would be a challenge to implement them in a data center environment that has any form of mission criticality.”
This low risk mindset also means resilience and redundancy are higher on the list of priorities than sustainability. However, as companies move more towards fewer centralized facilities and more Edge data centers and availability zones, the environmental impact per site lowers. This creates more opportunity to introduce greener thinking into smaller sites.
“We are an industry that has rightfully been completely focused on availability and reliability. And that has governed most decision-making. But with the utilization of hybrid cloud models and the geographic diversification of data centers, we are moving away from the idea that a data center has to be this huge monolithic Tier IV facility,” he says.
“And because of that softening of the perspective, we now can integrate different elements about energy efficiency and carbon neutrality into the conversation that are not solely focused on reliability, even if that still always has to be the number one part of the conversation.”
Obviously utilizing existing buildings where possible is more sustainable than building new. Serverfarm calculates that reused existing buildings can deliver embodied carbon savings of 88 percent compared with the material carbon cost of new projects.
Design firm HKS analyzed Serverfarm’s 25MW, six-story 150,000 square feet (14,000 sq m) Chicago facility, and found that the carbon cost of building an equivalent building would create over 9,000 tons of carbon emissions, compared to 1,000 tons for building reuse and expansion. Almost all of that saving would come through the reduction in concrete.
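As a quick sanity check, the saving implied by those figures can be computed directly; this rough Python sketch uses the rounded tonnage numbers quoted above.

# Embodied CO2 from the HKS analysis of the Chicago facility
new_build_tons = 9_000  # equivalent new building
reuse_tons = 1_000      # building reuse and expansion

saving = 1 - reuse_tons / new_build_tons
print(f"Embodied carbon saving from reuse: {saving:.0%}")  # ~89%, in line with the 88% figure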
Data and benchmarks are needed for sustainable construction
While there are standards & certifications around sustainable buildings – LEED is probably the most well-known – they often focus on the entirety of a building’s lifecycle. Measuring and tracking embodied carbon and the environmental impact of construction and building materials can be difficult, and so creating effective benchmarks from which to measure yourself and others against, and start to make a change, hasn’t historically been easy for companies.
“Embodied carbon is inherently more difficult to measure and track,” says Linesight’s Riordan. “Data is undoubtedly fundamental in quantifying, understanding and reducing the environmental impact within the construction space. What gets measured gets managed.”
LEDG’s Boucher says that when we have a more standardized common language and data, it might then be possible to create something akin to a PUE against which firms could quantify and benchmark their embodied carbon impact.
There are a number of projects and lifecycle tools looking at how to better measure, understand, and reduce the embodied carbon of construction projects. The EU-funded BAMB project is working on Materials Passports that can help organizations understand the provenance of materials and become more comfortable choosing recycled materials.
Another project is the Embodied Carbon in Construction Calculator (EC3) from Building Transparency. EC3 draws on Environmental Product Declarations (EPDs), third-party verified disclosures that detail information around carbon impact, including kilograms of carbon per unit, and collates them in a free and open source tool in which users can input their own materials use during a planned construction project.
It then creates what is essentially a bill of materials and cost estimate in terms of carbon impact per item, allowing users to easily see the environmental impact of their material choices, but also see if there are more sustainable alternatives available.
The hope is not only that firms involved in choosing and procuring materials will opt for greener choices, but that the firms supplying materials will be forced to make all their products greener as a result of losing business.
“Having these owners start to ask for lower-carbon products means the manufacturers start to create lower-carbon products, which then the whole industry benefits from.”
Smedley says one of the goals of EC3 is to make a tool that is easy to use and allow it to be integrated easily into existing processes without heavy lift and that people who aren’t experts in building lifecycles or carbon footprints can quickly use and understand. Early benchmarks from the Carbon Leadership Forum have been released for companies to rate themselves against, and Smedley says companies are already seeing a 30 percent carbon reduction against that high benchmark through using the EC3 tool.
“It helps specifiers start to develop designs limits or benchmarks, from a carbon perspective, and these specifications then get into the bid documents.”
Microsoft was an early pilot partner of the tool, and the company has been using it in its Washington campus remodel as well as its data centers in order to choose lower carbon building materials.
As well as Microsoft, Turner Construction, and Mercury - US and European construction firms with large data center practices - are both pilot partners for the EC3 program. Smedley says it's important that data center firms, especially the hyperscalers, take an interest in reducing their embodied carbon footprint due to the sheer number of facilities they run, both data center and otherwise. Likewise, while enterprise data centers may be smaller in number and size than hyperscalers and colo providers, those companies often have large commercial real estate footprints they can transfer sustainable thinking to and from and likely can impact their construction supply chains.
“In the data center space, a lot of the players are very large companies that have a ton of other types of projects they're building where they can really kind of spur the market and benefit all of their building types by getting to these lower-carbon materials. It's the big players that lead and make it easier for the smaller ones to see they can implement it without much risk or cost.”
The next step, Smedley says, will be looking at mechanical systems, and the carbon impact of materials such as generators with the hope of reducing the emissions from their manufacture.
Carrot and the stick: money talks
While the hyperscalers and largest companies are already committing to carbon-neutral pledges, financial incentives – whether carrot or stick – from government and investors might be required to get smaller firms and those more focused on returns to come on board.
“Ultimately, I believe the only driver is a financial one,” says Brewin of Reid Brewin Architects. "Consideration and effort to reduce the carbon impact of data center construction occurs within the limits of local regulations and occasional certification requirements. Anything more is for political gain or extremely limited, and the only way to encourage sustainable thinking is to make regulatory changes.”
“People don’t like change, and unless they can see a clear business payback for [sustainability] then they don't sign up to it,” says Ashley Buckland, managing director, JB Associates, “There are not enough incentives there.”
In terms of sticks, there are numerous carbon cap regulations coming out of the EU, and New York City also has new carbon cap regulations on the books.
At the same time, investors BlackRock are beginning to require carbon reporting from the firms it invests in.
“There's a big ESG [environmental, social, and corporate governance] push in investment money and there's a lot of laws that are coming out that are putting carbon caps on what you can emit,” says Rob Ioanna, principal at Syska Hennessy. “Those two trends of money putting pressure on companies coupled with government incentives or pushes combining together will probably do some good.”
Meanwhile, Building Transparency’s Smedley says the financial incentive carrot is already there, as less carbon-intensive materials are often cheaper because they have lower manufacturing and processing costs. She also advises firms to get ahead of the game and get up to speed before it becomes a regulatory requirement.
“This is coming as policy. It might not be tomorrow but it might be three years from now. You might as well get your feet wet and just understand what that means before it is potentially a mandatory thing.”
“Every step that we can take, however incremental, is important,” says LEDG’s Boucher. “If, as an industry, we're ignoring the impacts of that the construction piece, then I think that we're really doing a disservice in our commitment towards sustainability.”
Education and transparency are key
Everyone DCD spoke to said one of the most important things any firm in the data center industry can do to encourage more sustainable thinking in the construction phase is to educate and engage with stakeholders on the topic.
Design, engineering, construction, procurement, and sustainability teams should all be asking each other how to make these facilities greener; are there more sustainable materials options, is there scope to use recycled material, is everything being sourced locally where it makes sense, is everything as standardized and modular as it can be? Asking these questions will at least start conversations and may surface more sustainable options. At the same time, firms need to be open in order to share what they’ve learned.
“Hyperscaler type companies have brought transparency into the market around the types of designs that they use, like the open compute project,” says LEDG’s Boucher. “Those have been really transformative for the industry, not only in providing reference points for design but demonstrating the importance of collaboration and transparency. Continuing to cultivate that type of transparency in our industry will be important to encourage sustainability.”
“Every company has a responsibility,” adds Buckland of JB Associates. “Anyone working in the data center world needs to be sending the message out how they're trying to reduce their carbon footprint. They should share those initiatives ideas, it’s not something that any company should keep as their little black book.”
|
<urn:uuid:3f90517e-7f97-4d5a-a8d3-fa65855fe9d7>
|
CC-MAIN-2022-40
|
https://www.datacenterdynamics.com/en/analysis/sustainable-data-centers-require-sustainable-construction/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00659.warc.gz
|
en
| 0.956872 | 3,978 | 3.171875 | 3 |
The term “kill chain” was introduced by the military to describe the steps used to attack a target. Later, in 2011, Lockheed Martin published a paper that defined the concept of the “Cyber Kill Chain.” Reportedly, the paper was prepared with the help of the Computer Security Incident Response Team (CSIRT). Like the military kill chain, the Cyber Kill Chain describes the steps employed by cybercriminals in cyber-attacks. Once the SOC team or security professionals have a clear understanding of each step in the Cyber Kill Chain, they can effectively prevent, detect, and/or stop a cyber-attack at each of these stages. According to SANS Security Awareness, the Cyber Kill Chain model involves the following 7 steps:
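1. Reconnaissance
2. Weaponization
3. Delivery
4. Exploitation
5. Installation
6. Command and Control (C2)
7. Actions on Objectives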
Cybersecurity threats are evolving faster and becoming more sophisticated than the defensive enhancements organizations are making. Under such circumstances, it is essential to understand the real behavior of cybersecurity threats and threat intelligence: for example, how a cyber-attack executes, what steps are involved, what the consequences are, and so on. To describe the behavior of a cyber-attack, the Cyber Kill Chain breaks it into the steps listed above. At each step, SOC teams apply security controls to prevent and detect the cyber-attack before it infiltrates the corporate network and inflicts damage.
SBS Cybersecurity defines how security tools can be deployed to each stage of the Cyber Kill Chain. Below are some details:
Reconnaissance: At this stage, to detect an attack, a SOC team can use web analytics, threat intelligence, and a network Intrusion Detection System (IDS). To deny the attack, they can establish an information-sharing policy, firewalls, and access control lists.
Weaponization: To detect an attack, a SOC team uses endpoint malware protection. On the other hand, a Network Intrusion Prevention System (IPS) is used to deny the attack.
Delivery: To detect an attack, a SOC team employs endpoint malware protection while several security controls are deployed to deny attacks such as change management, host-based Intrusion Prevention System (IPS), proxy filter, and application whitelisting. Moreover, an inline antivirus program is also used to disrupt the attack. Queuing is used to degrade attackers and attack is contained through router access control lists, app-aware firewall, trust zones, and inter-zone Network Intrusion Detection System (IDS).
Exploitation: To detect an attack, a SOC team uses endpoint malware protection and host-based Intrusion Detection System (IDS). To deny an attack, patch management and secure password are used. The attack can be contained through an app-aware firewall, trust zones, and inter-zone Network Intrusion Detection System (IDS).
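One way a SOC team might encode this stage-to-control mapping for playbooks or tooling is as a simple lookup structure. The Python sketch below uses simplified labels drawn from the SBS guidance summarized above; it is an illustration, not an official taxonomy.

# Example controls per kill-chain stage, keyed by defensive action
KILL_CHAIN_CONTROLS = {
    "reconnaissance": {
        "detect": ["web analytics", "threat intelligence", "network IDS"],
        "deny": ["information-sharing policy", "firewall", "access control lists"],
    },
    "weaponization": {
        "detect": ["endpoint malware protection"],
        "deny": ["network IPS"],
    },
    "delivery": {
        "detect": ["endpoint malware protection"],
        "deny": ["change management", "host IPS", "proxy filter", "application whitelisting"],
        "disrupt": ["inline antivirus"],
        "contain": ["router ACLs", "app-aware firewall", "trust zones"],
    },
    "exploitation": {
        "detect": ["endpoint malware protection", "host IDS"],
        "deny": ["patch management", "secure passwords"],
        "contain": ["app-aware firewall", "trust zones", "inter-zone network IDS"],
    },
}

def controls_for(stage: str, action: str) -> list:
    # Look up candidate controls for a given stage and defensive action
    return KILL_CHAIN_CONTROLS.get(stage, {}).get(action, [])

print(controls_for("delivery", "deny"))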
|
<urn:uuid:e8e9d428-af22-4b60-b091-97b69c4943be>
|
CC-MAIN-2022-40
|
https://www.logsign.com/blog/how-cyber-kill-chain-can-be-useful-for-a-soc-team/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00659.warc.gz
|
en
| 0.916826 | 624 | 3.046875 | 3 |
Though the common vernacular is “The Cloud,” the truth is, there are multiple cloud environments and providers available to organizations looking to utilize this growing technology. Read on to learn about the different types of cloud environments, and the biggest security obstacle each presents.
Terminology in cloud computing is growing almost as rapidly as the technology. The following list outlines the important differences between the most common types of cloud deployments:
- Private Cloud – Private clouds are created for a single organization, either internally or by a third-party service.
- Public Cloud – Public clouds are created for use by multiple organizations. For example, Amazon Web Services (AWS) is a public cloud utilized by many businesses.
- Community Cloud – Like a public cloud, community clouds are used by multiple parties. Unlike a public cloud, a community cloud is a collaborative effort, in which infrastructure is shared amongst the users.
- Hybrid Cloud – A hybrid cloud environment is made up of two or more cloud types from different providers. For example, an organization could utilize both the AWS platform as well as private cloud. Hybrid environments may also refer to a combination of cloud and on-premise servers.
- Multicloud – Similar to a hybrid cloud, multicloud environments, also known as a Polynimbus cloud strategy, use multiple clouds for storage and development. However, a multicloud environment uses multiple clouds that are all of the same type. For example, an organization may engage the services of AWS, Azure, and Rackspace, which are all public clouds.
Hybrid cloud and multicloud models have become increasingly popular, as it allows organizations to mix and match to have the exact cloud arrangement that suits their needs. However, this aggravates the main problem plaguing cloud security: misconfiguration.
Outsourcing your development and data storage capabilities across different vendors is inevitably complex. Learning the ins and outs of each cloud environment, synchronizing these clouds together, and coordinating IT teams are just the beginning. With all of these balls in the air, it’s no wonder that configuring cohesive security settings often falls through the cracks.
Unfortunately, misconfigured cloud servers can lead to disastrous consequences. Breaches, data theft, compliance violations, and lost revenue are only a few of the possibilities.
A United Front
Understanding the potential dangers of misconfiguration is critical when assessing how to best approach cloud security. Cloud providers oversee the security of the cloud, but you are responsible for the security of the data that you place in that cloud. Requiring a unified security policy across your domain is the best way to ensure that misconfiguration doesn’t place your system at risk.
Cloud adoption is only growing, with Gartner analysts predicting that cloud computing will be a $300 billion business by 2021. However, Gartner also predicts that organizations using the cloud will be responsible for 95% of all cloud security issues during that time. With misconfiguration as the major catalyst to security issues, streamlined configuration simply cannot remain a manual task.
Powertech Security Auditor centralizes and automates security administration across all environments. It documents your security policy and can implement or make changes to your policy across multiple servers at the same time, manually or automatically. Security Auditor tackles consistent configuration for you, allowing your organization to make the most of your cloud environment.
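To illustrate what automated, consistent configuration checking looks like in principle, here is a minimal Python sketch of baseline-versus-actual drift detection; the policy keys, values, and server data are invented for illustration and do not represent Security Auditor's actual implementation.

# The documented security policy every environment should match
BASELINE = {"password_min_length": 14, "mfa_required": True, "public_buckets": 0}

def find_drift(actual: dict) -> dict:
    # Return each policy item where a server deviates from the baseline
    return {
        key: {"expected": expected, "actual": actual.get(key)}
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    }

servers = {
    "aws-prod-1": {"password_min_length": 14, "mfa_required": True, "public_buckets": 2},
    "azure-dev-3": {"password_min_length": 8, "mfa_required": False, "public_buckets": 0},
}

for name, config in servers.items():
    drift = find_drift(config)
    if drift:
        print(f"{name} is misconfigured: {drift}")

Running the same check against every cloud is what keeps a hybrid or multicloud estate on one policy instead of several.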
Want to learn more about preventing cloud misconfiguration?
Download "The Truth About Cloud Security" to learn more about securing your data and for an overview of the most common cloud security issues.
|
<urn:uuid:9b23cce9-8dcd-4829-a372-e4f3bed49d26>
|
CC-MAIN-2022-40
|
https://www.helpsystems.com/blog/cloud-watching-cloud-security-cloud-environment
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00059.warc.gz
|
en
| 0.934856 | 734 | 2.65625 | 3 |
September 11, 2022
Behind the scenes at locations around the world, automakers, Tier 1 suppliers and automotive startups have been running tests on autonomous cars for literally thousands of days, as they compete to achieve the coveted Level 5 fully autonomous driving capability.
Since 2010, total global investment in autonomous vehicle (AV) technologies and smart mobility has reached around $206 billion in pursuit of Level 2+ (L2+) capability. That number is expected to double for each subsequent level (L3 to L5). This is clearly very serious business. Yet there is one overwhelming challenge that every player in the market faces — including DXC: how to manage the massive amounts of data generated during testing. Those who do this successfully will gain the lead in the race to Level 5.
We have the data. Now what do we do with it?
Test vehicles can create more than 200TB of raw data during an eight-hour shift. A data collection wave of 10 cars could therefore generate approximately 2PB of data in a single day (assuming one shift per day). So we have masses of rich and informative data, but how do we offload it from the test cars to the data centers once they return to the garage?
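A quick back-of-the-envelope calculation reproduces those fleet-level numbers; this rough Python sketch uses decimal units (1 PB = 1,000 TB).

TB_PER_SHIFT = 200  # raw data per car per eight-hour shift
cars = 10
shifts_per_day = 1

daily_pb = cars * shifts_per_day * TB_PER_SHIFT / 1_000
print(f"Fleet output: {daily_pb:.1f} PB/day")  # -> 2.0 PB/day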
At urban testing centers, for example, network bandwidth can be easily scaled to ensure that the data reaches our data centers — located in North America, Europe and Asia (see map below) — especially if the data is collected in close physical proximity to those centers, or if our logistics service is included. But data collection often takes place far from data centers — resulting in expensive cross-border logistics services — or our customers decide to store the data in the cloud.
We currently have two main ways of transporting data back to a data center or cloud. Both have their own strengths and weaknesses. Until advances in technology make these challenges easier to manage, here’s what we do:
Connect the car to the data center. Test cars generate about 28TB of data in an hour. It takes 30 to 60 minutes to offload that data by sending it to the data center or local buffer over a fiber optic connection. While this is a time-consuming option, it remains viable in cases where the data gets processed in somewhat smaller increments.
In many situations the data loads are too large and the fiber connections unavailable to enable the data to be uploaded directly from the car to the data center (e.g., at geographically remote test locations such as deserts, ice lakes and rural areas). In such cases, two other approaches are used.
a) Take/ship the media to a special station. In this scenario we remove a plug-in disk from the car and either take it or ship it to a “smart ingest station” where the data is uploaded to a central data lake. Because it only takes a couple of minutes to swap out the disks, the car remains available for testing. The downside of this option is that several sets of disks need to be available, so compared to Method 1, we are buying time by spending money.
b) Central data lake is in the cloud. This is a version of the previous option, whereby the data is uploaded from a smart ingest station to a central data lake located in the cloud. The biggest challenge with this approach is cloud connection bandwidth: the current maximum bandwidth of one connection is 100 Gbps in a standard cloud offering. Using a simple calculation over a 24-hour period,1PB could theoretically be transferred to the cloud (in practice, it is half that number). As a result, we need to establish many parallel connections to the cloud. In addition, R&D car sensors now have higher resolution (4K), thereby producing greater volumes of data – quite a challenge when network costs increase significantly together with throughput scaling.
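The same kind of arithmetic shows why many parallel connections are needed; this is a rough sketch, since real-world throughput also depends on protocol overhead and contention.

GBPS = 100  # maximum bandwidth of one standard cloud connection

bytes_per_day = GBPS / 8 * 1e9 * 86_400  # bits/s -> bytes/s -> bytes/day
pb_per_day = bytes_per_day / 1e15
print(f"One 100 Gbps link: {pb_per_day:.2f} PB/day theoretical")  # ~1.08 PB

# At roughly half the theoretical rate in practice, offloading a 2 PB
# collection day needs about four parallel connections
effective = pb_per_day * 0.5
print(f"Links needed for 2 PB/day: {2 / effective:.1f}")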
Future roadmaps for data ingestion
Given ongoing research and technological advances, both data ingestion methods may very quickly become outdated, as in-car computers become capable of running their own analyses and selecting necessary data. If a test car could isolate its video on, for example, at right-hand turns at a stop light, the need to send terabytes of data back to the main data center would be alleviated, and testers could then send smaller data sets over the internet (including 5G cellular data transfer).
Another innovation would be smart data reduction, such as recording with reduced frames-per-second or reduced resolution when nothing significant is happening. In this instance, what is considered significant would need to be defined beforehand; in other words, data transfer and the data collection programs need to be strongly connected to use cases. The data cannot therefore be collected once and reused many times for different use cases (training and testing differ across algorithms and models). Smart data reduction would then occur in the car or as part of a data upload inside a smart ingest station.
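A minimal sketch of the frame-level reduction idea might look like the following; the `is_significant` predicate is a placeholder for whatever use-case-specific detector the collection program defines up front:

```python
# Naive smart data reduction: keep every frame around significant events,
# otherwise downsample to 1 frame in `keep_every`.
def reduce_frames(frames, is_significant, keep_every=10):
    kept = []
    for i, frame in enumerate(frames):
        if is_significant(frame) or i % keep_every == 0:
            kept.append(frame)
    return kept

# Example: an uneventful 100-frame clip shrinks to 10 frames.
frames = list(range(100))                            # stand-in for video frames
print(len(reduce_frames(frames, lambda f: False)))   # -> 10
```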
A longer-term technological advancement could be in sensor reduction or lossless data compression at the sensor level. Today’s sensors follow the rule “the higher the resolution, the better” (as well as “the greater the number and types of sensors, the better”). This approach – even if acceptable in a small number of R&D cars – cannot be implemented in millions of consumer vehicles.
And so we arrive at the challenge of sensor optimization to reduce the cost and amount of data. Obviously, machine learning algorithms can help with such a task, especially if neural network algorithms are combined with quantum computing to solve the problem of optimally locating and orienting the various sensors.
The data ingest challenges mentioned here are only the beginning of AD/ADAS data processing. Initial steps to control data quality or to extract metadata are frequently built into ingestion processes. However, the subsequent processing steps involving data quality, data catalog and data transformation at that scale usually occur in a data lake – a fascinating topic to further explore.
Amazon’s Virtual Private Cloud (VPC) is a foundational AWS service in both the Compute and Network AWS categories. Being foundational means that other AWS services, such as Elastic Compute Cloud (EC2), cannot be accessed without an underlying VPC network.
Creating a VPC is critical to running in the AWS cloud. Let’s take a look at:
(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)
How VPCs work: virtual networking environments
Each VPC creates an isolated virtual network environment in the AWS cloud, dedicated to your AWS account. Other AWS resources and services operate inside of VPC networks to provide cloud services.
AWS VPC will look familiar to anyone used to running a physical Data Center (DC). A VPC behaves like a traditional TCP/IP network that can be expanded and scaled as needed. However, the DC components you are used to dealing with—such as routers, switches, VLANS, etc.—do not explicitly exist in a VPC. They have been abstracted and re-engineered into cloud software.
Using VPC, you can quickly spin up a virtual network infrastructure that AWS instances can be launched into. Each VPC defines what your AWS resources need, including:
- IP addresses
- Networking functionality
Where VPCs live
All VPCs are created and exist in one—and only one—AWS region. AWS regions are geographic locations around the world where Amazon clusters its cloud data centers.
The advantage of regionalization is that a regional VPC provides network services originating from that geographical area. If you need to provide closer access for customers in another region, you can set up another VPC in that region.
This aligns nicely with the theory of AWS cloud computing where IT applications and resources are delivered through the internet on-demand and with pay-as-you-go pricing. Limiting VPC configurations to specific regions allows you to selectively provide network services where they are needed, as they are needed.
Each Amazon account can host multiple VPCs. Because VPCs are isolated from each other, you can duplicate private subnets among VPCs the same way you could use the same subnet in two different physical data centers. You can also add public IP addresses that can be used to reach VPC-launched instances from the internet.
Amazon creates one default VPC for each account, complete with:
- Default subnets
- Routing tables
- Security groups
- Network access control list
You can modify or use that VPC for your cloud configurations or you can build a new VPC and supporting services from scratch.
Managing your VPCs
VPC administration is handled through these AWS management interfaces:
- AWS Management Console is the web interface for managing all AWS functions.
- AWS Command Line Interface (CLI) provides Windows, Linux, and Mac commands for many AWS services. AWS frequently provides configuration instructions as CLI commands.
- AWS Software Development Kit (SDK) provides language-specific APIs for AWS services, including VPCs.
- Query APIs. Low-level API actions can be submitted through HTTP or HTTPS requests. Check AWS’s EC2 API Reference for more information.
(Learn about more AWS management tools.)
Elements of a VPC
The web-based AWS Management Console exposes most of the VPC resources you can create and manage. VPC network services include:
- IPv4 and IPv6 address blocks
- Subnet creation
- Route tables
- Internet connectivity
- Elastic IP addresses (EIPs)
- Network/subnet security
- Additional networking services
Let’s look briefly at each.
IPv4 and IPv6 address blocks
VPC IP address ranges are defined using Classless interdomain routing (CIDR) IPv4 and IPv6 blocks. You can add primary and secondary CIDR blocks to your VPC, if the secondary CIDR block comes from the same address range as the primary block.
AWS recommends that you specify CIDR blocks from the private address ranges specified in RFC 1918, shown in the table below. See the AWS VPCs and Subnets page for restrictions on which CIDR blocks can be used.

| RFC 1918 private range | CIDR block |
| --- | --- |
| 10.0.0.0 – 10.255.255.255 | 10.0.0.0/8 |
| 172.16.0.0 – 172.31.255.255 | 172.16.0.0/12 |
| 192.168.0.0 – 192.168.255.255 | 192.168.0.0/16 |
Subnet creation

Launched EC2 instances run inside a designated VPC subnet (sometimes referred to as launching an instance into a subnet).
For IP addressing, each subnet’s CIDR contains a subset of the VPC CIDR block. Each subnet isolates its individual traffic from all other VPC subnet traffic. A subnet can only contain one CIDR block. You can designate different subnets to handle different types of traffic.
For example, file server instances can be launched into one subnet, web and mobile applications can be launched into a different subnet, printing services into another, and so on.
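As a concrete illustration, the following boto3 sketch creates a VPC and two subnets along those lines. The region, CIDR blocks, and variable names are assumptions for the example, not values from this tutorial:

```python
# Hypothetical sketch: one VPC with separate subnets for file servers and web apps.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
file_subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
web_subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24")["Subnet"]
print(vpc["VpcId"], file_subnet["SubnetId"], web_subnet["SubnetId"])
```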
Route tables

Route tables contain the rules (routes) that determine how network traffic is directed inside your VPC and subnets. VPC creates a default route table called the main route table, which is automatically associated with all VPC subnets. Here, you have two options:
- Update and use the main route table to direct network traffic.
- Create your own route table to be used for individual subnet traffic.
Internet connectivity

For Internet access, each VPC configuration can host one Internet Gateway and provide network address translation (NAT) services using the Internet Gateway, NAT instances, or a NAT gateway.
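Continuing the earlier boto3 sketch, attaching an internet gateway and routing outbound traffic through it might look like this (the `ec2`, `vpc`, and `web_subnet` variables carry over from the previous example):

```python
# Hypothetical continuation: internet gateway plus a default route for one subnet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])

route_table = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=route_table["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",     # all non-local traffic
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=route_table["RouteTableId"],
                          SubnetId=web_subnet["SubnetId"])
```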
Elastic IP addresses (EIPs)
EIPs are static public IPv4 addresses that are permanently allocated to your AWS account (EIP is not offered for IPv6). EIPs are used for public Internet access to:
- An instance
- An AWS elastic network interface (ENI)
- Other services needing a public IP address
You allocate EIPs for long-term permanent network usage.
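A minimal EIP allocation with the same hypothetical client (the instance ID is a placeholder):

```python
# Hypothetical: allocate a static public IPv4 address and bind it to an instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"],
                      InstanceId="i-0123456789abcdef0")   # placeholder instance ID
print(eip["PublicIp"])
```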
Network/subnet security

VPCs use security groups to provide stateful protection for instances (the state of the connection session is maintained). AWS describes security groups as virtual firewalls.

VPCs also provide network access control lists (NACLs), which give stateless protection at the subnet level (that is, the state of the connection is not maintained).
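A security group that admits only inbound HTTPS, sketched with the same hypothetical client as above:

```python
# Hypothetical: stateful virtual firewall allowing inbound HTTPS only.
sg = ec2.create_security_group(GroupName="web-sg",
                               Description="Allow HTTPS in",
                               VpcId=vpc["VpcId"])
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```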
Additional networking services
Of course, these are not the only AWS services a VPC provides. You can use VPC to configure other common networking services such as:
- Virtual Private Networks (VPNs)
- Direct connectivity between VPCs (VPC peering)
- Mirror sessions
VPCs & shared responsibility
Before you start configuring VPCs, check out Amazon’s Shared Responsibility model. Per Amazon, security and compliance is a shared responsibility between AWS and its customers.
For your AWS account and configurations, AWS is responsible for the “Security of the Cloud” while customers are responsible for “Security in the Cloud.” Generally:
- AWS is responsible for the AWS cloud infrastructure (hardware, cloud software, networking, facilities) that run AWS services.
- Customers are responsible for what they run in the cloud, such as servers, data, encryption, applications, security, access, operating systems, etc.
The shared responsibility model lays out who is responsible for specific issues when you experience AWS downtime, security breaches, or loss of business. It is important to understand these limits as you set up your VPC configuration. Consult the shared responsibility model for more information.
- BMC Multi-Cloud Blog
- The AWS Well-Architected Framework: 5 Pillars & Best Practices
- Public vs Private vs Hybrid: Cloud Differences Explained
- Rise of Data Centers & Private Clouds in Response to Amazon’s Hegemony
- Cloud Growth, Trends & Outlook
We recently looked at the idea of the data lake, so now it’s time to head downstream and look at data warehouses.
We’ll define data warehouses, look at the data types they comprise, the storage they need, and the products and services that have been available, on-premise but increasingly from the cloud.
Key to defining the data warehouse is to recap on the source of data that flows into it. That is, the data lake.
Data lakes are like the wild west, unsuited to access by users or even most IT staff. Data may be searchable and to some extent queryable by its metadata to determine its use downstream, but it is not the place where operational analysis takes place. It is where data resides before it is processed and presented for analytics work.
That’s what occurs in the data warehouse. Compared to the anarchy of the data lake, the data warehouse is an ordered environment, comprising structured data in databases.
As historically defined, data warehouses are almost always dedicated to analytics, and kept quite separate from transaction processing for performance reasons.
Data warehouse storage
The data lake, as we saw, is a largely unorganised environment and access does not need to be terribly fast. Data can reside in myriad forms and getting to grips with it will often involve schema-on-read tools such as Hadoop and Apache Spark, or Amazon Athena (in the cloud) to help with the ingest/analyse process.
By the time data gets to the data warehouse it will have been assessed, wrangled, and usually subject to an extract, transform, load (ETL) process and kept in one or more databases.
Access is for analytics purposes, so while it doesn’t need to be as rapid in access terms as for transactional databases, it should be expected that input/output (I/O) will comprise reasonable amounts of largely sequential traffic as datasets are accessed or copied for analytics processing.
Those requirements have often meant data warehouse storage has been reasonably performant: higher-RPM SAS spinning disk or flash. Today, if flash-like access speeds are needed, QLC flash could fit the bill with its suitability to sequential access.
Data warehouse appliances
It’s possible to build your own data warehouse, and specifying storage is a relatively easy part of the process. But hardware specification pales next to overall design, which can be very complex with implications that stretch far into the future.
To mitigate those challenges, numerous vendors have offered data warehouse appliances. These offer – or maybe offered – appliances tailored to data warehouse workloads that could often be scaled out, with preconfigured hardware, operating system, DBMS software, storage and connectivity.
The first came from Netezza in 2001. It was acquired by IBM in 2010 and by mid-decade was re-branded out of existence. That changed in 2019, when IBM bought Red Hat and revived the Netezza brand with flash storage and FPGA processing as well as the ability to run on-premise or in the cloud.
More on data warehouse storage
- Do you need a data warehouse for business intelligence? Some organisations have moved away from using data warehouses in their business intelligence strategies. Read on to find out how approaches to data storage for BI are changing.
- Analytics demands add loftier goals to data warehouse strategies. As the concept of storing data and the technologies needed to do it evolve, companies with set goals in mind are building their data warehouses to maximise analytics outcomes.
Teradata was a pioneer of the data warehouse appliance. Today it offers cloud and hardware-based data warehousing, business analytics, and consulting services. Teradata Everywhere allows users to submit queries to public and private databases using massively parallel processing (MPP) across on-premise data warehouses and multi- and hybrid-cloud storage. IntelliFlex is Teradata’s data warehouse platform which scales to hundreds of PB with flash drives, while intelliCloud is its secure managed cloud for data and analytics-as-a-service.
For a while EMC sold open source Greenplum software capability bundled with its hardware, but now Greenplum is software only, centred on its data warehousing platform and based on a highly parallelised PostgreSQL database. It competed with the big players and is heavily targeted at cloud use, although it will run on-premise and can be containerised.
Oracle used to sell data warehouse appliances, but that’s now in the past. Currently, Autonomous Data Warehouse is Oracle’s data warehouse offering, which is based on the company’s database of the same name. It is a cloud-based technology designed to automate many of the routine tasks required to manage Oracle databases.
Evolution to the cloud
Data warehouse appliances were the best solution to the challenges of running database-centric analytics on-premise in an era before the cloud really started to come of age.
But essentially they are big iron. That meant they were costly to acquire, run and maintain. When it comes to scaling, further challenges arise: upgrades couldn't be made in small increments, so organizations had to buy big chunks of capacity that could lie unused for quite a while. And they're not just iron. As appliances they are a complex bundle of software and connectivity out to other data sources.
In the past decade the provision of cloud services has matured to such an extent that data warehouse provision is a natural fit.
In place of costly Capex outlays and ongoing maintenance and running costs, running a data warehouse from the cloud allows the provider to take the strain.
All the big three – AWS, Azure and Google Cloud – provide data warehouse offerings that provide core functionality around a database, with added tools such as ETL and data viz and others.
Amazon Redshift is AWS’s managed data warehouse service in the cloud. You can start with a few hundred GB of data and scale to petabytes. To create a data warehouse you launch a set of nodes, called a Redshift cluster. Here you can upload data sets and perform data analysis queries using SQL-based tools and business intelligence applications. Redshift can be managed from a dedicated console or a CLI, with APIs to write into applications.
Amazon specifically targets customers that may want to migrate from Oracle, and also offers packages that come with Matillion ETL and Tableau data visualisation.
Redshift Spectrum also allows data stored in S3 to be analysed in place.
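As a sketch of the API route mentioned above, launching a small Redshift cluster with Python's boto3 might look like this; every identifier, size, and credential here is a placeholder, not a recommendation:

```python
# Hypothetical: launch a two-node Redshift cluster via the API.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
redshift.create_cluster(
    ClusterIdentifier="demo-warehouse",       # placeholder name
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME_123!",     # never hard-code real credentials
)
```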
Azure SQL Data Warehouse
Azure SQL Data Warehouse is Microsoft’s managed petabyte-scale service that uses either symmetric multi-processing or MPP to deal with data, dependent on the volumes involved. Microsoft’s cloud offering makes a point of its ability to manage compute and storage independently and to pause the compute layer while persisting the data to reduce costs.
It is based on the Azure SQL database service. Data Warehouse abstracts away physical machines and represents compute in the form of data warehouse units, which allow users to easily scale compute resources at will.
ETL comes from Azure Data Factory.
BigQuery is Google Cloud Platform’s data warehouse offering. Like the rest, it offers petabyte-scale data warehousing, with querying by ANSI SQL.
BigQuery has software modules that target machine learning, geographic information systems and business information use cases, and can even use Google Sheets as a substitute for a true database.
BigQuery access is via console or CLI and APIs in application code.
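As one example of the API route, submitting an ANSI SQL query through the Python client library might look like the sketch below; the dataset is a Google-hosted public sample, used here purely for illustration:

```python
# Hypothetical: run an ANSI SQL query against BigQuery from Python.
from google.cloud import bigquery

client = bigquery.Client()   # uses ambient GCP credentials and default project
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name ORDER BY total DESC LIMIT 5
"""
for row in client.query(query).result():
    print(row["name"], row["total"])
```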
Google Cloud marketing materials specifically target customers that might want to migrate from on-premise Teradata deployments as well as those using Amazon Redshift.
| Course | Description |
| --- | --- |
| Java on z/OS for Java Programmers | This course is designed for Java programmers who need to port their skills and knowledge to Java in a z/OS environment. It explains how Java uses features associated with z/OS UNIX, and is supported by the Java Software Development Kit. A step-through showing how Java programs are compiled and run in the z/OS environment confirms the similarities between this platform and other Java-enabled environments. You will also see how Java programs can be invoked from batch, CICS, IMS, Db2, and WebSphere. |
| Java Introduction for the IBM Enterprise | This course is intended for experienced mainframe programmers, particularly COBOL programmers, who need to understand Java, the basic concepts of object orientation, and how it differs from programming languages traditionally used for enterprise development. Students will require knowledge and experience of a procedural mainframe programming language, particularly COBOL, and of the z/OS environment. |
| Java Programming for the IBM Enterprise | This course is intended for experienced mainframe programmers, particularly COBOL programmers, who need to be able to use Java as an alternative language to COBOL and to use Java to extend enterprise systems to the Internet. Java structures are shown alongside their COBOL equivalents. |
| Java Data Access for the IBM Enterprise | This course is intended for experienced mainframe programmers, particularly COBOL programmers, or Java programmers new to the IBM enterprise environment, who need to understand Java datafile and database access, I/O methods, the special requirements and facilities used to access the unique data storage facilities of IBM enterprise systems, and how to use Java beans as reusable objects and enterprise Java beans for accessing the facilities provided by enterprise systems. |
We spoke with Heather Mahalik, Faculty Fellow at SANS Institute and Senior Director of Digital Intelligence at Cellebrite, to discuss how you can educate your family on cybersecurity and ensure smart devices are properly protected from malicious cybercriminals.
As cybersecurity threats increase with the proliferation of devices, the cybersecurity risk to families has grown exponentially. What are top threats to families at home and what are some steps families can take to stay cybersecure?
The top threats to the home come from the multitude of smart devices that we use every day – from smart speakers to security cameras – that add yet another attack vector for bad actors and store information on you and your family. While families may not believe they are a worthy target of cyber attackers, they are. Often, bad actors will practice cyberattacks on easier targets, and homes are an accessible and easy place to do that.
The best way to protect yourself is by securing your Wi-Fi network that these devices operate on. Change the name of the network, and most importantly, change the password from the default password on the router. You can also set up guest Wi-Fi accounts for family, friends, and others in your home with a unique password.
The recent Facebook outage blocked millions all over the world from accessing websites or signing into their smart and internet connected devices. What are some ways that families can protect their devices from outside cybersecurity risks or another potential outage that blocks access?
Relying on third parties to log in to any system or device always carries risk of being locked out, which occurred during Facebook’s recent outage. The best way to protect yourself is to avoid using third party logins altogether, despite the convenience they might offer. This means, for example, that if you use Grammarly, a popular online word processor, and login through Facebook, you wouldn’t have access to your documents during a Facebook outage.
If you do need to use a third-party login, make sure you have any necessary information backed up so you can still access it in the event of an outage.
Children are spending more time online than ever before, from online school to after school activities. What are the best ways to instill smart cyber behavior in kids and balance their online privacy?
Since the onset of the COVID-19 pandemic, we’ve seen children’s screen time increase exponentially as they’ve attended virtual school and socialized online. The increased screen time provided new opportunities for kids to engage, but it also unfortunately opened a new world of cyberbullying, online scams, and other social media dangers.
I recommend to parents that they should “friend” their children on social media and discuss any alarming or concerning posts they might see with them. It’s also important to teach your kids to recognize the signs of cyberbullying, and to tell you if they see it in online chatrooms or on social media. While a parent’s instinct might be to delete the messages to protect their children, you should keep it. When digital forensics experts like myself or law enforcement get involved, those messages help us work backwards and put the pieces together. Simply put, don’t delete anything once you discover cyberbullying is going on.
Tell us a bit about SANS’ #SecuretheFamily Campaign. How can families get involved in teaching all members of the family to be cybersecure?
The #SecuretheFamily campaign was launched to help better educate families on how to protect their privacy, security, and devices. With new home devices hitting the market weekly and all members of the family – from young children to parents to grandparents – spending more time online for school, work, and other activities, it’s critical that everyone has the knowledge to use the internet safely and responsibly.
On the #SecuretheFamily homepage, we’ve provided videos, tip sheets, and other resources to help people of all ages understand not only what threats exist, but how to protect themselves and their personal information.
For more info, view this video: https://www.youtube.com/watch?v=l8qbVXjpxig
Battery Health Sensor
- The temperature of the battery terminals
- The voltage output from batteries or panel array
- Current load on batteries or charging circuit
Typical applications would be for battery monitoring, solar panel array, or generator starter batteries.
Product Code : BATTMON
Place the Battery Monitoring Sensor on your generator starter battery. Monitor the crank current from the starter motor, battery voltage, and temperature, or monitor the alternator charge current, voltage, and battery temperature.
This can be used as a diagnostic tool to identify a battery that needs replacing or a problem with the alternator or starter motor.
For example, a decreasing crank current may be a sign that the battery, although still showing sufficient voltage, does not have enough power output to crank the engine, resulting in a failure-to-start situation.
Monitoring of your complete solar panel system from end to end is possible by deploying multiple Battery Monitoring Sensors.
Solar Panel Array
Monitor the voltage and current output from your panels. Identify if they are running at less than optimum efficiency, for example, if they are dirty and need cleaning.
By comparing the battery current load and the solar panel charging load, you can identify whether your system is providing sufficient charging power, whether you are draining your batteries, and if so, at what rate they are being consumed.
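As a rough illustration of that comparison, the sketch below flags whether a system is net charging or draining; all readings, names, and capacities are invented for the example:

```python
# Hypothetical charge-balance check: is the system charging or draining, and how fast?
def charge_balance(battery_load_a: float, panel_charge_a: float,
                   battery_capacity_ah: float) -> str:
    net_a = panel_charge_a - battery_load_a       # positive = net charging
    if net_a >= 0:
        return f"Charging at {net_a:.1f} A"
    hours_left = battery_capacity_ah / -net_a     # crude linear estimate
    return f"Draining at {-net_a:.1f} A, ~{hours_left:.1f} h of capacity left"

print(charge_balance(battery_load_a=6.0, panel_charge_a=2.5, battery_capacity_ah=100))
```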
| Specification | Details |
| --- | --- |
| Mounting | DIN rail mounting |
| Power | Input voltage and current ratings. Voltage: 0~60VDC (3 configurable ranges: 0~15V, 0~30V or 0~60V) |
| Power metering | Voltage (V): +/-0.05% full-scale, error +/-0.05% full-scale. Current (A): +/-0.05% full-scale, error +/-0.05% full-scale. Temperature drift: +/-0.02%/°C |
| Temperature monitoring | Temperature sensor with 1-meter cable; range -40°C to 75°C |
| Status indication | LED indication for power; LED indication for input presence |
| Operating environment | Temperature: min. -35°C, max. 80°C; Humidity: min. 20%, max. 80% (non-condensing) |
| Inputs | 1x sensor RJ45 port; hardwired with the following plugs: |
Okay, this next blog post is aimed at the most worried of web users. You know, the kinds of folks who have a Google alert set for the term 'data breach.' The folks who eschew social media for fear of cyber stalkers. The people who turn off their location tracking for fear of Big Brother.
In truth, these people do have a reason to be nervous. The internet is chock full of malicious downloads, cyber scammers, and digital threats. With that said, the best way to handle cybersecurity risks is to make better choices regarding your online privacy.
Take Advantage of Self-Destruct Messengers. Believe it or not, one of the things hackers are most interested in is your private communications. This includes emails you send to your clients, coworkers, family members, and friends. This is especially true for C-level executives and business leaders, just ask Sony or the DNC.
To avoid a similar fate from befalling your business, take advantage of self-destruct messengers like Signal, WhatsApp, or Wickr. These applications only allow the proper recipients to view your messages and can be set to automatically delete read messages.
Poof! And just like that, your conversation is protected from breaches or leaks.
Adopt a Password Manager. It's likely you have seen articles online featuring the worst passwords imaginable. And hopefully, you don't use anything so silly as '123456' or 'guest' to protect your accounts. But even if you aren't the worst offender, you may still be relying on poor password protocols.
Think to yourself: Have you ever reused a password for more than one account? Do you view some accounts as more important than others? Do you keep a list of passwords in a drawer inside your desk or saved to an electronic file on your desktop?
If you said yes to any of the above questions you need to do better. Start by adopting a password manager today. KeePass, for instance, recommends passphrases and uses password protection to lock away all your passwords in a database not accessible via the web.
Use a Virtual Private Network. Everybody loves free Wi-Fi. It's easy, publicly available at your favorite coffee shops – and did we mention free? Sadly, you might be getting more than you bargained for.
Hackers are known to prey on unsuspecting web users by setting up phony (albeit believable) public Wi-Fi hotspots. By luring users into this trap, the cyber crook can spy on you, pilfer your data, and sneak malware onto your devices.
But even if the connection is legit, hackers could still eavesdrop on data traveling to a public router from your computer. Do yourself a favor and install a virtual private network (VPN) on all employee devices to hide their activities from prying eyes. If you are still wary of the World Wide Web, you might want to look into a Tor browser to obscure all online activities.
Invest in Cyber Insurance. Even after adopting the most alarmist position regarding cybersecurity and complementing it with the best technologies available, you can still fall prey to cybercrime. Thankfully, you can establish a safety net through CyberPolicy.
We are more than happy to connect you with a cyber insurance provider suited to your unique needs and concerns. Visit CyberPolicy for a free cyber insurance quote today!
It's the smartest thing you'll do to protect your business today.
Cyber Tip of the Day - Rootkits
Happy Monday! We would like to start your week with a tip to stay safe while online. Today’s subject: Rootkits.
Rootkits open up a “backdoor” into a computer system. Once they’re installed, they let in other viruses or hackers, who can then launch a cyber attack to infect your machine or steal information. Rootkits give hackers the ability to take complete control over your machine.
Here are some cybersecurity tips to protect yourself:
- To improve your cybersecurity awareness pay attention to abnormal system behavior such as failure to respond to keyboard input, surges in network traffic, or frozen screens.
- Switch off your devices when you’re not using them.
- Consider disconnecting from the Internet when you are not working online.
- Use a good anti-virus solution that has anti-malware and anti-phishing capabilities.
- Use strong passwords. Rootkits often breach systems due to weak passwords set on root or administrator accounts.
- Be cautious when clicking on links or opening attachments in email or online posts, or when downloading unknown applications. These are the common entry routes for rootkits and often lead to a cyber attack.
Be Smart. Be Aware. Be Secure. ERMProtect.
<urn:uuid:5a9ad336-8bdc-47d9-99fd-9dbbcb9c8953>
|
CC-MAIN-2022-40
|
https://ermprotect.com/blog/cyber-tip-of-the-day-rootkits/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00259.warc.gz
|
en
| 0.883759 | 324 | 2.59375 | 3 |
Ambient Intelligence (AmI) refers to the environment of electronic devices that are sensitive and responsive to the presence of human beings. This technology is considered to be a collaborative outcome of Artificial Intelligence with the Internet of Things (IoT).
The concept of Aml was first hypothesized in the early 1990s when the Information Society and Technology Advisory Group of the European Commission collaborated with Philips, a tech company, in developing an environment where the physical world can be integrated with sensors, intelligent systems and devices.
According to the Information Society Technologies Advisory Group (ISTAG), the main components of an AmI environment are adaptive software, a dynamic distributed network, embedded systems, I/O devices, components for mobile communication, sensors, and essential hardware components. These components give AmI devices adaptability, security, contextual awareness, computational capabilities, robustness, and more.
Ambient Intelligence – Adoption and Market growth
This electronic environment that can sense the presence of, and respond to the needs and habits of, human beings is based on three components: ubiquitous computing, communication, and intelligent user interfaces. The characteristics of these components, such as embeddedness, transparency and context awareness, along with the implementation of technologies like Artificial Intelligence and Machine Learning, are some of the factors that significantly contribute towards the growth of AmI in the market.
Ambient Intelligence not only enables human-machine interaction but also helps in increasing the efficiency and performance of the device. Technology verticals such as BFSI, security, retail and e-commerce, manufacturing, energy and utilities, IT and Telecom, education and healthcare contribute massively towards the adoption of AmI. When it comes to region-wise adoption, North America, Europe and Asia-Pacific regions contribute significantly.
Various use cases of Ambient Intelligence
Ambient Intelligence in the healthcare sector has made noticeable improvements in monitoring and analyzing patients. AmI technologies can be used to maintain patients’ Electronic Medical Reports (EMR) by recording patients’ health stats. It can also help physicians in analyzing patient’s behavior and allergic responses to certain medications. Ambient Assisted Living (AAL) technology has been assisting senior citizens by remotely monitoring their health and enables them to live independently.
Retail and E-commerce
AmI technology has allowed E-commerce vendors like Amazon and Alibaba to adopt the concept of unmanned supermarkets. Leveraging this technology will provide a personalized shopping experience in physical stores that are similar to online shopping experiences.
Security systems these days come with camera features, allowing users to identify visitors and monitor homes, factories, or other properties remotely from a single location. AmI-powered security systems will be able to provide more accurate, real-time data and will be more user-friendly as well. The surveillance market, along with infrastructure applications, is expected to be worth $62.6 billion by 2023.
Intelligent home automation systems
An intelligent home automation system can seamlessly integrate with electronic devices like refrigerators, lighting and entertainment systems, and temperature controls. All of these smart-home appliances can be controlled from a single device. As the technology evolves, smart-home devices will become greener and more cost-efficient, helping to reduce energy consumption and carbon footprints. By the end of 2021, approximately 28% of US homes are expected to be smart homes.
Major players in the field of AmI technology
AppZen, founded in 2012, develops the world’s leading AI platforms for financial firms. Their AI solution can help businesses with expenditure tracking and finance automation processes. Around one-fourth of the Fortune 500 companies use AppZen’s AI solution. With the help of deep learning, computer vision and intelligence, the AI solution can essentially make relevant decisions before a transaction takes place.
2) Audio Analytic
Audio Analytic, a Britain-based company, founded in 2010, develops devices with embedded software sensors that can react to smoke alarms, security breaches, car alarms, etc. The company initially developed products for professional security agencies and eventually shifted its focus towards consumer electronics and smart home appliances.
Near, an ambient intelligence platform, provides real-time information of places, people and products. This platform provides information by collecting and processing huge amounts of data from smart devices and environments. Near deals with data that are collected on a global scale and is capable of processing data from over a billion devices at present.
Career opportunities in the field of Ambient Intelligence
As mentioned earlier, Ambient Intelligence (AmI) is the field that sits at the intersection of Artificial Intelligence (AI) and the Internet of Things (IoT). Any form of intelligence demonstrated by machines in an environment can be referred to as Ambient Intelligence, and professionals in AI and IoT can work on developing AmI devices and platforms. A few of the most common roles across both fields include:
- AI engineer
- Embedded systems developer
- Software engineer
- Data scientist
- Machine Learning engineer
- Research scientist
- Hardware and device developer
- UX designer
- Network engineer
For students who want to pursue a career in developing intelligent solutions, the basic requirements are a deep understanding of sensors and how they work, UX design skills, a strong background in object-oriented programming languages, and prior experience working with IoT devices such as the Raspberry Pi. Many courses for learning Artificial Intelligence are offered by professionals on reputable platforms.
AmI technologies have a great chance to completely revolutionize the existing smart system mechanisms and contribute more towards making our lives easier. In the upcoming years, AmI will be able to contribute to the well-being of individuals. This will, in turn, become the key factor for the adoption of this technology.
Researchers from North Carolina State University have developed a system that can simultaneously deliver watts of power and transmit data at rates high enough to stream video over the same wireless connection. By integrating power and high-speed data, a true single “wireless” connection can be achieved.
“Recently wireless power has re-emerged as a technology to free us from the power cord,” says David Ricketts, an associate professor of electrical and computer engineering at NC State and senior author of a paper on the work. “One of the most popular applications is in wireless cell phone charging pads. As many know, these unfortunately often require almost physical contact with the pad, limiting the usefulness of a truly ‘wireless’ power source. Recent work by several researchers has extended wireless power to ‘mid-range,’ which can supply power at inches to feet of separation. While encouraging, most wireless power systems have only focused on the power problem – not the data that needs to accompany any of our smart devices today. Addressing those data needs is what sets our work apart here.”
Wireless power transfer technologies use magnetic fields to transmit power through the air. To minimize the power lost in generating these magnetic fields, you need to use antennas that operate in a narrow bandwidth – particularly if the transmitter and receiver are inches or feet apart from each other.
Because using a narrow bandwidth antenna limits data transfer, devices incorporating wireless power transfer have normally also incorporated separate radios for data transmission. And having separate systems for data and power transmission increases the cost, weight and complexity of the relevant device.
The NC State team realized that while high-efficiency power transfer, especially at longer distances, does require very narrow band antennas, the system bandwidth can actually be much wider.
“People thought that efficient wireless power transfer requires the use of narrow bandwidth transmitters and receivers, and that this therefore limited data transfer,” Ricketts says. “We’ve shown that you can configure a wide-bandwidth system with narrow-bandwidth components, giving you the best of both worlds.”
With this wider bandwidth, the NC State team then envisioned the wireless power transfer link as a communication link, adapting data-rate enhancement techniques, such as channel equalization, to further improve data rate and data signal quality.
The researchers tested their system with and without data transfer. They found that when transferring almost 3 watts of power – more than enough to power your tablet during video playback – the system was only 2.3 percent less efficient when also transmitting 3.39 megabytes of data per second. At 2 watts of power, the difference in efficiency was only 1.3 percent. The tests were conducted with the transmitter and receiver 16 centimeters, or 6.3 inches, apart, demonstrating the ability of their system to operate in longer-distance wireless power links.
“Our system is comparable in power transfer efficiency to similar wireless power transfer devices, and shows that you can design a wireless power link system that retains almost all of its efficiency while streaming a movie on Netflix,” Ricketts says.
The Internet of Things (IoT) in the health care and medical industries is at an advanced stage in some areas and sorely lacking in others.
Some applications such as heart and other monitors provide major amounts of data to health care professionals. However, within hospital systems, silos of data and legacy equipment hamper the broad implementation of IoT in the sector — but that is changing fast.
“There are more examples of IoT in health care/medical than one may realize, from sensors to collect temperature, blood pressure and other health metrics to multi-spectral sensors for X-ray, 2D, and 3D imaging,” said Greg Schulz, an analyst with StorageIO Group.
“It’s all about quickly getting health status, monitoring, trending, and analysis.”
5 IoT health care examples
Health care is one of the key sectors driving innovation in the Internet of Things. With so much of annual GDP absorbed in health, there is enough revenue around to invest plenty of R&D dollars in medical IoT. A large number of startups have entered the space.
“IoT is undoubtedly transforming the health care industry by redefining the space of devices and people interaction in delivering health care solutions,” said Rajashekhar Karjagi, head of analytics solutions at Wipro.
“IoT has applications in health care that benefit patients, families, physicians, hospitals, and insurance companies.”
1. Digital twins
Health care technology provider Ebenbuild has launched a research program to increase the odds of survival and recovery of those needing artificial ventilation due to acute respiratory distress syndrome (ARDS).
Developers optimized pre-trained artificial intelligence (AI) inference models to run on Intel hardware, accelerating performance of the computer vision cluster. A research program from Ebenbuild fuses patient data with machine learning (ML) algorithms and physics-based computer simulation fed by IoT sensors to build a digital twin of the lungs. By better understanding the human lung, physicians can personalize ventilation therapy to bring many more ARDS patients to a full recovery.
2. Intelligent ultrasound

GE Healthcare provides the industry with intelligent devices, data analytics, applications, and services.
Its Versana Premier ultrasound system uses AI and IoT to provide two-dimensional images and sensitive flow signals.
It offers automated near-real-time image enhancement features, and labels the human tissues in an image with a method based on deep learning neural network technology. This makes it much easier for personnel without advanced training to use the equipment, helping to broaden access to high-quality medical resources in less-developed areas.
3. Patient monitoring
A Montage Health hospital in Monterey, California, has augmented in-person patient monitoring with a HIPAA-compliant virtual patient observation solution from Wachter Healthcare Solutions.
The Nursing Observation and Virtual Assistant (NOVA) solution allows trained technicians to monitor up to 12 patients at once from a remote monitoring station, providing a virtual window into the condition and status of patients.
With its open architecture, featuring IoT gateways, workstations, and servers, NOVA can be implemented easily and scaled to help a health care system optimize patient care.
This system has reduced patient falls by 30%, increased patient satisfaction, and lowered staffing costs. NOVA is deployed primarily in telemetry units, where patients are under constant electronic monitoring, as well as in the COVID-19 unit and emergency department overflow rooms.
4. Wearables

Devices in the form of wearables — like fitness bands and other wirelessly connected devices, such as blood pressure and heart rate monitoring cuffs and glucometers — give patients access to personalized attention.
These devices can be tuned to monitor calorie count, exercise checks, appointments, blood pressure variations, and more.
“IoT has changed people’s lives, especially elderly patients, by enabling constant tracking of health conditions,” said Karjagi of Wipro. “This has a major impact on people living alone and their families.
“On any disturbance or changes in the routine activities of a person, an alert mechanism sends signals to family members and concerned health providers.”
5. Equipment tracking
Wipro has also developed IoT technology for physicians and hospitals.
IoT devices tagged with sensors, for example, are used for tracking the real-time location of medical equipment, like wheelchairs, defibrillators, nebulizers, oxygen pumps, and other monitoring equipment. The deployment of medical staff at different locations can also be analyzed in real-time.
Additionally, the spread of infections is a concern for patients in hospitals. IoT-enabled hygiene monitoring devices help in preventing patients from getting infected.
IoT devices assist health care providers with asset management, like pharmacy inventory control and environmental monitoring. For instance, they can check and help control refrigerator temperature and humidity.
Health insurance companies are also capturing data from IoT-connected intelligent devices for their underwriting and claims operations. This data enables them to detect fraud claims and identify prospects for underwriting. Insurers offer incentives to customers to use and share such data generated by IoT devices. They also offer discounts and rewards to those who follow certain routines and adhere to treatment plans and precautionary health measures. This benefits patients as well as insurance providers by reducing the number of claims.
In spite of a range of security technologies being deployed, devastating thefts of sensitive data continue to occur. To address these threats, many organizations are looking to deploy data privacy solutions: solutions that ensure the security of data inside the enterprise.
Enterprises worldwide are spending $20 billion per year on IT security, yet very costly breaches continue to occur. In large part, this is because security efforts have mainly been focused on network security rather than data privacy. Data privacy is the process of securing critical data that is being stored, transmitted and used within the enterprise.
The need to augment network security mechanisms with data privacy technologies has never been more vital. For example, given that most estimates cite over 50% of security breaches are perpetrated by internal staff, perimeter security mechanisms like IDSs and firewalls are ill-equipped to address many threats to sensitive data. Further, in spite of the deployment of network security technologies, organizations are susceptible to a range of attacks: storage systems can be breached via insecure storage management interfaces, and physical storage systems and databases themselves can be stolen.
Failure to implement a data privacy solution can have a disastrous effect on an organization. For years now, the price organizations have paid when breaches become public has been catastrophic. One estimate states that compromised firms lose, on average, 2.1% of their market values within 2 days of a breach, which translates into an average of a $1.65 billion loss in market capitalization per incident. This is on top of very real, but harder to quantify, losses that stem from damaged brands and diminished customer trust. Not coincidently, many firms do whatever they can to keep these breaches from going public. In fact, recent estimates state that only 30% of all security breaches get reported at all.
Whether organizations want it to or not, this will have to change. A range of policies and legislative mandates are dictating a more data-centric approach and, further, are requiring the disclosure of any breach. These mandates are coming in a range of forms:
Regional legislation: Europe´s Data Privacy Act, Canada´s Personal Information Protection and Electronic Document Act (PIPEDA), California´s Database Security Breach Notification Act, SB 1386, and many others all dictate encryption in some fashion, and that any victims of breaches are notified.
Industry specific legislation: In healthcare, the Health Insurance Portability & Accountability Act (HIPAA); and the Gramm-Leach-Bliley Act (GLBA) in financial services have provided comprehensive guidelines for safeguarding patient and consumer data, respectively.
Commerce policies: Credit card issuers like Visa, MasterCard, and American Express all have delivered comprehensive guidelines that provide an edict for both best security practices, including data encryption for example, as well as mandating consumer notification of breaches.
The bottom line of all of this is that organizations need to address data privacy in a comprehensive fashion. Those that don´t, and wait for a legislative mandate, or worse, a security breach, before they do so, will ultimately be taking chances that can put an entire business at risk.
Historically, the challenge in achieving data privacy has been that many of the options available to organizations have been lacking, either in terms of delivering true security, or in terms of prohibitive cost or complexity. Today there are solutions to address data privacy that overcome these obstacles.
Best Practices for Implementation of Data Privacy
A) Selecting Cryptographic Algorithms – You will need to review recommendations for choosing among the various cryptographic operations available in implementing a data privacy solution. Some of the options are DES, 3DES, AES, RC4, SHA-1, MD5. Asymmetric Key Algorithms (Public Key) can be up to an order of magnitude slower than symmetric algorithms. Therefore, if possible, a symmetric algorithm should be chosen. Block encryption algorithms (such as DES and AES) can be used in a number of different modes, such as “electronic code book” (ECB) and “cipher block chaining” (CBC). In nearly all cases, CBC is recommended over ECB mode. ECB mode can be less secure because the same block of plaintext data always results in the same block of ciphertext, a property that can be used by an attacker to reveal information about the original data and to tamper with the encrypted data. Many modes, including CBC mode, require an “initialization vector” (IV), which is a sequence of random bytes used as input to the algorithm along with the plaintext. The IV does not need to be secret, but it should be unpredictable.
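To make the CBC-plus-unpredictable-IV recommendation concrete, here is a minimal sketch using Python’s `cryptography` package; key handling is deliberately simplified, and in practice the key should live in a centralized key manager, not in application code:

```python
# Minimal AES-256-CBC encryption with a random, unpredictable IV (PKCS7 padding).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # 256-bit key -- store in a key manager in real systems
iv = os.urandom(16)    # the IV need not be secret, but must be unpredictable

padder = padding.PKCS7(algorithms.AES.block_size).padder()
padded = padder.update(b"sensitive record") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()
# Store/transmit the IV alongside the ciphertext so the data can be decrypted.
```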
B) Key Management – Key management is a fundamental consideration when deploying a data privacy solution. If the keys used to protect sensitive data within an enterprise are not properly secured, attackers may be able to gain access to this data with relative ease. In a highly secure environment it is important to generate and manage keys in a centralized manner in which strict access privileges are enforced. For example, keys stored across multiple application servers are significantly more difficult to manage and protect than keys stored on a centralized platform. A specialized hardware device in which all cryptographic operations are performed securely and in which keys are never visible in the clear is highly recommended. It is a good practice to protect data with newly generated keys periodically. Re-encrypting data with a new key at least once a year is recommended. An important consideration when rotating keys is managing backups and archives. An enterprise must be able to ensure that critical data cannot be compromised through the use of old keys and archived data, while also being able to guarantee access to this data if necessary.
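One way to picture the recommended annual key rotation is as a re-encryption pass over stored ciphertext. This sketch reuses the primitives imported in the previous example and assumes both keys have already been retrieved from the centralized key manager:

```python
# Hypothetical key-rotation pass: decrypt with the old key, re-encrypt with the new.
def rotate(ciphertext: bytes, iv: bytes, old_key: bytes,
           new_key: bytes, new_iv: bytes) -> bytes:
    dec = Cipher(algorithms.AES(old_key), modes.CBC(iv)).decryptor()
    plaintext = dec.update(ciphertext) + dec.finalize()   # still PKCS7-padded
    enc = Cipher(algorithms.AES(new_key), modes.CBC(new_iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()
```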
C) Authentication, Authorization and Auditing – Enterprises need a secure way to identify people and entities that require access to sensitive data. In implementing a solution, administrators need to decide what data will be accessible and who will have access to it. Some methods of access control are passwords, client certificates, biometrics and tokens. Auditing is an extremely important part of a data privacy solution. It allows an enterprise to determine who did what at any given point in time, including when authentication and authorization were allowed or denied to an entity. A data privacy solution should offer robust logging capabilities and support log signing, in order to prevent an attacker from tampering with logs. Logs should be analyzed regularly to look for strange behavior that could potentially represent attacks to the enterprise.
D) Backup, Restore and Disaster Recovery – Backup and restore capabilities are critical to ensure that an environment can be recreated in the event of a disaster. It is also important to be able to replicate an existing environment in order to scale according to the needs of the enterprise. A good solution will allow for a secure mechanism to create backups and perform restores of all keys as well as relevant configuration information.
E) Encrypting multiple columns in a database – It is strongly recommended to use different encryption keys for each column encrypted. That way, even if an attacker manages to compromise a single key, the rest of the encrypted columns will remain secure. The only reason to use a single key to encrypt multiple columns is if the columns all contain values from the same set of data and encrypted values have to be compared with each other to determine equality (such as performing a join).
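One common way to honor the key-per-column guidance is to derive an independent key for each column from a single master key, for example with HKDF; the column names below are invented for illustration:

```python
# Hypothetical per-column key derivation (HKDF-SHA256 from one master key).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)   # in practice, fetched from the key manager

def column_key(column: str) -> bytes:
    # A fresh HKDF instance per derivation; the column name provides context.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=column.encode()).derive(master_key)

ssn_key = column_key("customers.ssn")            # distinct key per column
card_key = column_key("customers.card_number")
assert ssn_key != card_key
```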
F) Pre-migration backups – Even if sensitive information in production databases is securely protected, it is important to be aware that sensitive data may still exist in the clear in such places as tape backup and database backups. An enterprise must identify all of these locations and replace them with new backups in which the sensitive information is protected.
These are just a few areas that must be considered in implementing a solid data privacy solution. It is critical for enterprises to address data privacy by deploying security solutions for critical data in transit and stored.
Re-Imagining the Classroom Experience
The typical school day is evolving with new advances in technology, much of which depends on broadband connectivity. For many children and young adults, their school day begins in a classroom at a local school; for others, a school day may begin in their living room or sitting at their kitchen table. Wherever a school day begins, imagine having the opportunity for it to end in a different state, in a foreign country, or maybe even on a distant continent.
As a part of the Education Initiative Team at CableLabs and as a parent of children in public schools, I have spent time re-imagining how technology could transform our classrooms. What if we used technology to create a borderless, or even boundless educational experience? What impact could we have? What challenges would we face? What solutions might be possible?
Technology Enables Alternative Learning Experiences
By maximizing the use of technology in the classroom and beyond, endless opportunities and vast educational experiences are possible. Technology can enable traditionally schooled, homeschooled, and remote learners to join in on classroom lessons. Video communication, and potentially newer experiences offered by augmented and virtual reality solutions, can offer the ultimate virtual field trip for classrooms and schools of any size. Instead of the required reading assignments that we are all used to, e.g. read this book, write this essay, imagine high school classrooms where an alternative learning experience is assigned to supplement the required reading. For example, a teacher could coordinate a live video conference session with an author as an engaged interactive experience for students.
Newer technology and broadband connectivity, including next-generation solutions such as DOCSIS® 3.1 and Full Duplex DOCSIS® 3.1, can enable multi-location learning and collaboration. Students in the United States can participate in the global community with students from other countries, creating opportunities to gain knowledge of other cultures, economic strata, and qualities of life. By expanding the scope and perspective of knowledge students gain from alternative learning experiences, we can develop their capacity for compassion. Could this technology provide a broader frame of reference for education and encourage an outward focus that prepares the next generation to succeed on a global level? Could a greater international perspective in the classroom open opportunities to solve real-world issues and ultimately empower our children through project-based learning and problem solving?
Utilization of video conferencing in schools is not a new concept. In fact, many schools have embraced the technology to create alternative learning situations. For example, through Skype Collaborations, teachers in Kansas learned about a water crisis in Nairobi that prevents students from coming to school. Within their science, math, and social studies classes, students in Kansas learn about the Nairobi community, the living conditions, and the resources needed to fix the water crisis faced by the school. Shortly after hearing about their circumstances, the students and teachers of the Kansas school set a goal to help the children of Nairobi get back to their classroom. By Skyping with representatives from LifeStraw, the students learned about water filtration systems and how to build them with common household items. Then the students raised awareness of the crisis by fundraising to obtain LifeStraw filtration systems for the Nairobi school. After watching students describe their project, it’s clear to see the passion they have gained while working towards providing a sustainable solution for those in need. Through this example, there is no question that technology, often enabled through broadband connectivity, is capable of being used for a variety of purposes.
The Role of the Cable Industry
Introducing alternative learning experiences is an opportunity for the cable industry. As an already strong partner for education, the cable industry is in a position to promote meaningful change. One idea would be to design and develop trusted solutions and enable educators to structure lesson plans outside of the normal classroom activities, take advantage of highly-trusted networks, and introduce new media types. This will enable the teachers to focus on the content of the lesson, and the tech to be more easily accessed to support learning objectives.
Appropriate use of technology within the classroom can accelerate learning while simultaneously developing empathy and altruistic perspectives that would not have been possible even ten years ago. We have the opportunity to develop a generation of individuals who are more empathetic and outwardly focused. By promoting learning without limiting the experiences in our own backyards, we can create dialogue to teach compassion, appreciation, and authenticity.
Tuesday, February 6th, 2018, might well go down in the history books as a crucial day in the history of space travel.
If you watched the launch of the SpaceX Falcon Heavy and the live streaming from the unusual passenger on board you may have experienced the feeling of witnessing something out of the ordinary as in previous historic moments in the exploration of space.
The launch was nothing less than a game-changing moment in the transformation and even disruption of space travel. The seeds of it were already planted in the mind of SpaceX’s game-changing founder when he was still a young man wondering about our place and future in the world – and clearly far beyond it – as he recounted in an interview which you can watch here.
Elon Musk isn’t just changing the face of space travel. He played an instrumental role at PayPal, one of the companies transforming the way we send and receive money. And what about that unusual passenger, a Tesla Roadster? An electric sports car that essentially turns the car industry into one of software and part of many things Musk’s Tesla does.
Technologies at the benefit of humanity
The list doesn’t stop there. Musk transforms industries through the dreams he turns into action and the disruptive technologies we have. At the same time, he also shapes these technologies and why we use them, sometimes even warning of their dangers, with artificial intelligence (AI) being the major one.
Musk’s several companies are not just using AI intensively to realize so many amazing projects but he also is an advocate of keeping AI beneficial for humanity, having donated $10M to the Future of Life Institute to research exactly how we can do that.
Also, the Internet of Things (IoT) is crucial for Musk’s many initiatives: just think about Tesla’s over-the-air updates. And then there is of course data, software and a lot of analytics.
IoT, AI and (big) data analytics in an age where ‘software eats the world’ no doubt are part of the mix of disruptive technologies that will shape the transformation of more industries. I don’t have to look too far to see that happening. In the EcoXpert program for state-of-the-art building management, energy efficiency and power quality technologies and ecosystem partners, we see the combination of IoT, AI, advanced big data analytics and software transforming the landscape making facilities and the world a more sustainable place by bridging physical, digital and human innovation capacities.
Disruptions can lead to transformations in many ways. Technologies can disrupt, brilliant and out-of-the-box entrepreneurs can disrupt, changing customer preferences can disrupt and people knowing how to leverage technologies, innovate and serve changing customers can disrupt even more.
Disruptive technologies: Watch out for those ones
In my book ‘Digitize or Die’ I mention IoT as a disruptive technology and look at how organizations facing disruption and the need to transform essentially have four choices. It is imperative that leadership within a company assesses and strategizes what the IoT might mean for business, even at the risk of disrupting it, because if they don’t, the competition will.
When you look at the different disruptive technologies, you might think they are disjointed… But when you step back, you quickly realize that we are about to experience structural changes in the way our society performs.
The Internet of Things, blockchain, quantum computing and artificial intelligence will change the world. The consequences will be massive unemployment, surging electricity demand, and falling prices of goods and services. People need to prepare themselves; but you can also take advantage of this to identify the next SpaceX or Facebook of the digital age.
Below are main disruptive technologies I see and how they are connected, for businesses, the benefit of humanity and entrepreneurs who also want to turn their dreams into action.
IoT and connected things – the tiny pieces enabling the bridging of worlds
To build cyber-physical systems, allowing us to derive the insights needed to cross the bridges between digital, physical and human intelligence and innovation, we need to connect objects first.
The first layers of the IoT technology stack as I describe them in my book are essential to enable those over-the-air car updates, build autonomous vehicles and transform from the edge. This essential space of tiny sensors, protocols, embedded intelligence, data capture, APIs and communication is the very cornerstone of the new possibilities and opportunities empowering people and entrepreneurs.
The Internet of Things is just the “byte” of the next internet; it will provide a scalable and nearly infinite source of raw data for deep artificial intelligence.
Blockchain – trust and contracts at the service of transformation at scale
Without blockchain, IoT would face scalability and peer-to-peer trust issues. But not everything is technology. Human decisions, agreements and cooperation are equally essential to transform at scale, based upon trust and secure transactions. Here as well, a disruptive technology enables this in a digital context.
Blockchain will serve as the backbone of digital trust, securing interactions at massive scale. Security and trust are in fact also horizontal layers of the IoT stack; without them nothing is possible. Yet trust is also needed to enable collaboration, ecosystems, data exchanges and connected platforms across companies and industries to transform. At the basis of every interaction and transaction in IoT — transformation, data, connected objects, autonomous devices and the exchange of mutually enriching ideas — sits the need for trust, embedded at scale. And that is the true disruptive power of blockchain.
Software and advanced Big Data analytics – omnipresent applications to enhance decision making
Once you have securely connected things and people in peer-to-peer infrastructures, you end up with a massive source of raw data. IoT platforms, application enablement software and software applications overall enable us to combine and understand data, extract intelligence and unleash advanced analytics on ever-growing volumes.
Today we only use a small percentage of all the data we have, as the majority sits and waits for the next brilliant mind, innovative idea or simply a transformative use case in which it would make sense. Software thereby penetrates all levels; it connects, translates, detects, visualizes and enables this environment whereby our human intelligence is enriched with flows of actionable intelligence improving our decision-making capacities.
Artificial intelligence – augmenting human intelligence for intelligence at scale
The next step will be artificial intelligence using the Big Data enabled by the first 3 steps.
Artificial intelligence does not replace human intelligence, it augments human intelligence in the mentioned scope of a beneficial AI use for humanity, all the way from improving quality of life as in smart buildings to solving societal challenges such as environmental issues and the game-changing examples I mentioned. Artificial intelligence is the way to make sense of unstructured data and turn that sense into benefits into intelligence into innovation and transformation.
Quantum computing – a quantum leap for exponential growth and (re)invention
As the volumes of data keep growing, our digital footprint and capabilities in business and beyond are only at the beginning and, to quote former Cisco CEO John Chambers, the speed of disruption is getting brutal and we must keep reinventing ourselves. There is ever more need for artificial intelligence, advanced analytics and other disruptive technologies at true scale.
They need to be supported by stronger backbones and some technologies I mentioned. At the same time the combinations of these technologies and ever more computing power is poised to be disruptive as such. With quantum computing, another key disruptive technology, this will accelerate dramatically. Quantum computing will not just force us to reinvent ourselves again, it will also be the catalyst of unprecedented inventions and innovations.
Most people think of cyberattacks as software against software – traditional malware such as Trojans, viruses, and worms infiltrating and attacking applications, operating systems, or data. This arms race has been escalating for decades and is now at a high level of sophistication, requiring advanced skills on both sides.
But there is another type of attack, one that pits software against hardware. These attacks typically try to corrupt the firmware or configurations used by processors or other core components. Attacking hardware or firmware enables an adversary to have enhanced abilities to hide from detection and can also increase damage to a platform, such as preventing the machine from booting. This can render the machine unusable and unrecoverable with the tools available in an average IT department. Since the machine would not be able to get as far as initializing the memory or external interfaces, even an external drive would not be able to boot.
Often, the target of these attacks is the BIOS (basic input/output system) code that is the first to run when a machine is powered on. Secondary targets are firmware for essential components such as network adapters, IO controllers, power management, and graphics processing units. For greater flexibility, most hardware manufacturers store firmware and its configuration details in flash memory so that it can be patched or updated if necessary. Unfortunately, this rewritable flexibility makes firmware vulnerable to attack. The industry has tried to lock down access to these sensitive areas of flash memory and minimize vulnerabilities. However, the complexity of code and the number of components create numerous attack vectors that are a rich environment for the security research community and malicious adversaries to explore.
Simple Attack, Difficult Recovery
Although the recovery of a machine with corrupted firmware is quite difficult, the attack itself is often much simpler. In many cases, standard software can be used to perform an attack on firmware or hardware. Such a simple attack (that makes your machine a paperweight) is facilitated by writing garbage to the critical configuration variables used by the BIOS and triggering a reboot. Some systems have backup or default settings, so those would have to be corrupted or deleted as well. Many operating systems expose these variables to privileged applications – Microsoft Windows has SetFirmwareEnvironmentVariable, for example – making it trivial to attack once the malware has been delivered. It is the responsibility of the system integrator and OEM to ensure that adequate protection and validation for BIOS and configuration is implemented in a system to prevent this type of attack, and many machines in the wild do not employ adequate protection.
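To see how thin the barrier is, here is a minimal, Windows-only Python sketch that reads (rather than corrupts) a UEFI variable through the same interface family as the Set call named above. It uses the documented kernel32 call with the standard EFI global-variable GUID; running it requires administrator rights plus the SE_SYSTEM_ENVIRONMENT_NAME privilege, and the call returns 0 on failure.

```python
import ctypes

kernel32 = ctypes.windll.kernel32
buf = ctypes.create_string_buffer(4096)

# "BootOrder" lives in the EFI global-variable namespace identified by this GUID
size = kernel32.GetFirmwareEnvironmentVariableW(
    "BootOrder", "{8be4df61-93ca-11d2-aa0d-00e098032b8c}", buf, 4096)
print(size, buf.raw[:size])  # 0 bytes means failure; check GetLastError()
```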
If this is so easy, why are we not seeing more hardware attacks? Probably because the majority of attacks have an objective of stealing information or turning the machine into a controllable bot, and bricked machines do not support these goals. However, as new or different criminal actors and nation states start to exercise their cyberthreats, we may see more hardware-based attacks as a means to create chaos or deny service to an organization. The other factor that makes hardware-based attacks less prevalent in bulk malware is that they often need to support multiple configurations or be individually tweaked to work on the specific hardware platform that is being targeted.
Validating that a platform’s BIOS implementation is not vulnerable to known low-level attacks is a highly complex task. Luckily, there are tools that can make this possible for an IT organization. One tool, called chipsec, was developed here at Intel and then released free and open under an open source license. Key use cases for the chipsec platform include vulnerability assessment, advanced forensics, and security research. We continue to update it and encourage others to contribute as well.
The firmware attack surface is large, and the vulnerabilities are not new and are not terribly difficult to exploit. It is critical that the industry recognizes that both hardware and software are critical assets that may become an area of exploitation in an advanced attack.
Cryptography is one of the key components in cyber security that relies on codes to ensure that the sender and intended recipient can only read a message. The data encryption security feature guarantees confidentiality, integrity, non-repudiation, and user authentication in modern web systems. Cryptographic functions encrypt and decrypt plain-text messages to ensure secure electronic data transmission between entities, preventing a successful man-in-the-middle attack. Cryptographic failure encompasses a collection of application security risks that expose sensitive data and files through weak encryption techniques.
This guide discusses the cryptographic failure vulnerability, its types, and possible prevention techniques.
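As a small illustration of the difference between a weak and an acceptable technique, the sketch below contrasts unsalted MD5 with a salted, memory-hard key derivation function for password storage, using only Python's standard library. The password value is obviously a placeholder.

```python
import hashlib
import os

password = b"hunter2"

weak = hashlib.md5(password).hexdigest()  # broken: fast, unsalted, collision-prone

salt = os.urandom(16)                     # unique random salt per user
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)  # memory-hard KDF
```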
Wireless data transmitted between oncoming vehicles and traffic signals is allowing researchers to dynamically change the duration of green lights to prevent accidents.
To reduce speed-related casualties from vehicles running red lights, researchers have developed technology to dynamically extend the duration of green lights.
According to the Federal Highway Administration, traffic signals are prime locations for accidents, with more than 2 million crashes and 3,000 fatalities a year. Technology developed by Purdue University’s Joint Transportation Research Program and the Indiana Department of Transportation (INDOT) will collect data from wireless transmitters installed in vehicles, calculate the speed and trajectory of oncoming vehicles and communicate that information to the signal, which uses embedded intelligence to adjust the time the light stays green or to change to a yellow light earlier than necessary.
Because the technology is built on the wireless transmission of data rather than sensors embedded in the roadway, the solution requires much less infrastructure investment. The technology has been initially designed for large vehicles and semi-trailers that need more stopping distance and are therefore twice as likely to run a red light.
"To reduce crashes, the key idea is to provide dilemma-zone protection," Purdue Transportation Research Engineer Jijo Mathew told Purdue News, referring to the section of a roadway directly upstream of an intersection. "One would think yellow time can be extended; however, drivers tend to adapt to this, resulting in lower probabilities of stopping.”
The system can extend the green light to ensure that vehicles can clear the intersection; however, when there are multiple vehicles competing for green time, the system will flash the yellow light before the cars enter the dilemma zone.
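The decision logic described above can be sketched in a few lines. The dilemma-zone time bounds and the mph-to-ft/s conversion below are illustrative assumptions, not INDOT's actual parameters:

```python
def signal_action(speed_mph: float, distance_ft: float,
                  competing_demand: bool) -> str:
    # Time until the vehicle reaches the stop bar at its current speed
    eta_s = distance_ft / (speed_mph * 1.467)   # 1 mph ≈ 1.467 ft/s
    in_dilemma_zone = 2.5 <= eta_s <= 5.5       # assumed dilemma-zone bounds
    if not in_dilemma_zone:
        return "no change"
    if not competing_demand:
        return "extend green"                   # let the vehicle clear the intersection
    return "early yellow"                       # end green before the zone is entered
```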
The wireless devices will be placed in both the traffic lights and in vehicles, many of which are already coming off assembly lines with built-in high-bandwidth, low-latency technologies like 5G broadband, Purdue principal research analyst Howell Li said. Specialized software at the signal controller will tie the components together.
The project was tested on a stretch of highway in Tippecanoe County, Ind. During tests, the system was able to detect vehicles traveling 55 miles per hour, in a six-foot waypoint radius spaced 50 feet apart, with 95% accuracy. Using this data to estimate risk mitigation, researchers concluded dilemma zone incursions at that particular testing site could be reduced by 34%.
The technology will be useful in reducing heavy vehicle red-light accidents, INDOT Signal Systems Field Engineer Tom Platte said, though he added that it will require vehicle manufacturers to install the technology.
"During my time working at the Indiana Department of Transportation, I have only been aware of conceptual-use cases involving onboard vehicle communication technology integrating with live traffic signal control,” he said. “Our new technology moves this integration beyond the merely conceptual. This work provides an implemented real-world use case that addresses an important safety concern, among other applications."
With data from the Federal Highway Administration’s National Household Travel Survey, researchers examined how mobility patterns in 52 of the largest metro areas affected the spread of COVID.
Cities with high-usage public transportation systems displayed higher per capita COVID incidence at the beginning of the pandemic, a new study shows.
The findings held true when researchers accounted for other factors, such as education, poverty levels, and household crowding.
The association continued to be statistically significant even when the model was run without data from transit-friendly New York City.
Using data from the Federal Highway Administration’s National Household Travel Survey, researchers looked at the nation’s 52 largest metropolitan areas and each community’s likelihood of riding buses and trains. They then compared the numbers with the 838,000 confirmed COVID cases on the Johns Hopkins Center for Systems Science and Engineering’s dashboard from January 22 to May 1, 2020.
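For readers curious about the mechanics, a comparison like this is typically run as a regression of per-capita incidence on ridership plus the controls named below. A hypothetical sketch with statsmodels — the file and column names are assumptions, not the study's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per metro area: per-capita cases, transit ridership share, and controls
metros = pd.read_csv("metros.csv")
model = smf.ols(
    "cases_per_capita ~ transit_share + education + poverty + crowding",
    data=metros).fit()
print(model.summary())   # a significant transit_share coefficient supports the finding
```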
The time frame covers the initial days, weeks, and months of the pandemic, before mask mandates were in place and prior to widespread social distancing. Ventilation on public transit had yet to be addressed, along with other public health measures that have since become the norm.
While the researchers don’t suggest that transit is the sole cause of the high incidence rates, they say it could have been an important factor early in the pandemic.
Waiting for the bus
“This is what we expected, but we wanted to run the models to know for sure. Policymakers shouldn’t make decisions based on what they assume to be true,” says Michael Thomas, one of the study’s coauthors and a PhD student in Georgia Tech’s School of Computational Science and Engineering.
“This study is similar to dusting off a dinosaur dig site and finding a leg bone. This isn’t the entire dinosaur. There are many ways of making the argument about COVID spread, and transit is just part of it.”
The team got the idea of tracking transit and COVID cases after watching early reports from Wuhan, China, and reflecting on how differences in public transportation systems may factor into pandemic spread patterns.
As assumptions were being made about how American cities should react based on ridership patterns on the other side of the globe, John Taylor, professor and associate chair for graduate programs and research innovation in the School of Civil and Environmental Engineering, thought the pandemic shouldn’t be treated as a “one size fits all” situation.
“In the initial months of the pandemic, models were being developed here at home based on incidence rates in Wuhan. But, in terms of mass transit ridership behavior, China’s may be far different than what we see in American cities,” Taylor says.
“For instance, people in Chinese urban areas often stand in long, single file lines as they wait for trains and buses. We don’t. Different spread patterns can develop because of differences in mass transit behaviors.”
Public transit and the pandemic
Taylor’s primary research focuses on the dynamics that can occur at the intersection of human and engineered networks, such as how people change electricity consumption behaviors and changing mobility patterns in natural disasters.
Pandemics were on his research radar before COVID became a household name, as Taylor wanted to create better models to forecast the spread of illnesses. His first research effort in this direction was tracking the Ebola virus that reached Texas in 2014.
In the fall of 2019, Thomas was working as a biostatistician at the Georgia Department of Public Health when he spoke with Taylor about pursuing his PhD. Thomas submitted his application to Georgia Tech that November—just four months before COVID shut down America.
The two, along with coauthor and senior research engineer Neda Mohammadi, are now creating models to predict the spread of future illnesses among populations. They’re also looking to demonstrate how researchers can modify those models for better accuracy.
“If engineers and scientists can better understand the factors of community spread, policymakers can make faster, more accurate decisions to protect public health,” says Thomas. “In transportation, for example, it could lead to quicker decisions to restrict the number of people on buses. Or policies to stagger vehicle departure times more consistently. Studies like ours provide a basis for those decisions.”
Having more accurate models also takes varying human behavior into account, according to the researchers. Just as people in Wuhan wait for public transportation differently than those here in America, cities can differ from each other.
“Your pandemic is different than your neighbor’s,” says Mohammadi. “Pandemic spread isn’t the same from city to city, nor is ridership. Decision makers often look to other communities to see how they’re responding to shape their actions. That’s not always accurate. Models need to be customizable because populations don’t react uniformly. It’s our goal to improve decision making to be easier, faster, and more accurate for the next pandemic.”
The study appears in the journal Science of the Total Environment.
Source: Georgia Tech
In this blog post
The world focused on a complex, yet socially hyper-relevant subject in “Suicide Prevention” on World Mental Health Day last month. As with any global social issue today, organizations such as WHO or WEF showcased their support in addressing it through efforts centered around awareness creation as well as well-thought-out programs that are implementable.
For the uninitiated – let’s look at some broad numbers to start with.
- As per the WHO, suicide takes a life every 40 seconds, making it one of the leading causes of death among people fifteen to twenty-nine years old.
- An estimated 275 million people suffer from anxiety disorders and depression today. That’s around 4% of the global population. Around 62% of those suffering from anxiety are female (170 million), compared with 105 million male sufferers.
- An estimated 26% of Americans aged 18 and older – about 1 in 4 adults – suffer from a diagnosable mental disorder each year.
- Approximately 9.5% of American adults aged 18 and over will suffer from a depressive illness (major depression, bipolar disorder, or dysthymia) each year.
- In low- and middle-income countries, between 76% and 85% of people with mental disorders receive no treatment for their disorder. In high-income countries, between 35% and 50% of people with mental disorders are in the same situation.
While one might be forgiven if these issues do not “show up” at the workplace or do not negatively impact business and productivity, a look under the carpet reveals even more startling numbers:
- A recent study revealed that 48% of British workers have experienced a mental health problem in their current job.
- In India, nearly 42.5% of employees in the private sector suffer from depression or anxiety disorder, per the results of a study conducted by Assocham.
- Per the National Mental Health Survey of India (2015-16), nearly 15% of Indian adults need active interventions for one or more mental health issues.
- The UK loses an estimated 70 million man-days of effort due to conditions related to poor mental health – the resultant cost being in the range of £100 billion. On the flip side, the costs of ‘presenteeism’ are double that number.
Not all is gloom and doom though. Many studies have shown that companies of all shapes and sizes increasingly understand the importance of good mental health. Today’s leaders are aware of the negative impact that poor mental health has on business and productivity. Firms are experimenting with and implementing proactive practices such as employee friendly policies to manage working hours, Fun@ Work programs, Employee Assistance programs etc. to promote mental well-being in their employees.
The aim here is not to establish that this subject is important and needs attention. It is, however, a call to action. It is an attempt at emphasizing that while initiatives at a strategic level are perhaps the norm, there is an increasing need for the effort to become even more individualized at the grass roots. Safeguarding staff well-being, addressing problems before they become severe, and enabling those suffering with counseling when issues do emerge need more headspace in discussions. To put things in perspective, here are some interesting practices that came to light as part of a recent study:
- Leaders were encouraged to conduct a formal / informal review of employee mental health metrics along with quarterly financial results
- Mental Health and Awareness sessions/events are being conducted periodically
- Accountability is being established and the Mental Health agenda is being driven by Wellness officers at senior leadership levels across all teams
- Improving mental well-being as a driver to improving business productivity is taking an increasingly important role
- Line Managers are being trained to enhance mental well-being in their teams.
- Companies are beginning to include Mental Wellness under "Return to Work" programs and other benefits packages
- Enabling anonymous communication channels to encourage open communication and initiatives that work towards reducing the taboo that accompanies poor mental health
- Enforcing practices through an Employee Wellness policy
- Introducing “Power Down” hours where employees are encouraged to step away from their laptops and engage in non-work related interactions with their colleagues
At GAVS, we are proud to say that the topic of Mental Well-being of employees is very important to everyone across the board. For over a year now, employee initiatives under the #WELLNESS and Wellness Wednesdays umbrella have taken shape and are driving this agenda across the board. From guided meditation sessions focusing on self-awareness by some of our certified colleagues, Workstation Yoga, to targeted interventions through talks / sessions by experts, dedicated leadership to enhanced employee experience from Hire to Rehire, dedicated millennium bays where leaders and employees alike are encouraged to unwind, GAVS proactively does its bit in enabling and ensuring each GAVSian is given the opportunity to address his / her mental well-being. Flexible work arrangements and an open-door policy to the organization’s leadership are examples of other initiatives that focus on the broader employee well-being agenda as well.
Regarded as one of the greatest artists of her generation, Glenn Close said it with grace – “What mental health needs is more sunlight, more candor, and more unashamed conversation.” It is time that, as professionals and leaders, we embrace what it means to drive growth for our clients and our business, and do it while also embracing ‘being human’.
BGP attributes are an interesting subject of study. BGP is a very flexible and extensible protocol, and I like that; let’s see just how flexible the protocol is when it comes to attribute handling. We all know that BGP has four types of attributes, as listed:
- Well-known mandatory.
- Well-known discretionary.
- Optional transitive.
- Optional non-transitive.
I am not going to explain them in this post; they are explained everywhere on the internet. In short, however: well-known attributes must be recognized by all BGP implementations. Some of them are mandatory and must be included in all update messages, while others are discretionary and may or may not appear in an update. Optional attributes do not have to be understood by all BGP implementations, and they are passed on to peers or not based on the setting of the transitive bit, as we will show below.
I will focus on how BGP signals and handles these attributes in update messages. BGP path attributes are sent in BGP update messages; every attribute is a variable-length triple of [attribute type, attribute length, attribute value].
Attribute type is a two octet piece of information that consists of an Attribute Flag octet and an Attribute Type code octet. The Attribute type code speaks for itself and it carries the attribute code number for a specific attribute, for example the origin attribute has a type code number of 1 (see the example below).
And because a picture is worth a thousand words, I will start with a Wireshark capture and comment on it below.
The Attribute Flag:
The first high order bit (from the left) is the optional bit: setting this bit to 1 means the attribute is optional, while setting it to 0 defines a well-known attribute. Origin is a well-known mandatory attribute, so this bit is set to zero as you can see.
The second high order bit is the transitive bit. It defines whether the attribute is transitive (value=1) or non-transitive (value=0). Well-known attributes are always transitive and therefore their transitive bit is always set to one.
The third bit is the partial bit. It defines whether the information in an optional transitive attribute is partial (value = 1) or complete (value = 0). Well-known and optional non-transitive attributes are always set to complete. The partial bit is set to 1 in the following cases:
- Unrecognized optional transitive attribute that is passed to peers, the sender sets the partial bit.
- Optional transitive attribute attached by some router other than the originator or the route.
The fourth bit is the extended length bit and it defines whether the attribute length is one octet or more. The last four bits are not currently used.
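Putting the four flag bits together, a decoder is only a few lines of Python. The masks follow RFC 4271, and the 0x40 example matches the ORIGIN attribute in the capture above (well-known, transitive):

```python
def decode_attr_flags(flags: int) -> dict:
    return {
        "optional":   bool(flags & 0x80),  # bit 1: optional vs. well-known
        "transitive": bool(flags & 0x40),  # bit 2: transitive vs. non-transitive
        "partial":    bool(flags & 0x20),  # bit 3: partial vs. complete
        "ext_length": bool(flags & 0x10),  # bit 4: 2-octet length field
    }


# ORIGIN (type code 1): well-known mandatory, so only the transitive bit is set
print(decode_attr_flags(0x40))
```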
The sender orders path attributes in an ascending order (according to attribute type code) within the update message as shown in the packet capture above.
Smarter buildings and public facilities have long been of interest to architects and developers. Innovators can see that the promise of intelligent data used for spatial design can transform how we work, live and play.
How can Big Data be used for intelligent building design? There is a consortium of companies trying to figure this out together. I will discuss the Building 4.0 Co-operative Research Centre (CRC) in Australia.
I have already been examining the new approaches to using big data in facilities management. This is done by developing smarter office spaces, embedded with devices employing Ambient Intelligence (AmI). Research looked at how the intelligent use of big data contributed to building an environment with greater energy efficiency, optimised space utilisation, enhanced workplace experience and occupants’ comfort. This includes sound masking, the use of lighting for enhance environments, and sensors for occupancy for hygiene controls.
Ambient Intelligence (AmI)
AmI refers to electronic environments that are sensitive and responsive to the presence of employees, residents or visitors. These environments can have ecosystems (pun intended) of different IoT devices communicating with each other.
There is a real emphasis here on edge computing, sensors and other IoT devices, and building intelligence into the edge for near real-time decision making closer to where the problem may sit. Ecosystm research finds that construction firms focus a significant amount of IoT investment for building management and energy management (Figure 1).
For example, if an HVAC system is on the verge of malfunction, the system could send a message for a repair intervention. When it comes to AI, predictive maintenance and surveillance are two of the leading use cases in the construction industry (Figure 2).
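In code, an edge-side rule of this kind can be very small. The thresholds, field names, and `alerts` object below are illustrative assumptions, not a real building-management API:

```python
VIBRATION_LIMIT_MM_S = 7.1   # assumed out-of-band thresholds for illustration
SUPPLY_TEMP_LIMIT_C = 18.0


def check_hvac(sample: dict, alerts) -> None:
    # Runs at the edge, so a failing unit is flagged without a cloud round trip
    if (sample["vibration_mm_s"] > VIBRATION_LIMIT_MM_S
            or sample["supply_temp_c"] > SUPPLY_TEMP_LIMIT_C):
        alerts.send(f"HVAC unit {sample['unit_id']}: out of band, schedule inspection")
```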
Building 4.0 Co-operative Research Centre (CRC)
In Australia, this push for sustainable and smarter building development is being driven by a consortium of companies looking at Big Data and infrastructure development for buildings. This year, the Building 4.0 Co-operative Research Centre (CRC) has been awarded a USD 19.5 million grant to focus on medium to long-term industry-led collaborations that can assist in driving the growth of new industries. The Australian building and construction industry is a major economic engine that contributes 13% of GDP and employs over 1.4 million Australians. Development of the Building 4.0 CRC makes sense and is timely given the current pandemic and economic conditions.
Part of its research program focus on develop new building processes and techniques through leveraging the latest technologies, data science and AI to ultimately improve all aspects of the key building phases. Their overall ecosystem is designed for enablement of several use cases (Figure 3).
The Building 4.0 CRC’s principal aims are “to decrease waste; create buildings that are faster, cheaper, and smarter; and capture new opportunities by facilitating collaborative work between stakeholders across the whole value chain in cooperation with government and research organisations.”
Green Star, the rating system created by the Green Building Council of Australia (GBCA) in 2003, rates the sustainability of buildings, fit-outs and communities through Australia’s largest national, voluntary, holistic rating system. The GBCA is a partner organisation in the Building 4.0 CRC – as are many other major organisations in construction and trade, all pulling together on innovative efforts for the industry.
Where might the Building 4.0 CRC effort make an impact? Its collaborative structure of industry, academia, vocational trade organisations and governmental bodies harnesses innovative ideas and translates them into the transformative practices of industry and construction partners.
To be smarter, one must work smarter and more efficiently. A consortia such as this pulls the best minds together to try to accelerate industry efforts for intelligent design with data.
NASA plans to reduce the Mars Atmosphere and Volatile Evolution mission’s orbit around the red planet to support data exchange between the agency and its future rovers on Mars. The space agency said Monday that it will lower the MAVEN spacecraft’s elliptical orbit from 3,850 to 2,800 miles above the planet’s surface to serve as a data-relay satellite for the Mars 2020 rover.
MAVEN features an ultra-high-frequency radio transceiver to share data between Earth and the rovers or landers on Mars. NASA said the reduced orbit will provide the spacecraft with a stronger telecommunications antenna signal.
“It’s like using your cell phone,” said Bruce Jakosky, MAVEN principal investigator from the University of Colorado, Boulder. “The closer you are to a cell tower, the stronger your signal.”
NASA launched MAVEN to study how Mars lost its atmosphere and continues to analyze the structure and composition of the planet’s upper atmosphere until it begins new communications tasks. The agency expects the spacecraft to continue operations through 2030.
Win32:Malware-gen (and Win64:Malware-gen) is a category for malicious but unspecified threats. Lots of files get detected by this name. It is difficult to say what harm a Win32:Malware-gen file might cause, only that it might be dangerous.
As it’s a threat, it is important to remove the Win32:Malware-gen item and to find out how it got on your computer.
It is possible that the Win32:Malware-gen detection was mistaken and that there’s nothing wrong with your files. Still, the warning should be taken seriously and investigated.
Win32malware Gen quicklinks
- What does Win32:Malware-gen do?
- It looks like malware
- Is Win32:Malware-gen dangerous?
- How viruses spread
- How to protect yourself after an attack
- How to remove Win32:Malware-gen
- Automatic Malware removal tools
About Win32:Malware-gen in short:
- **Problems caused by Win32:Malware-gen:** stolen credentials and personal information; adware and other malware installed.
- **How malware gets installed:** downloaded from malicious sites; spread by malicious ads, emails, and social media messages; embedded in infected software.
- **How to remove Win32:Malware-gen:** use your antivirus tools (such as Spyhunter, Malwarebytes, and others) to find and remove all malware.
What does Win32:Malware-gen do?
It looks like malware
Win32:Malware-gens are programs whose behavior resembles known malware.
Other names that have a similar meaning to Win32:Malware-gen are Trojan:W32/Generic, Win32.Generic, Trojan.Win32, Trojan.Win64, Win64:Malware-gen, etc. Another similar type of detection is Trojan.Generic.
Though your antivirus program isn’t able to match it to any specific virus, it appears like it might be dangerous.
Some examples of files that get detected as Win32:Malware-gen:
- adware Powzip;
- optimizers Wise System Mechanic and GarGizer System Repair;
- various file-encrypting ransomware infections;
- cracking tools, game cheats and hacks;
- spyware and info stealers like Zusy.
Sometimes, one antivirus scanner recognizes a file as a Win32:Malware-gen, while another scanner is confident of the identity of the virus. No one antivirus is perfect.
It’s possible that a file you downloaded yourself got detected as Win32:Malware-gen. A miner, a software crack, or a game cheat that you wanted to download. This doesn’t mean that the Win32:Malware-gen detection is a false positive, though. Such files do look suspicious to antivirus scanners. If you truly trust the file, create an exception for it.
Is Win32:Malware-gen dangerous?
Win32:Malware-gen is mysterious by definition, but here are some common malware features that the detected virus could have:
- Spy on the victim and steal credentials, files, and clipboard contents.
- Change system settings, download and upload files, execute a program, install other malware.
- Show ads and promote unwanted websites in the browser.
Some Win32:Malware-gens could steal your credentials that may be used to hack your accounts or download and install adware. At worst, a Win32:Malware-gen can steal money by, for instance, replacing clipboard contents when it recognizes a bank account number or a cryptocurrency wallet.
Luckily, not all Win32:Malware-gens are banking trojans. Try scanning the malicious file (if it was not deleted by your antivirus tool) with other scanners to see if you get more specific malware matches or ask the support of your antivirus about what your Win32:Malware-gen could be. This can also help you know whether the file is truly malicious. For example, here’s a fake Flash Player installer’s scan results: Virustotal.com.
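If you want to check a suspicious file against many engines programmatically, a hash lookup against the VirusTotal v3 API is one option. A minimal sketch — you need your own API key, and free keys are rate-limited:

```python
import hashlib

import requests


def virustotal_report(path: str, api_key: str) -> dict:
    sha256 = hashlib.sha256(open(path, "rb").read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
    )
    resp.raise_for_status()   # 404 means no engine has seen this hash yet
    return resp.json()
```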
How viruses spread
Even after the Win32:Malware-gen threat was removed (or had an exception made for it), it is important to know how it got on your PC so that other infections can be avoided.
Malware spreads in a few ways:
- Malicious email spam. Malicious files often come in email attachments. For instance, the Emotet trojan spreads this way.
- Infected ads. Bad advertisements lead to malicious websites.
- Malicious websites. Infected links are shared in social media, in personal messages. They may also be posted in comments and descriptions.
- Pirated software and media. Some pirated files come infected with malware.
If you know how Win32:Malware-gen got onto your computer, you’ll be better able to avoid infections.
How to protect yourself after an attack
Regardless of what the malware is exactly, it needs to be removed before it can cause any harm. Hopefully, your antivirus immediately placed the threat in quarantine or deleted it.
Even if you’re not sure what Win32:Malware-gen was exactly, there are a few things you might want to do after removing it:
- Change your passwords. Reset them and use 2FA where possible.
- Watch your bank account for any suspicious activity.
- Check your computer for other viruses.
How to remove Win32:Malware-gen
If your antivirus automatically deleted all the Win32:Malware-gen threats, then you should be safe.
On the other hand, if the Win32:Malware-gen detections keep repeating, then there may be a malicious item that’s not been caught yet. Scan your computer with your antivirus program or another scanner, such as Spyhunter, Malwarebytes, and others. Or ask your antivirus program’s support for help in finding the threat.
If you suspect that the Win32:Malware-gen detection is mistaken, then report the file as a suspected false positive and make an exception for it. However, you should double-check if the file is truly safe with other scanners and/or with the support staff of your antivirus. It’s better to be too careful now than to be sorry later.
If the Win32:Malware-gen detection is not mistaken but you trust the file, then just make an exception for it. But be very careful and only use official websites to download software.
Automatic Malware removal tools
John A. Dilley is Chief Architect for Rafay Systems.
“Now they know how many holes it takes to fill the Albert Hall.”
--The Beatles from “A Day in the Life”
But how many edges will it take to cover the planet ... with a low-latency network?
The Beatles' lyric makes no sense: you cannot fill a hall with holes. But let's take the related question as a thought experiment – and see how many edge locations it would take to provide fast service for network requests. In this context, an "edge location" is the closest place to you that runs some application you're interested in. How many do we need to get network communication delay low enough for fast service? This is a key question for edge computing because moving apps closer to users provides better performance.
Assume for the moment that we have enough compute resource to serve the request at each location, so we need one location close enough to each user such that network latency is below a given bound. Latency is the minimum packet travel time, bounded by the speed of light in fiber and optical/electrical links.
So how many locations will we need to be within a certain "latency envelope" of everyone on every square mile of the planet? Earth's land area is approximately 57 million square miles. Let's do some math.
Using a rough estimate that 1 millisecond round-trip-time (msec RTT) is 30 miles geographic distance, a 10 millisecond RTT approximately represents a 300-mile distance, assuming Internet connectivity. A point within a box 300 miles on each side can get to most everywhere in that box within 10 msec RTT (corner to corner is longer, but every point is below 10 msec from the center). That box is 90,000 square miles ... 633 of them would provide 10 msec RTT to all of Earth's land area.
Here's a table for a few RTT values:
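The table's formatting didn't survive, but the numbers are easy to regenerate from the rule of thumb stated above (30 miles of reach per millisecond of RTT, roughly 57 million square miles of land) — a short sketch:

```python
LAND_SQ_MI = 57_000_000        # Earth's land area, from the text above
MILES_PER_MS_RTT = 30          # rule of thumb: 1 msec RTT ≈ 30 miles

for rtt in (5, 10, 20, 40):
    side = rtt * MILES_PER_MS_RTT           # box side length in miles
    sites = round(LAND_SQ_MI / side**2)     # boxes needed to tile the land area
    print(f"{rtt:>3} ms RTT -> {side:>5}-mile boxes -> ~{sites} sites")

# Output: 5 ms needs ~2,533 sites; 10 ms -> ~633; 20 ms -> ~158; 40 ms -> ~40
```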
This thought experiment ignores network effects, like packet loss and access network delays. In today's 4G mobile networks a base station radio access network (RAN) has an access time of around 15 msec, so getting to 10 msec is already improbable. The good news is the access latency drops significantly in 5G designs, which is right around the corner. And the fact that some boxes will extend over uninhabited water, tundra, and so forth; it's actually the population centers that matter the most. But as a thought experiment allow me some hand waving. We could also use circles, triangles or hexagons, but the numbers will be very similar. They are dominated by the quadratic relationship of RTT (converted to a distance, r) to area (a = r²).
The takeaway point is the number of locations increases significantly with lower latency bounds.
So now we know how many edges it takes to cover the planet! Roughly ...
In my next post, I’ll examine some assumptions made here, specifically the challenges around placing applications in the right edge locations. Each edge will be resource constrained, so we can’t put everything everywhere. And we assumed earlier we have the application in place for the end user.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Protection from Spam Mail Tips
"Spam" has been an Internet buzzword since the dawn of email, but what is it, exactly? Basically, spam mail is a form of junk email, unwanted by the recipient. It commonly is a single message which contains advertisements and is sent to a variety of recipients who never agreed to receive such messages. The most common methods spammers use to build up their email lists include: purchasing lists of addresses, tricking users into handing over their information through fake contests and false freebie offers, or using email-harvesting programs to extract addresses from websites.
Why Is Spam Mail More Than a Nuisance
There are many reasons to avoid interacting with spam mail, but some of the truly troubling scenarios include the possibility that you'll be putting yourself at risk for identity theft or allowing an attacker to load viruses and malware onto your computer. In the worst-case scenarios, you could even be charged with crimes you weren't aware you were helping the spammer commit — such as being involved in money laundering or handling stolen items. In most cases when handling a spam message, the best course of action is to simply delete the message immediately.
As a rule of thumb, if you want to avoid spam messages, it is important to remember that if something is too good to be true, then it probably is. This will help you weed out contests and offers that seem iffy.
How To Protect Yourself From Spam Mail
Although spam mail can be difficult to avoid, you can greatly reduce and even eliminate the amount of spam clogging your inbox by using proper anti-spam software. Thanks to advances in software intelligence, many anti-spam filters are able to automatically learn which messages are legitimate and which are spam with minimal user intervention. In the case that a spam filter misses a spam message, the user simply can flag the message. That causes the filters to adapt to the new threat.
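Under the hood, many adaptive filters are variations on naive Bayes text classification. This toy sketch shows the core idea — each user flag updates the word statistics, and classification compares spam vs. ham scores. Real filters use far more signals and much better smoothing:

```python
import math
from collections import Counter


class AdaptiveSpamFilter:
    """Tiny naive-Bayes filter: every user flag updates the statistics."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.messages = {"spam": 0, "ham": 0}

    def flag(self, text: str, label: str) -> None:
        # A flagged message immediately adapts the filter to the new threat
        self.words[label].update(text.lower().split())
        self.messages[label] += 1

    def is_spam(self, text: str) -> bool:
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.words[label].values())
            scores[label] = math.log(self.messages[label] + 1)   # log prior
            for w in text.lower().split():
                # crude add-one smoothing so unseen words don't zero the score
                scores[label] += math.log((self.words[label][w] + 1) / (total + 1))
        return scores["spam"] > scores["ham"]
```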
By implementing a proper Internet security protection suite, you can greatly reduce the dangers of spam mail by ensuring it is filtered away from your inbox and other important email folders. Additionally many Internet security softwares provide users with phishing protection, which can help in cases when an email appears to be legitimate but it isn't. Since these emails often ask for bank and other financial credentials, protection against phishing protection is a vital feature in any anti-spam tool.
The Importance of Bundling Security Software
When shopping for anti-spam software, it pays to pick a solution that is bundled with antivirus protection because some spam messages come laden with viruses and other malware. By using a single software suite, you greatly simplify the process of securing your computer while also improving the reliability of your system. Although anti-spam software is able to divert the messages from your inboxes, having a solid antivirus software program ensures that if you accidentally open a spam message, your computer does not get infected.
Gone are the days when the internet felt novel: AOL Instant Messenger opened up a new way of communicating; Google searches yielded new info at mind-blowingly quick speeds; a shared computer in a common space was the norm. We lived and learned through our amateur mistakes—getting hacked, falling for phishing scams, using our first names and birthdays as passwords.
For younger generations who’ve grown up with technology and social media, the internet has always been ubiquitous. They carry it in their pockets and use it to stay chronically connected to friends and to navigate everyday life and learning. Is security at the forefront of their minds, or is it something they take for granted? Essentially, are they doomed to repeat our mistakes?
Since many schools and activities are still operating virtually some of the time, it’s the perfect time to evaluate your kids’ understanding and awareness of digital privacy, and brush up on your own knowledge so that you can be a good guide. Here are Dashlane’s tips for encouraging good cybersecurity habits in kids.
The internet can be murky, but we can’t expect kids to avoid it. Rather than talking about the internet like it’s the boogeyman, arm your kids with the knowledge they need to navigate safely:
Just like you'd stress the importance of keeping an ATM PIN secure, remind them that login info and passwords are for their eyes only. Teach good password creation habits (or better yet, add them to your family password manager plan with Dashlane).
Make sure they check with you before downloading apps (you can also set parental controls on an iPhone or Apple Device to prevent downloads and purchases from the App Store). For Android, Google offers a Family Link app that allows you to pair your device with your kids’, manage their app downloads, and set limits on screen time.
Websites and WiFi
Teach them how to identify a secure WiFi network. The simplest rule: if you click on a network and it asks for either a WPA or WPA2 password, you know it's secure. Both types of passwords are keys for accessing a secured WiFi network; the latter is a more recent version that uses AES (Advanced Encryption Standard) encryption for maximum security. They'll also want to make sure that websites start with "https" (the "s" stands for secure). Limit their access to specific web content using parental controls, which you can set up on their Apple and Android devices.
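For the technically curious, the "https" guarantee can even be checked in code. The short Python sketch below connects to a site and prints its TLS version and certificate subject; the hostname is just a placeholder, and this is an illustration rather than a parental-control tool.

```python
# Verify that a site speaks TLS (the "s" in https) and inspect its certificate.
import socket
import ssl

hostname = "example.com"  # placeholder site
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("TLS version:", tls.version())            # e.g. TLSv1.3
        print("Issued to:", tls.getpeercert()["subject"])
```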
With the family password manager, enjoy up to six separate accounts for 75% less than the cost of six individual subscriptions. Easily upgrade from Free or switch from Premium within 30 days to get your money back.
A year-long Stanford study concluded that most school-age children have a hard time differentiating between articles and sponsored content, and possess a general lack of skepticism when it comes to what they read online. Advertisers and content creators are adept at getting users to click and explore ads, apps, games, and articles—just think how likely you are to let curiosity get the best of you when presented with targeted ads. It’s important to encourage kids to think critically about the information they’re presented with online, and to be critical thinkers when navigating the internet.
A good golden rule: if you can’t share it with your parents, it’s probably not something you want to put online. There’s certainly a tendency to overshare on social media, and the consequences can range from sheer regret to jeopardizing kids’ safety. Remind your kids that what they put online, even in private channels, stays online, and can be found if someone really wants to find it. Depending on their age, it’s a good idea to monitor their social media accounts, and tell them to keep their accounts private and avoid friending anyone they don’t know in real life. Schedule a check-in with them and scope out their requests and DMs to rid them of bots and scammers.
Many schools have their own policies when it comes to using personal devices at school. Talk to your child’s school to find out their rules, and to see if they teach students “digital literacy”—seeing media through a critical lens. Resources like Common Sense offer courses for empowering students in their digital lives, helping them become more adept at navigating the internet.
Familiarize yourself with good cybersecurity habits, from understanding the trail you leave online to quickly improving your online security. Be a resource should they come to you for advice. Set good examples when using your devices, such as not texting while you drive, and being mindful of your own screen time, as kids are likely to pick up on these habits. Likewise, underscore the importance of keeping track of your devices and making sure they are password-protected.
By now, you have guessed which network security model saves people daily from hackers – the zero-trust model. Ever since this approach came into being, it has been a hot topic of discussion in security circles. So, how does it help companies to stay safe? In this article at TechTarget, Sandra Gittlen and Laura Fitzgibbons share how this network security model works. They also discuss the architecture and implementation methods as well.
A Network Security Model to Reduce Breaches
The Zero-Trust Definition
When companies place too much trust in individuals or systems, hacking is bound to happen. The authors assert, “no user, even if allowed onto the network, should be trusted by default because they could be compromised.” That is what the zero-trust model is all about.
Significance of the Network Security Model
Though VPNs and firewalls are in place, they only allow access to known individuals and systems. Since people work in remote locations, the perimeter approach does not help. A zero-trust approach enhances data protection, improves auditing, and reduces breaches and identification challenges. The network security model also increases network visibility and provides much better control of the cloud setup.
The Workings of ZTNA
Zero-trust network access (ZTNA) is a section of the zero-trust model that authenticates you based on your identity but hides the IP address. This helps remote users to stay safe without the fear of hackers discovering their network location.
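As a rough illustration (not any vendor's actual API), the core zero-trust check can be sketched in a few lines of Python: every request is re-evaluated on identity, device health, and entitlement, with no implicit trust granted for being "inside" the network. The policy table and field names below are invented.

```python
# Illustrative zero-trust authorization: every request is checked, every time.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool
    resource: str

# Hypothetical entitlement table: who may reach which resource.
ALLOWED = {("alice", "payroll-db"), ("bob", "wiki")}

def authorize(req: Request) -> bool:
    # No implicit trust: identity, device posture, and entitlement
    # are all verified on every single request.
    if not req.mfa_passed or not req.device_compliant:
        return False
    return (req.user, req.resource) in ALLOWED

print(authorize(Request("alice", True, True, "payroll-db")))   # True
print(authorize(Request("alice", True, False, "payroll-db")))  # False: bad device
```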
Zero trust is not an off-the-shelf product that you simply install. It is a mindset and a decision, per IEEE senior member Jack Burbank. However, independent analyst John Fruehe cautions that implementing it in non-critical workspaces can be overkill.
The network security model can protect third parties integrated into your corporate architecture. Additionally, remote workers can access your company's cloud database without worry. Lastly, you will get IoT security and visibility. Per a joint report by Cybersecurity Insiders and Pulse Secure, organizations with a zero-trust model achieved continuous authorization, better trust among employees and customers, and stronger data protection.
Different Than SDN and VPN
Software-defined perimeter (SDP) and virtual private network (VPN) are popular network security models too. Though the three models may not seem to work well together, you can develop strategies to make them collaborate effectively.
Though it is not available in a single application or product, you can create a zero-trust environment through specific tools. For instance, security tools for the workforce, devices, networks, data, analytics, etc., can build a robust setup.
Executing the Model
If you want to adopt zero-trust as your network security model, assemble a group of security and network professionals. The security professionals will develop and maintain the framework. Meanwhile, the network team will take care of the network architecture.
To view the original article, please visit this link: https://www.techtarget.com/searchsecurity/definition/zero-trust-model-zero-trust-network
Microsoft will go open source and Apple will be selling reasonably priced computers before you need to worry about IPv6, so don’t waste your time drawing up plans to implement the next generation Internet protocol.
The principal raison d’etre of IPv6 is that the world is running out of IP addresses, but the truth is that there’s loads of them about – more than enough to last till long after your grandchildren have retired.
At the beginning of the year there were about 925 million free IP addresses. That's a very large number. And since about 100 million a year are being handed out, we won't run out for almost a decade at that rate.
But that assumes that people are stupid, and fail to alter their behavior when it becomes sensible to do so. Since people aren’t stupid–most people, anyway–there’ll be free IP addresses for far longer. Here’s what will change:
The last time I looked, there was a whole bunch of organizations, like Eli Lilly and MIT to name just two, with Class A (or /8 in Classless Inter-Domain Routing (CIDR)-speak if you prefer) networks made up of almost seventeen million individual IP addresses each. Now you have to ask yourself: do they really need seventeen million IP addresses? Each? What’s Eli Lilly planning – individually addressable Prozac pills? I don’t think so. If and when IP addresses really start to get scarce, many of the IP addresses in these Class As will be reassigned. Perhaps they’ll be appropriated, maybe there’ll be a market and blocks of IP addresses will be bought and sold. Who knows? But something will happen.
And let’s not forget Network Address Translation. Thanks to the wonders of NAT, each IP address can be shared by many, many other machines. A Class A network can connect billions of individual hosts to the Internet using NAT. That’s enough to give every man, woman and child in America an address for a desktop, laptop, network printer, IP phone, cellphone and even a toaster if they want. Russia? China? Brazil? A few billion each should be plenty for them too.
Even using NAT to share a single IP address between just two hosts, that would mean twenty years before we run out of IP addresses at the current rate of consumption, and by sharing an IP address with ten we’ve got a century to go. Now I don’t know how computers will be communicating with each other in a hundred years time, but I’ll bet it won’t be using IP. Robots made with nanotechnology will have devised something better, no doubt.
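If you want to check the arithmetic, a few lines of Python reproduce the numbers above. The figures are the ones quoted in this article, not authoritative measurements.

```python
# Back-of-the-envelope IPv4 exhaustion estimates from the article's figures.
free_addresses = 925_000_000     # free IPv4 addresses at the start of the year
consumed_per_year = 100_000_000  # addresses handed out per year

print(free_addresses / consumed_per_year)   # ~9.25 years with no sharing

# With NAT, each public address can front several hosts:
for hosts_per_address in (2, 10):
    years = free_addresses * hosts_per_address / consumed_per_year
    print(f"{hosts_per_address} hosts per address: ~{years:.0f} years")
```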
So forget about drawing up implementation strategies and working out which bits of your hardware and software need to be scrapped or upgraded. IPv6? Put your feet up: it’s never going to happen.
With small and medium-sized organizations now storing, managing and accessing more data than ever before, cybersecurity is now a crucial component for business success.
In fact, cyber crime has continued to grow over the past few years. According to the Washington Post, estimated global losses from cybercrime are projected to hit just under a record $1 trillion for 2020 - almost double the 2018 figure, when a reported $500 billion was lost.
Falling prey to a cyber attack could completely cripple the operations of your business, or even lead to huge fines and costs that could potentially put you out of business. After all, according to IBM, the average cost of a data breach in 2020 was a staggering $3.86 million dollars.
With that in mind, it’s crucial your organization implements and invests in a cybersecurity strategy that effectively protects your network and your data. To help get you started, OT Group has created this list of the most important cybersecurity terms your small business needs to know.
The most important cybersecurity terms for small businesses
Antivirus software is a computer program that is used to prevent, detect and remove malicious programs and files from a computer or network. Most antivirus programs run automatically in the background to provide real-time protection.
A backup, or data backup, is a copy of computer data that’s taken and stored elsewhere so it can be used to restore the original in the event of data loss or a data breach.
A breach coach is a lawyer with specific knowledge of and experience in cybersecurity. This lawyer helps organizations navigate the required response after they have been subjected to a data breach.
Brute force attacks
A form of hacking that uses trial-and-error to guess login information, encryption keys or other sensitive access points into a company’s network. Hackers slowly work through all possible combinations in a bid to gain access to an account.
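To see why password length matters against this kind of attack, consider the small Python sketch below; the alphabet and lengths are illustrative only.

```python
# The number of candidate passwords grows exponentially with length,
# which is what makes brute force impractical against long passwords.
import itertools
import string

alphabet = string.ascii_lowercase  # 26 characters

for length in (4, 8, 12):
    print(f"length {length}: {len(alphabet) ** length:,} combinations")

# The attacker's trial-and-error loop, enumerating 2-character guesses:
guesses = ("".join(p) for p in itertools.product(alphabet, repeat=2))
print(next(guesses), next(guesses))  # aa ab
```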
A computer bug is an error or flaw in the coding of a computer program that produces unexpected results. A bug can also represent a vulnerability in a system that could be discovered by cybercriminals.
Credential stuffing is a type of cyberattack in which a cybercriminal uses stolen credentials, such as usernames, email addresses and their corresponding passwords, to gain unauthorized access to a user’s account.
The practice of protecting computers, servers, mobile services, network and company data from various forms of malicious cyber attacks. These attacks are aimed at accessing sensitive information, extorting money from businesses or interrupting normal business processes.
A data breach occurs when internal, sensitive data is made accessible to external entities without authorization.
Denial-of-service attack (DoS)
A type of cyberattack where a computer is used to flood systems, services or networks with traffic that exhausts their bandwidth, preventing users from completing legitimate requests.
Distributed denial-of-service attack (DDos)
DDoS is the same as a denial-of-service attack, but instead of just one computer it’s when multiple systems target a single system. The targeted network is bombarded with packets from multiple locations.
A method used to scramble data, making it unreadable to anyone without the encryption key. Encryption makes it difficult for cyber criminals to steal data, especially when end-to-end encryption is used.
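As an illustration, here is a minimal symmetric-encryption example using the third-party Python "cryptography" package; the sample plaintext is invented.

```python
# Symmetric encryption: the same secret key encrypts and decrypts.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # must be kept secret and shared safely
f = Fernet(key)

ciphertext = f.encrypt(b"Quarterly payroll: $48,200")
print(ciphertext)             # unreadable without the key
print(f.decrypt(ciphertext))  # b'Quarterly payroll: $48,200'
```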
An endpoint is any device connected to your network, including laptops, mobile devices, printers and other pieces of hardware. Cybercriminals can use endpoints to gain access to a company’s network.
A firewall is a network security system that monitors and controls incoming and outgoing network traffic, based on a range of predetermined security measures. A firewall acts as a barrier between a trusted network and an untrusted network.
A person who uses their knowledge of programming code or a computer system to modify its functions or operations. Hackers can be ethical and authorized to find vulnerabilities, or malicious and unauthorized.
Incident response plan
This is a strategy created by a business to detail exactly what to do to immediately secure the company’s network and data in the event of a security breach. An incident response plan can include emergency contacts and how to recover data.
Initial control point (ICP)
This is the initial point in your network that a hacker gained control of to execute their attack.
Malware, also known as malicious software, is an umbrella term used to describe a range of malicious software attacks that aim to breach your company’s network through vulnerabilities. Malware includes software such as spyware, ransomware and computer viruses.
Multi-factor authentication (MFA)
A form of authentication that adds an additional layer of security by requiring users to provide a second, or even third, factor of authentication to get into an account. This additional factor could be anything from a mobile phone or email address to a fingerprint or voice.
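For example, the rotating one-time codes used by many authenticator apps can be sketched with the third-party Python "pyotp" package; this is an illustration, not a production setup.

```python
# Time-based one-time passwords (TOTP), a common second factor.
import pyotp  # pip install pyotp

secret = pyotp.random_base32()   # shared once, e.g. via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code shown in the app
print("Current code:", code)
print("Verifies:", totp.verify(code))  # True within the time window
```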
Most organizations use a network. It’s a group of computers that are virtually connected to each other, in order to share files, data, and applications. Cybersecurity strategies are typically created to protect an entire network, not just one computer.
An update or change for an operating system or applications. A patch is used to repair flaws or bugs in a system, securing potential vulnerabilities.
Phishing is a process in which cybercriminals attempt to steal sensitive information through fraudulent communications that appear to come from a reputable source. This cybersecurity threat typically aims to steal sensitive data such as login information or credit card details through fraudulent emails or phone calls.
Ransomware, a type of malware, is a malicious software that encrypts a user’s data. The attacker then demands a ransom from the user to restore access to the data. The hacker promises to hand over a decryption key upon payment, but there’s no guarantee of that happening.
As part of the recovery process, employees should have guidelines on how they can quickly access backed-up data in the event of a cybersecurity incident.
In cybersecurity, social engineering is a form of psychological manipulation that attempts to trick people into revealing sensitive information.
A form of unwanted and unsolicited communication that typically is received via email. While most forms of spam are legitimate advertising, some will fall under the phishing category and will include malicious links and attachments.
Another form of malware, spyware is malicious software that’s designed to enter a computer system, gather data about the user and forward it to a third party without their consent. Spyware, however, can also be legitimate software that monitors your data for commercial purposes - such as for advertising.
A trojan horse is a type of malware that is disguised as legitimate software. Trojan horses are used by cyber criminals to gain access to a users’ system by tricking them through social engineering.
Any access or use of a computer system, network or resource by a user who was not explicitly granted authorization to access them.
Virtual private network (VPN)
A VPN provides privacy, anonymity and security to users by creating a private network connection across a public network connection. This is great for remote work, as it secures your employees’ internet connections no matter where they are working from.
A type of malicious code or program that’s written to alter or modify the way a computer operates, typically by attaching itself to a legitimate program or document. A computer virus is designed to spread from one computer to another.
Vishing, also known as voice phishing, is the phone’s version of email phishing. It uses automated voice messages in an attempt to steal private and financial information from a user.
A vulnerability is any weakness in a company’s network or security systems that cybercriminals can use to access its network, applications or systems.
A computer worm is a type of malware that spreads copies of itself from computer to computer. By duplicating itself, a computer worm is able to spread to other systems and is typically used to deposit other forms of malware on each of the systems it encounters.
Want to learn more about cybersecurity and how to protect your small or medium-sized business in Ontario from potential threats? Contact OT Group today. We would love to help better secure your business.
Saki Gouran / Saki Gouzanyama / Gourdsan Tomb
This one was vaguely marked on the map I was using. It was a very detailed map, describing a turn-by-turn tour through the sites north and west of the ancient Imperial palace grounds. Unfortunately, the details were all in Japanese. Unlike the other kofun described on these pages, there was no label on that detailed map, just a partial outline. If you zoom in on Google Maps it gets the same icon or marker as the other kofun. However, the only labeling is entirely in Kanji. Pasting that Kanji label into Google Translate yields "Gourdsan Tomb". Pasting the Kanji label into Google search leads to a Japanese Wikipedia page. Right-clicking in Chrome and asking for a Google automated translation yields a page with the title of "Saki Gourdsan Kofun", opening with the following:
Saki Gouran Mountain Burial Mound
The Saki Gouzanyama Kofun is a keyhole-shaped keyhole in the early middle of the Kofun period, located in Saki Hanmon door in Nara City, Nara Prefecture. It is one of the Saki Shield column ancient tomb group, which is often referred to simply as Gurusan mountain tumulus, but here we use this name to distinguish it from the Guruzan mountain tomb located all over the country. This ancient tomb is designated as a national historic site.
Below is the map reference. I'm approaching from the east, from the Kofun of Empress Iwa-no-hime.
Let's approach this from the Kofun of Empress Iwa-no-hime. In the first picture below, I'm on a small lane running west between some farm fields. I have stopped and turned back to look to the southeast. The Minakami Pond is nearby. Beyond it is the ancient Imperial palace of Heijō-kyō. The city of Nara is beyond that. The mountains on the horizon are beyond Nara.
Beyond the small farms is a subdivision. Narrow streets, no lawns around the houses.
I have arrived at the mystery kofun. It is a small park with houses to the east and west and a pond to its southwest.
Location: 135.7896° E, 34.6991° N
Circle diameter: 60 meters
Width at base: 45 meters
Height of circle: 10 meters
Height of bottom end: 7 meters
There used to be a mound, the Marubuka tomb, very close to the southwest. The moat did not extend all the way around this kofun. That other mound has since been destroyed, a pond is there now.
The Japanese Wikipedia page for this kofun says that some excavations were done in 1913. "Taishō 2 years" is a reference to the 123rd Emperor, Taishō, meaning that the date was in the 2nd year of his rule. Taishō took the throne in the middle of 1912. He had contracted cerebral meningitis within three weeks of his birth and was never completely healthy. His neurological problems had worsened and were continuing to degrade when he took the throne, and Crown Prince Hirohito was named sesshō or Prince Regent in November 1921. Hirohito became Emperor when Taishō died in 1926.
In 1913 (Taishō 2 years) earth and sand removal was carried out, oval clay rhinoceros having a major diameter of 100 cm and a minor diameter of 60 cm were detected, and three chordal stone products were excavated from the inside.
Kawakami "Ancient ruins dictionary" (1995) p.538
A path circles the kofun. The grass is beaten down along informal walking paths that have formed by people walking up and over the peak of the mound. Clearly this kofun isn't believed to have any connection to the Imperial house.
This anonymous kofun is at the northeast edge of a cluster, three of which are considered to be Imperial tombs.
This chapter covers the following exam topics:
1.0 Network Fundamentals
1.1 Explain the role and function of network components
1.2 Describe characteristics of network topology architectures
6.0 Automation and Programmability
6.1 Explain how automation impacts network management
6.2 Compare traditional networks with controller-based networking
6.3 Describe controller-based and software defined architectures (overlay, underlay, and fabric)
6.3.a Separation of control plane and data plane
6.3.b Northbound and southbound APIs
The CCNA certification focuses on the traditional model for operating and controlling networks, a model that has existed for decades. You understand protocols that the devices use, along with the commands that can customize how those protocols operate. Then you plan and implement distributed configuration to the devices, device by device, to implement the network.
The 2010s have seen the introduction of a new network operational model: Software Defined Networking (SDN). SDN makes use of a controller that centralizes some network functions. The controller also creates many new capabilities to operate networks differently; in particular, controllers enable programs to automatically configure and operate networks through powerful application programming interfaces (APIs).
With traditional networking, the network engineer configured the various devices device by device, with changes requiring a long timeframe to plan and implement. With controller-based networking and SDN, network engineers and operators can implement changes more quickly, with better consistency, and often with better operational practices.
This chapter introduces the concepts of network programmability and SDN. Note that the topic area is large, with this chapter providing enough detail for you to understand the basics and to be ready for the other three chapters in this part.
The first major section of this chapter introduces the basic concepts of data and control planes, along with controllers and the related architecture. The second section then shows separate product examples of network programmability using controllers, all of which use different methods to implement networking features. The last section takes a little more exam-specific approach to these topics, comparing the benefits of traditional networking with the benefits of controller-based networking.
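As a taste of what is to come, the sketch below shows what a northbound REST API interaction with a controller can look like from Python. The URL, token, and JSON fields are hypothetical placeholders, not any specific vendor's API.

```python
# Hypothetical northbound API calls: ask the controller for state,
# then push one network-wide policy instead of configuring each device.
import requests

CONTROLLER = "https://controller.example.com/api/v1"  # placeholder URL
HEADERS = {"X-Auth-Token": "EXAMPLE-TOKEN"}           # placeholder token

devices = requests.get(f"{CONTROLLER}/devices", headers=HEADERS).json()
print(len(devices), "devices known to the controller")

policy = {"name": "block-telnet", "match": {"tcp_port": 23}, "action": "deny"}
resp = requests.post(f"{CONTROLLER}/policies", json=policy, headers=HEADERS)
print(resp.status_code)
```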
“Do I Know This Already?” Quiz
Take the quiz (either here or use the PTP software) if you want to use the score to help you decide how much time to spend on this chapter. The letter answers are listed at the bottom of the page following the quiz. Appendix C, found both at the end of the book as well as on the companion website, includes both the answers and explanations. You can also find both answers and explanations in the PTP testing software.
Table 16-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
SDN and Controller-Based Networks | 1-3
Examples of Network Programmability and SDN | 4, 5
Comparing Traditional and Controller-Based Networks | 6
1. A Layer 2 switch examines a frame’s destination MAC address and chooses to forward that frame out port G0/1 only. That action occurs as part of which plane of the switch?
2. A router uses OSPF to learn routes and adds those to the IPv4 routing table. That action occurs as part of which plane of the switch?
3. A network uses an SDN architecture with switches and a centralized controller. Which of the following terms describes a function or functions expected to be found on the switches but not on the controller?
A northbound interface
A southbound interface
Data plane functions
Control plane functions
4. Which of the following controllers (if any) uses a mostly centralized control plane model?
Cisco Application Policy Infrastructure Controller (APIC)
Cisco APIC Enterprise Module (APIC-EM)
None of these controllers uses a mostly centralized control plane.
5. To which types of nodes should an ACI leaf switch connect in a typical single-site design? (Choose two answers.)
All of the other leaf switches
A subset of the spine switches
All of the spine switches
Some of the endpoints
None of the endpoints
6. Which answers list an advantage of controller-based networks versus traditional networks? (Choose two answers.)
The ability to configure the features for the network rather than per device
The ability to have forwarding tables at each device
Programmatic APIs available per device
More consistent device configuration
Answers to the “Do I Know This Already?” quiz:
5 C, D
6 A, D
I’ve always had my doubts about Chip and Pin (or EMV to give it its proper name). We’ve all heard stories of people having cards stolen and used, when this should be impossible without the PIN. There are also credible stories of phantom withdrawals. The banks, as usual, stonewall; claiming that the victim allowed their PIN to be known, and that it was impossible for criminals to do this while you still had the card so someone close to you must be “borrowing” it.
In the old days it was very easily to copy a card’s magnetic strip – to “clone” the card. Then all the criminals needed was the PIN, which could be obtained by looking over someone’s shoulder while they entered it. Cash could then be withdrawn with the cloned card, any time, any place, and the victim wouldn’t know anything about it. Chip and Pin was designed to thwart this, because you can’t clone a chip.
Well, it turns out that you don’t have to clone the card. All you need to do is send the bank the same code as the card would, and it will believe you’re using the card. In theory this isn’t possible, because the communications are secure between the card and the bank. A team of researchers at Cambridge University’s Computer Lab has just published a paper explaining why this communication isn’t secure at all.
I urge to you read the paper, but be warned, it’s unsettling. Basically, the problem is this:
The chip contains a password, which the bank knows (a symmetric key), and a transaction counter which is incremented each time the card is used. For an ATM withdrawal this data is encrypted and sent to the bank along with the details of the proposed transaction and the PIN, and the bank sends back a yes or no depending on whether it all checks out. It would be fairly easy to simply replay the transaction to the bank and have it send back the signal to dispense the money, except that a random number (nonce) is added before it's encrypted, so no two transactions should be the same. If they are, the bank knows it's a replay and does nothing.
What the researchers found was that with some ATMs, the random number was not random at all – it was predictable. All you need do is update your transaction with the next number and send it to the bank, and out comes the dough. It’s not trivial, but its possible and criminals are known to be very resourceful when it comes to stealing money from ATMs.
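To make the attack concrete, here is a simplified Python illustration. HMAC-SHA256 stands in for the card's real MAC algorithm, and the key and field names are invented; the point is only to show why a predictable "unpredictable number" defeats the replay protection.

```python
# Simplified "pre-play" illustration: if the ATM's nonce is predictable,
# brief access to a card yields cryptograms that will be valid later.
import hashlib
import hmac

card_key = b"secret-shared-with-bank"  # known only to card and bank

def card_cryptogram(amount: int, un: int) -> bytes:
    # Stand-in for what the chip returns when authorising a transaction.
    msg = f"{amount}|{un}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).digest()

# The flawed ATM's "random" number is just a counter, so it is known ahead.
predicted_un = 1001

# While briefly holding the victim's card, the attacker has it authorise
# a withdrawal against the predicted nonce and saves the result.
harvested = card_cryptogram(amount=200, un=predicted_un)

# Later, at the ATM, the replayed cryptogram matches the bank's own
# computation, and the machine dispenses the cash.
print(harvested == card_cryptogram(amount=200, un=predicted_un))  # True
```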
What’s almost as scary is how the researchers found all this out: partly by examining ATM machines purchased on eBay! (I checked, there are machines for sale right now). There’s a bit of guidance on what random means in the latest EMV specification; the conformance test simply requires four transactions in a row to have different numbers.
It’s inconceivable to me that no one at the banks knew about this until they were tipped off by the researchers earlier this year. Anyone with the faintest clue about cryptography and security looking at code for these ATMs would have spotted the flaw. This begs the question, who the hell was developing the ATMs?
In the mean time, banks have been trying to pretend to customers that phantom withdrawals on their accounts must be their fault, refusing to refund the money and claiming that Chip and Pin is secure. It’s not, and a day of reckoning can’t come too soon.
Credit for the research goes to Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson at Cambridge. Unfortunately they’re probably not the first to discover it, as it appears the criminals have known about it for some time already.
Deep Learning is a fast-expanding field of technology. Its artificial neural networks seek to replicate the human brain. While Deep Learning has been around since the 1950s, advances in AI and machine learning have recently brought it to the forefront. A good way to get started is to brainstorm Deep Learning project ideas.

This article discusses entertaining deep learning project ideas for beginners. (Note that data science intersects with artificial intelligence, but it is not a subset of it.)

Deep Learning tackles machine learning problems using hierarchical artificial neural networks. These networks can learn even from unlabeled data, and they resemble the human brain, with web-like connections between nodes.

Instead of evaluating input linearly, a deep learning system's hierarchy of functions evaluates data nonlinearly.

Deep learning now powers everything from deep and recurrent neural networks to board game programs. As the field grows, ML and deep learning practitioners can build unique projects that deepen their knowledge and experience.
We’ll discuss the top ten Deep Learning project ideas:
1. Visual tracking system
A visual tracking system uses a camera to monitor and find moving objects in real time. Security and surveillance, medical imaging, augmented reality, traffic control, video editing and communication, and human-computer interaction all benefit from it.
This system analyses video frames sequentially and then tracks target objects between frames using deep learning. This visual tracking system has two main parts:
● Localization of the target
● Filtering and data linkage
2. Face detection system
This is a great deep learning project for beginners. Face recognition technology has improved substantially thanks to deep learning. Face detection is a subset of object detection that looks for instances of semantic objects; here, it finds and tracks human faces in digital images.
This deep learning project will teach you how to recognise human faces in real-time. The model is built with Python and OpenCV.
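As a possible starting point, the sketch below uses OpenCV's bundled Haar cascade, a classical detector (not itself a deep model) that is commonly used before moving on to neural approaches. It assumes a working webcam.

```python
# Real-time face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
capture.release()
cv2.destroyAllWindows()
```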
3. Digit Recognition System
This project entails creating a digit recognition system that can categorise digits according to certain rules. You’ll use the picture dataset here (28 X 28 size).
Using shallow and deep neural networks, as well as logistic regression, create a recognition system that can categorize digits from 0 to 9. This project requires Softmax Regression or Multinomial Logistic Regression. This approach is suitable for multi-class classification (provided all classes are mutually exclusive).
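A minimal version of such a classifier might look like the Keras sketch below, which trains a softmax output layer on the 28 x 28 MNIST digit images; the layer sizes and epoch count are illustrative.

```python
# A small digit classifier ending in a 10-way softmax layer.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```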
4. Chatbot

Chatbots are very sophisticated and can respond to human inquiries in real time. This is why more and more firms across all industries are implementing chatbots in their customer care systems. This is a simple project.
5. Music genre classification system
This is a cool deep learning project concept and a great activity to develop your deep learning skills. You will build a deep learning model that uses neural networks to automatically classify music. Use the FMA (Free Music Archive) dataset for this project. FMA is an online collection of licensed music downloads. It is an open-source dataset that may be used for MIR tasks such as exploring and organising large music libraries.
To utilise the model to categorise audio files by genre, you must first extract the appropriate information from the audio samples (like spectrograms, MFCC, etc.).
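The feature-extraction step might look like this short sketch using the librosa library; the file path is a placeholder.

```python
# Extract MFCC features from one audio file for a genre classifier.
import librosa  # pip install librosa
import numpy as np

audio, sample_rate = librosa.load("track.mp3")  # placeholder path

# 13 MFCCs per frame, averaged over time: one fixed-size vector
# that a downstream classifier can consume.
mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
features = np.mean(mfcc, axis=1)
print(features.shape)  # (13,)
```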
6. Drowsiness detection system
Driver sleepiness is one of the leading causes of car accidents. It’s normal for long-distance drivers to nod asleep behind the wheel. Stress and lack of sleep can make drivers sleepy. This project will develop a sleepiness detecting agent to help avoid accidents.
To construct a system that can detect closed eyelids and inform drivers who are sleeping behind the wheel, you will need Python, OpenCV, and Keras. This device will alert the motorist even if their eyes are closed for a few seconds, averting horrible road accidents. The driver’s eyes will be classified as ‘open’ or ‘closed’ by the deep learning model using OpenCV and a camera.
7. Image caption generator
This is a popular deep learning project concept. This Python deep learning project uses Convolutional Neural Networks and LTSM (a form of Recurrent Neural Network) to produce captions for images.
An image caption generator uses computer vision and natural language processing to assess and explain an image’s context in natural human languages (for example, English, Spanish, Danish, etc.).
This system is designed to run state-of-the-art Object Detection algorithms. The Caffe2 deep learning framework is used in this Python deep learning project.
Detectron is a high-quality, high-performance object detection codebase. More than 50 pre-trained models facilitate quick installation and assessment of innovative research.
9. Colouring old B&W photos
Automated colourization of B&W photographs has long been a hot issue in computer vision and deep learning. According to a recent study, a deep learning algorithm may hallucinate colours within a black and white image if trained on a large and rich dataset.
This project uses Python and OpenCV DNN architecture (it is trained on ImageNet dataset). The goal is to colourize grayscale photos. It uses a pre-trained Caffe model, prototxt files, and NumPy files.
10. 12 Sigma’s Lung Cancer detection algorithm
12 Sigma has created an AI algorithm that may detect early lung cancer indications and eliminate diagnostic mistakes.
Doctors identify lung cancer by looking for tiny nodules on CT scan pictures and classifying them as benign or malignant. It can take clinicians almost 10 minutes to visually review CT scans for nodules, plus time to identify them as benign or malignant.
Of course, human error is always a possibility. 12 Sigma claims their AI technology can classify lesions in CT scans in two minutes.
These are the top deep learning project ideas covered in this post. We started with easy starter tasks. Finish these beginner projects, learn a few additional concepts, and then move on to the intermediate projects. When you're ready, take on the more difficult ones. Deep learning courses can help you develop your abilities in this area.
These are only a few of the many Deep Learning applications produced so far. The technology is still evolving. Deep Learning presents great potential for pioneering ideas that can help humanity handle some of the most fundamental global concerns.
By now, you’ve probably seen hundreds of articles about blockchain technology, each attempting to describe what it is, and how it’s the next big revolution to hit the tech world. At the end of the article, many people are still perplexed because the authors brush over a lot of complicated concepts without explaining them.
In this guide, we’ll be giving you a ground-up explanation of what blockchains are, how they work, and the key cryptographic concepts behind them. It’s time to get past the hype and buzzwords, and understand what’s really going on at a technical level.
Don’t worry, because we will take things slowly and examine each element in detail, giving you analogies that help to visualize what’s actually happening.
A simple analogy for blockchains
Imagine a primitive village, where they don’t have money in our traditional sense. Instead, they engrave the details of each transaction onto a stone block, then cement it in place in the center of the village.
David swaps fifteen chickens for one of Sarah’s pigs. They engrave this information onto a block, then cement it in the town square. Now, anyone can see that David is the new owner of the pig, while Sarah is now the owner of the fifteen chickens. Since the information is public, there can be no disputes over who legally controls what.
The next day, Jessica trades Mark 100 kilograms of corn for a canoe. This is also engraved on a block, which is then cemented on top of the old block. Since everyone in the town will now be able to publicly verify that Jessica no longer owns the 100 kilograms of corn, she can’t try to sell it again if Mark goes away for a couple of days.
In the coming days, more and more transactions take place, and more blocks of stone with the transaction details engraved on them get cemented in place. Over time, the transaction stones start to form a tower.
All of the details are publicly available to everyone, and the people cannot change or take back the earlier transactions, because a bunch of blocks are cemented on top of them.
This village’s financial system may not be the easiest to use, but it gives everyone in the village a way to keep track of their transactions. It is a public ledger that keeps permanent records, which can’t be altered. One of the most important aspects is that it is decentralized. There is no central bank or government that is responsible for the transactions. It’s all done by the community.
There are a number of differences between blockchains and the above analogy, but it’s still a good starting point to get your head around what blockchains are and why they are useful.
One of the key contrasts is that blockchains aren’t on display in public, instead, anyone who wants to can store a copy of a blockchain on their computer. Blockchains use cryptography, computers and electricity to build the blocks, rather than stone and cement.
The most important aspects of blockchains are that they cannot be changed, aren’t controlled by any single entity, and everyone can view the transactions. These properties are why people believe that the technology has the potential to be used in a vast range of applications.
The history of blockchains
In the late 1990s and early 2000s, there were a series of developments toward digital currencies based on various cryptographic concepts. One of the earliest blockchain-like initiatives was Nick Szabo’s 1998 mechanism called bit gold. Although it was never actualized, it involved a series of cryptographic puzzles, where each solution would be added to the next puzzle, forming a chain.
It wasn’t until 2008 that the idea of blockchains was fully developed, when someone going under the pseudonym of Satoshi Nakamoto published the paper, Bitcoin: A Peer-to-Peer Electronic Cash System.
This person built on previous work in the field, including Hal Finney’s reusable proof-of-work system, to form the bitcoin digital currency, as well as the underlying concept of blockchains. These blockchains have since gone on to be applied in a number of different ways, both as digital currencies and as solutions to other problems.
The bitcoin network was launched in early 2009 and was originally only used by a small group of cryptographers and hobbyists. It wasn’t until bitcoin was adopted by darknet marketplaces such as Silk Road that blockchains began to see their widespread, practical adoption.
As bitcoin gained popularity, a number of spin-off cryptocurrencies, known as altcoins such as Litecoin and Peercoin were developed. These further spread the adoption and use of blockchain technology.
Ethereum launched in 2015 as a distributed computing platform that allowed its users to develop apps and enact smart contracts between parties. Around this time, interest in blockchain technology from the public, major companies and governments grew, in both financial and other use cases. This saw a surge of new activity, with blockchains being proposed as solutions to a range of different problems.
The uses of blockchains
It’s been more than 10 years since the first blockchain was launched, with intensive hype and investment for the past five or so years. Despite the flurry of activity, at this stage there have been relatively few successful real-world implementations of blockchain technology.
While cryptocurrencies have seen their values spike and plummet, they still see comparatively few transactions for everyday use. The number of businesses which accept them is limited, while the transaction costs for bitcoin become too high when the currency is frequently used. On top of this, the bitcoin network can’t handle anywhere near the volume of transactions as an alternative like the traditional Visa system.
While there are cryptocurrencies that seem more promising than bitcoin, these are accepted in even fewer places. At this stage, it seems like the main uses for cryptocurrencies are as speculative investments or to buy illicit products from darknet marketplaces.
Decentralized apps & smart contracts
After bitcoin, the most renowned blockchain-based project is Ethereum, which provides a platform for developing decentralized apps and smart contracts. Despite the excessive hype, having a market cap of $14 billion (at the time of writing), and more than 2,300 decentralized apps, it has very little to show for it at this point in time.
One of the Ethereum network’s most successful projects so far is probably Cryptokitties, a video game where users can breed cats. MakerDAO, the app that currently has the most daily active participants, had less than 1,000 users in the past 24 hours (at the time of writing). Considering that Ethereum has been labeled a “Financial Tech Revolution”, among its many praises, its current applications seem quite underwhelming.
Other blockchain-based initiatives
Over the last several years, there have been countless blockchain-based startup companies as well as many initiatives backed by our biggest tech companies and financial institutions.
Numerous pilot programs and experiments aim to adapt the technology for use in supply-chain management, financial transactions, smart contracts, decentralized storage and more.
These include IBM Food Trust and Walmart’s foray into using the technology to manage food supply logistics, banks such as UBS adapting blockchains for financial settlements, and the Australian Stock Exchange (ASX) aiming to adopt distributed ledger technology.
Despite this, it’s hard to name a successful and widely used product or service that has emerged from all of the investment and effort. This isn’t to say that blockchain technology won’t have any future uses, just that it is yet to be as fruitful as many may have hoped. At this stage, it’s hard to know whether or not some of these projects will be successful in the coming years.
The core cryptographic concepts behind blockchains
Whether or not blockchains currently see a lot of real-world usage, they are still interesting applications of cryptography.
These days, countless blockchains each have their own unique variations. Since it isn’t possible to cover each of their individual aspects, we will focus on the core concepts and how they relate to cryptography. We will mainly be focusing on bitcoin, not because it is the best blockchain, but because it is the first one, which all of the others are based upon.
What is cryptography?
Before we can explain digital signatures, we have to do a bit of backtracking and talk about some security basics. Cryptography is the study and practice of keeping secret information away from adversaries. In the early days, it was done simply, using techniques such as changing each letter in a word to the letter that follows it in the alphabet. Under this type of scheme, “HELLO” would become “IFMMP”.
If your recipient knows how to convert the coded message back to its original form and your adversary doesn’t, then you can assume that it is a safe way to communicate.
Over time, people have gotten much better at cracking codes. Technological advances also improved our code-breaking abilities significantly. In order to keep our information secure in the present day, we now have to use codes that are much more complex.
Bitcoin: The first blockchain
Now that you know the basics about the study of cryptography, it’s time to start looking at the underlying structure of bitcoin, the original blockchain. Bitcoin was initially proposed as a cryptography-based currency that could avoid the downsides of having a financial system controlled by central institutions.
At the core of bitcoin is the idea of transferring value through a chain of digital signatures, which are similar to handwritten signatures. This idea in itself wasn’t revolutionary, but it’s important to understand how it works in order to see the bigger picture. Let’s use an example with handwritten signatures to explain how this process can work:
Sarah didn’t have any money, so she asked Ann for $5. Ann said okay, but only if Sarah gave her a massage. Sarah couldn’t do it right at that moment, but Ann is a stickler for rules and enforcement, so she drew up a quick contract.
I, Sarah, owe the bearer of this paper one 10-minute massage.
Ann made Sarah sign it so that the contract was legitimate.
Later on, Ann decided that she didn’t want a massage, and offered to sell the contract to Jason for $5. Jason decided to buy it because he really wanted a massage. Ann then signed it as proof that she was giving it to another party.
While Jason is now the owner of the contract, there is a problem. How can he know whether or not Ann had already redeemed the massage? Maybe Sarah doesn’t owe anyone a massage anymore, and the contract is worthless.
We’ll get to the answer to this problem later on, in the How can blockchains prevent double-spending? section. For now, let’s talk about digital signatures and hashing, two of the most important concepts that form the foundations of blockchains.
Before we can explain digital signatures, we have to do a bit of backtracking and talk about some security basics. When we transmit valuable data online, there are four important properties which we often need:
- Confidentiality – The ability to keep data hidden from unauthorized parties.
- Authentication – This property involves being able to verify that the other party is really who they say they are, and not some impostor or spy.
- Integrity – If data retains its integrity, it means that it hasn’t been altered or tampered with by anyone else.
- Non-repudiation – This property essentially means that the individual or entity who was responsible for an action cannot claim that they weren’t involved. In everyday life, we use our handwritten signatures as a form of non-repudiation. It’s hard for you to deny that you agreed to a contract when your signature has been used to sign it.
Without each of these properties, how could we be confident that important data really represents what it is supposed to, and that our enemies haven’t accessed or changed it?
Normally, we use encryption algorithms such as AES to take care of confidentiality. For the other three properties, we turn to digital signatures.
There are two major types of encryption:
- Symmetric-key encryption – In symmetric-key encryption, the same key is used to both encrypt and decrypt data. This is an efficient method that is used everywhere in information security, from encrypting your hard drive, to securing your connection to a HTTPS website. The most commonly used symmetric-key algorithm is AES.
- Public-key (asymmetric) encryption – Public-key cryptography uses separate keys for the encryption and decryption processes. These are the public key, which is shared openly, and the private key, which must be kept secret. It relies on some interesting mathematical properties, and enables two parties who have never met before to securely exchange information. It is relatively inefficient, so in practice, public-key cryptography is only used to encrypt the symmetric key, which in turn is used to encrypt data.
Digital signatures are much like normal signatures. We sign a receipt to verify that the information on it is correct and retains its integrity. It’s very hard to repudiate our handwritten signatures, because they are so hard to copy. Since we have our signatures on our bank and ID cards, they also serve as a form of authentication. Anyone can check whether a signature matches the government-issued identity.
Digital signatures rely on public-key encryption. If Alice wants to prove that a piece of data is authentic, retains its integrity and she does not want to be able to repudiate it, she can send a digital signature alongside the data.
To create a digital signature, Alice first takes the data and puts it through a hashing algorithm to form a unique string of numbers (this is explained fully in the Hashing section). These numbers are then digitally signed using the ECDSA algorithm and her private key.
Essentially, the hash and Alice’s private key are combined using a complex mathematical formula. The result is the digital signature, which can be verified with Alice’s public key to prove that she is the real owner of her matching private key, and not an impostor.
Digital signatures allow individuals to prove their ownership of the private key without having to reveal it to the other party. For a deeper dive into how this process works, see our comprehensive guide on digital signatures.
Once Alice creates her digital signature, she then sends it to her recipient, Bob, alongside the data. When Bob receives the data, he can verify its authenticity, check whether it retains its integrity and see whether it is non-repudiable, all by using Alice’s public key.
Alice will most likely have shared her public key with Bob ahead of time, otherwise Bob will be able to find it on a key server (this is a server where many people host their public keys, so that others can find them and contact them in a secure manner).
Bob takes the digital signature and Alice’s public key and computes them together using the reverse of the algorithm that Alice used.
Due to the unique mathematical properties of this calculation, the result will be the same as the hash of Alice’s data from before she digitally signed it with her private key.
Bob then runs the message that he received through the same hash function that Alice used. If this message has not been altered since Alice signed it, then the hash function will give Bob the same result that he got from the computation he performed with Alice’s public key.
If the two values are different, it means the data has been altered, that it was not signed by Alice’s real private key, or there was some other problem. For the sake of our example, let’s say that the two values matched, and the data is in fact legitimate.
In bitcoin and other blockchains, digital signatures are mainly used in the transaction process as a way for someone to prove their ownership, without having to reveal their private key.
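To see these ideas in code, here is a small sketch using the third-party Python "ecdsa" package on secp256k1, the curve bitcoin uses. The message is invented, and the flow is simplified compared with real bitcoin transactions.

```python
# Sign a message with a private key; verify it with the public key.
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

private_key = SigningKey.generate(curve=SECP256k1)  # kept secret
public_key = private_key.verifying_key              # shared openly

message = b"Send 10 BTC to Bob"
signature = private_key.sign(message)

# Anyone holding the public key can confirm the signature is genuine...
print(public_key.verify(signature, message))  # True

# ...and any tampering with the message makes verification fail.
try:
    public_key.verify(signature, b"Send 99 BTC to Bob")
except Exception:
    print("Tampered message rejected")
```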
Hashing is the process of sending data through a hash function to produce a specific, essentially unique hash of a fixed length. In blockchain applications, we use cryptographic hash functions such as SHA-256.
Cryptographic hash functions have several important characteristics which make them useful:
- They are deterministic – a given input will always have the same output.
- Each output is essentially unique. The chances of two separate inputs having the same output are so low that we don’t really worry about it.
- It is infeasible to figure out the original input from the output (under current techniques and technology).
- Hashes can be computed quickly.
- A slight change in the input results in a significantly different output.
As an example, if we put “Let’s eat dinner” through an online SHA-256 hash function, it gives us a fixed-length string of 64 hexadecimal characters. Every time we put it through, it will give us the same result. But if we change even one character, the output is completely different: “Let’s eat dinnet” produces an entirely unrelated string.
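You can reproduce the comparison yourself with Python's standard library, so no hash values need to be taken on faith:

```python
# Two near-identical inputs, two completely unrelated SHA-256 outputs.
import hashlib

print(hashlib.sha256(b"Let's eat dinner").hexdigest())
print(hashlib.sha256(b"Let's eat dinnet").hexdigest())
```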
So, we have this mathematical function with a range of interesting properties, but how is it useful in blockchain applications?
The properties of hashes allow us to:
- Prove that we possess certain information, without having to reveal that information.
- Prevent transactions from being altered by adversaries.
- Verify the confirmation of transactions without having full knowledge of a block.
- Reduce the bandwidth of transactions.
- Make cryptographic puzzles, which are part of the mining process.
These various features of hashes are used in four major areas of the bitcoin system:
- When a transaction is being made, data from previous transactions is hashed and included in the present transaction.
- When a new transaction is made, the data is also hashed to form a transaction ID (txid), an identifier that can be used to locate the transaction details on the blockchain (see the sketch after this list).
- A hash of the public key is used as the address where users can send funds. This makes the addresses shorter and more convenient, as well as providing some security benefits.
- As part of bitcoin’s proof-of-work system (this is discussed later in the Proof-of-work section).
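As flagged in the second list item, a bitcoin transaction ID is the SHA-256 hash of the raw transaction bytes, hashed twice, and conventionally displayed byte-reversed. A minimal sketch, with placeholder bytes standing in for a real serialized transaction:

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    """Double SHA-256 of the raw transaction bytes, shown byte-reversed (bitcoin's display convention)."""
    digest = hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()
    return digest[::-1].hex()

# Placeholder bytes standing in for a real serialized transaction
print(txid(b"example raw transaction bytes"))
```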
Bitcoin’s basic transaction process
Now that we have explained a couple of the major cryptographic techniques behind the bitcoin blockchain, we can take a look at how these are used in a transaction.
The first thing that you need to be aware of is that bitcoin transactions don’t happen in an intuitive way. All of the bitcoin that someone owns aren’t all jumbled together, and they can’t just be scooped out in the exact amount that is needed for a transaction (plus the transaction fees).
Instead, the total balance is kept separately in allotments according to how it was received. Let’s say that Alice has a total balance of 12 bitcoins, which she received over three separate transactions. Her bitcoins will be stored in the separate amounts that she received them in from the previous transactions.
Let’s say that her balance is made up of one previous transaction of three bitcoins, one previous transaction of four bitcoins, and one previous transaction of five bitcoins. This makes a total of 12 bitcoins. Each of these amounts are the outputs from the previous transactions, and they are now under Alice’s control.
Now, let’s say that Alice wants to make a transaction of ten bitcoins to buy a car from Bob. To cover the total costs, she would need to use the bitcoins from each of the three previous transactions. These outputs from past transactions would now become the inputs for the new transaction.
It may seem strange, but since the previous transactions don’t make up exactly 10 bitcoins, Alice can’t just send 10 bitcoins across and leave two in her wallet. Since the only way to make up the 10 bitcoins is to combine all three past transactions as inputs, she would have to send the entirety of the three allotments, totaling 12 bitcoins, to cover the total value of the transaction.
Fortunately, this doesn’t mean that Alice loses the extra two bitcoins. They are processed as part of the transaction, but they are returned to her as change (minus the transaction fee).
The transaction process for buying a 10 bitcoin car
Inputs:
- 3 bitcoins (from one of Alice’s previous transactions)
- 4 bitcoins (from one of Alice’s previous transactions)
- 5 bitcoins (from one of Alice’s previous transactions)
Outputs:
- 10 bitcoins (to Bob)
- 2 bitcoins (back to Alice as change)
Normally, the fee would also be taken out prior to returning the change to the sender. Since the fee amount would be negligible in comparison to the transaction amounts, we have left the fee out to keep the numbers tidy.
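The bookkeeping above is easy to model in code. The toy function below is our own simplification, not bitcoin's actual coin-selection logic (real wallets also deduct fees and use smarter strategies): it gathers previous outputs until the payment amount is covered and returns the change.

```python
def select_inputs(utxos: list[int], amount: int) -> tuple[list[int], int]:
    """Gather previous outputs until `amount` is covered; return (inputs, change)."""
    selected, total = [], 0
    for value in sorted(utxos, reverse=True):  # naive: spend largest outputs first
        selected.append(value)
        total += value
        if total >= amount:
            return selected, total - amount
    raise ValueError("Insufficient funds")

inputs, change = select_inputs([3, 4, 5], 10)
print(inputs, change)  # [5, 4, 3] 2 -> all three outputs are spent, 2 bitcoins return as change
```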
Looking deeper into the transaction
For the above transaction, each of the inputs would have had their previous transaction data hashed, and it would then have been included in the current transaction. Additionally, Alice has to prove that she has ownership of the three separate inputs (which are outputs from previous transactions–we know, it’s confusing!).
Alice does this using a signature script, which is an unlocking script. This script is made up of two parts: Alice’s public key and her digital signature. The public key indicates the address of the outputs from the previous transactions (which she wants to use as inputs for the new transaction), while her digital signature shows that she is the true owner.
As we discussed in the Digital signature section above, her signature proves that she is the owner, because the digital signature could only have been made using her private key. Alice’s ownership is verified with her public key using a public-key script.
This diagram shows how transactions form a chain. In the second transaction, Owner 2 combines their public key with the data from the previous block. Owner 2 also creates a digital signature with their private key to prove their ownership of the coin. This is verified with Owner 2’s public key. Bitcoin transaction visual by Inkscape licensed under CC0
How can blockchains prevent double-spending?
By now, you hopefully have a reasonable idea about the underlying cryptographic processes that bitcoin and other blockchains use in their transactions. This brings us back to where we left off in our earlier example: How can Jason know whether or not the massage has already been redeemed? We refer to this as the double-spending problem.
On any decentralized, pseudonymous network, it’s expected that some people will cheat to try and enrich themselves. Bitcoin and other blockchains solve this problem with a peer-based verification process called mining.
To explain how this works, let’s stretch our earlier analogy a little bit further:
The best way to visualize the blockchain mining process is if, whenever a transaction is made, a copy of the contract is sent to everyone within the friendship group.
After a number of transactions have been made, each person would combine the transaction details into one folder. Everyone in the friendship group would then take the result from the previous folder of transactions, combine it with the current transaction details, and then try to solve a complex mathematical problem using these inputs.
The first person to find the solution would then broadcast it to all of the other members of the friendship group, who are able to quickly verify whether they have included the correct transactions, and whether they have the right answer or not.
If the answer is correct, the person who succeeded first receives a reward. This reward is what incentivizes everyone to validate the transactions. If anyone tries to cheat, the rest of the group will find out, which means that cheaters have no chance of claiming the reward and that the effort would be wasted.
Once a person has successfully completed the mathematical problem and claimed their reward, the whole group begins collecting new transactions in another folder. Once they have enough, they combine them with the result from the previous folder and compete to solve a new mathematical problem in the hope of winning the next reward.
The result of the previous folder is included in the new one, and a chain of results is formed, which allows people to check the transaction histories and verify that everything along the chain is legitimate.
Since everyone keeps a copy that includes the transaction history, and the only effective financial incentive is to honestly contribute to the validation process, this prevents double-spending from occurring.
This whole process may seem inefficient, but thankfully everyone who makes a blockchain transaction doesn’t have to do this. The task is left to miners and it’s automated, so it doesn’t involve anywhere near as much work as our example does.
The above analogy is imperfect, because it’s a simplification of a relatively complex process. The main aim is to give you a visual idea of what is really going on. We will discuss how things actually work on a more technical level in the following sections.
Nodes & miners
A node stores a copy of the blockchain, while a miner creates and validates the blocks. Full nodes store the entire history of blockchain transactions, while miners are only concerned with the previous block and the current one they are working on.
In addition to storing the blockchain data, nodes serve as network relays, helping to distribute information to both users and miners. Nodes also verify the blocks that miners generate by making sure that hashes match the transaction data.
In the early days of bitcoin, there was no separation between nodes and miners. The terms were used interchangeably to refer to the entities that competed to validate the transactions in a block, and also stored the blockchain that was used to verify past transactions.
These days, it’s possible to host a node without actually mining. A full node can be used both as a wallet, and to verify the chain of transactions, because it contains a complete copy of the blockchain. In the same vein, miners don’t technically have to host a node, although in reality many do.
The first step towards preventing double-spending is to widely publish a record of previous transactions. If everyone has a copy of the previous transaction records, they know whether certain coins have already been spent.
In the bitcoin protocol, blocks of transaction data are hashed, then the hash is spread throughout the network. This hash acts as a timestamp, proving that the data must have existed at the time that the hash was created–otherwise the hash could not exist.
Each new timestamp is a hash that combines the current block’s transaction data and the timestamp of the previous block. This creates a chain of timestamps, with future ones solidifying those timestamps that came before them.
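This chaining is straightforward to model: each block's hash covers its own transaction data plus the previous block's hash, so altering any earlier block changes every hash after it. A stripped-down sketch with illustrative block contents:

```python
import hashlib

def block_hash(prev_hash: str, tx_data: str) -> str:
    """Hash this block's transactions together with the previous block's hash."""
    return hashlib.sha256((prev_hash + tx_data).encode()).hexdigest()

chain = ["0" * 64]  # placeholder hash for the genesis block
for tx_data in ["Alice->Bob: 10", "Bob->Carol: 4", "Carol->Dave: 1"]:
    chain.append(block_hash(chain[-1], tx_data))

# Tampering with the first transaction invalidates every hash that follows it
tampered = block_hash(chain[0], "Alice->Bob: 99")
print(tampered == chain[1])  # False: all later blocks would need to be redone
```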
One of the main aims of a blockchain is to create a decentralized system that can verify itself without the need for third parties. This is generally achieved via a peer-to-peer verification process, where the network offers financial incentives for honestly validating transaction data. Many blockchains refer to this process as mining.
In the bitcoin protocol, every time a transaction is made, the details are sent through a relay of nodes until every node on the network receives the data.
The miners then collect each of these transactions and form them into a block. Each miner then tries to solve the cryptographic puzzle for the block. When a miner succeeds, it sends the block to all of the nodes on the network.
The nodes will only accept the block if all of the transactions within it are verified and haven’t already been spent. When nodes accept a block, they take its hash and distribute it to miners, who then integrate it into the next block of transactions that they are trying to solve.
If two separate miners solve a block at the same time, the other miners will take the data from whichever block they received first, and incorporate it into the next block they are working on. They will also save the data from the second block, just in case they need it later on.
The entire network will be working on either one block or the other until the next block is solved. At this point, those that were working on the other block will abandon it. This is because miners will always accept the longest chain as the correct one. They focus their work towards extending the longest chain, because this is the most likely way for them to end up with the reward.
The bitcoin protocol uses a concept known as proof-of-work to validate its transactions. It’s based on Adam Back’s earlier Hashcash scheme. Other blockchains use proof-of-stake, proof-of-storage or proof-of-space systems, but we won’t go into the latter two in this article.
In order to add a timestamp to the network, a miner must be the first to complete a cryptographic puzzle, then spread the result to the nodes on the network, which verify the answer. The cryptographic puzzle requires a significant amount of computational resources, and miners complete it in the hope of solving the block and receiving the reward. The block reward halves roughly every four years; at the time of writing it was set at 12.5 bitcoins.
If a miner creates a block that does not match the results of the rest of the network, the block will be left behind, and the resources that they expended will have been wasted. Excluding exceptional circumstances (such as a 51% attack), it is more profitable for a miner to act honestly, rather than attempt to disrupt the network or post fraudulent results.
This proof-of-work mechanism is what keeps the network honest. If someone wanted to alter or tamper with a block, they would have to completely redo the work of solving the block. The further back a block is on a chain, the more difficult it is to tamper with. This is because all of the blocks that come after it would also need to be altered.
This proof-of-work system is based on the SHA-256 algorithm. It suits the system’s needs because a valid solution is relatively difficult to find, but easy to verify.
Proof-of-work algorithms require significant processing power, which makes them expensive in terms of infrastructure and energy costs.
Lightweight alternatives such as proof-of-stake have emerged to make the verification process more efficient. Proof-of-stake blockchain protocols have varying techniques, but they generally involve choosing the creator of the next block based on a combination of randomness and coin age or wealth.
The amount of coins that a user has, or alternatively, how long the coins have been held, act as the user’s stake. The stake ensures that the user is actually committed to the overall health of the system.
While these factors are important in the delegation of the next block, they are combined with randomness to prevent the system from being centralized by the richest or oldest users.
Selecting the next block through proof-of-stake systems ensures transactions are validated correctly, but in a much more efficient manner than the computations involved in proof-of-work schemes.
Ethereum is currently moving towards a proof-of-stake algorithm to increase its efficiency, while PeerCoin and NXT have already implemented proof-of-stake systems.
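As a toy model of the idea (not the actual selection logic of any particular protocol), stake-weighted random choice can be sketched like this:

```python
import random

def choose_validator(stakes: dict[str, float]) -> str:
    """Pick the creator of the next block with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 50.0, "bob": 30.0, "carol": 20.0}
print(choose_validator(stakes))  # alice wins most often, but never with certainty
```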
The only way to solve the SHA-256 cryptographic puzzles and win the reward is through brute force. This involves randomly guessing numbers until a miner comes up with the right answer.
To complete the puzzles, miners take the current block’s transaction data and the hash of the previous block as part of their input. They then need to guess a separate input, known as a nonce (an arbitrary number), so that when all of these inputs are put through the hash function, the resulting value begins with a set number of zeros. A solution could look something like this:
As the bitcoin network becomes more powerful, the difficulty of the puzzles is increased exponentially by requiring solutions to include a greater number of zeros. This makes it much harder and more time consuming to find a correct answer.
It can be hard to visualize how this process works. Luckily, there is a tool that you can play around with that gives you a reasonable idea of how miners compete in these puzzles.
With this tool, you can emulate the mining process by taking a given block of transaction data as well as the previous block’s hash, then trying to guess which nonce will give you a result that begins with four zeros. Normally, the solution requires a much greater number of zeros, but this example is just a simplification.
To keep things easy, we will pretend that our transaction data, as well as the hash from the previous block, is simply the number “1”. In the real world, the input would be far more complex.
In the example below, we have block number 1, with a randomly guessed nonce of 72608 for our data input of 1:
As you can see, the guess of 72608 was not successful, since the hash at the bottom does not begin with four zeros. If you’re bored, you can try to manually find a solution by entering numbers for the nonce. Keep trying until you find a result that starts with four zeros.
Alternatively, you can do things the easy way, by clicking the “Mine” button. This button generates guesses automatically to try and find a solution. By pressing the button and waiting for a few moments, we get the following result:
For the given input of 1, a nonce of 64840 results in a successful hash that begins with four zeros. If we were the first bitcoin miner to expend the necessary computing power to find this answer for the block, we would receive the block reward.
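The tool's behavior can be emulated in a few lines of Python. How the fields are concatenated here is our own assumption for illustration; real bitcoin mining applies double SHA-256 to a binary block header against a far stricter target:

```python
import hashlib
from itertools import count

def mine(block_number: int, data: str, prev_hash: str, difficulty: int = 4) -> tuple[int, str]:
    """Brute-force nonces until the block's hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    for nonce in count():
        payload = f"{block_number}{nonce}{data}{prev_hash}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine(block_number=1, data="1", prev_hash="1")
print(nonce, digest)  # the digest begins with 0000
```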
These cryptographic puzzles may seem complicated, but the proof-of-work system is important for maintaining the integrity of blockchains. If validating transactions didn’t require a significant expenditure of computing power, it would be much easier for attackers to tamper with the system.
Blockchains: Rapidly emerging technology
Throughout this article, we have mainly talked about how these processes work in the context of the bitcoin protocol. This is simply because bitcoin was the first functional blockchain, and most others are based heavily on its design.
These days, there are thousands of different cryptocurrencies and blockchains, and it would have been impractical to cover the technical distinctions between each one. Despite the differences that exist between them, the entire blockchain world is based on cryptographic concepts such as public-key cryptography, digital signatures and hashing.
The varying blockchains just have slightly different structures and apply these concepts in their own individual ways. These different implementations have their own unique benefits, which give blockchains the potential to be used in a wide variety of situations.
Wireless Technology in Access Control
Before jumping into how wireless is used in WiFi locks and wireless access control, let's take a look at the underlying technology. Wireless Fidelity, commonly known as WiFi, is a wireless technology for local area networks (LANs). It is governed by the IEEE 802.11 standard, which defines the physical and media access control (MAC) layers.
WiFi technology commonly uses the 2.4 GHz wireless band reserved for industrial, scientific, and medical (ISM) applications; it can also use the 5 GHz band, as well as 900 MHz, 3.6 GHz, and 60 GHz bands.
The indoor WiFi network range varies between 66 ft and 230 ft depending on the type of modulation, bandwidth, and other factors. The outdoor range is always somewhat longer than the indoor range with the same network parameters, due to fewer obstructions. A wide range of WiFi-enabled devices is available in the marketplace, including cell phones, PCs, tablets, access control security systems, office equipment, home appliances, and others.
Wireless Access Control: Components
WiFi-based access system uses a combination of software and hardware resources. The main products commonly used in a WiFi-enabled security system include:
- Electronic or wireless locks
- Wireless readers
- Wireless access point
- Access control application
These products communicate with each other through a wireless communication system.
How It Works
WiFi has become a very powerful technology in modern security and access control systems for small and large buildings, organizations, and institutions alike. The Internet of Things (IoT) is one of the key models of future communication systems, which will integrate the equipment, devices, and home appliances commonly used in our day-to-day lives. That integration will be accomplished through an IP-based internet system.
In a door access control system, the legacy locks of a building are replaced with wireless-enabled electronic locks. Those locks connect to a wireless access point or wireless router, which enables multiple wireless-enabled devices and locks to communicate with each other within specified criteria and conditions. The control and management of the entire door security access system are done through a software application, which provides an interface to configure the desired conditions and criteria for accessing the doors.
The replica of that software can also be installed on your mobile devices as a mobile app to control the system operations. Electronically programmed fobs can also be used for manual access to a particular area. This entire door access system can work in standalone, integrated, and offline modes through the proper configuration. While using the manual access method, an electronic fob is inserted into the wireless reader, which reads the data on that particular fob and establishes communication with the main application to check the access conditions and criteria. Based on those criteria, access to the wireless door locks is granted from the core application over the wireless network.
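As a sketch of the decision logic described above (the fob IDs, door names and time windows below are purely illustrative, not any vendor's actual API or schema):

```python
from datetime import datetime

# Illustrative policy table: which fob may open which door, and during which hours
ACCESS_POLICY = {
    ("fob-1042", "server-room"): (9, 18),    # office hours only
    ("fob-1042", "main-entrance"): (0, 24),  # around the clock
}

def grant_access(fob_id: str, door: str, now: datetime | None = None) -> bool:
    """Return True if the configured criteria allow this fob to unlock this door now."""
    now = now or datetime.now()
    window = ACCESS_POLICY.get((fob_id, door))
    return window is not None and window[0] <= now.hour < window[1]

print(grant_access("fob-1042", "server-room"))  # depends on the current hour
```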
Modern wireless door locks are able to handle multiple data input systems like thumb impressions, manual codes, images, and others through wireless door readers. Thus, they are much more intelligent than they were a few years back!
How do Wireless Locks Compare With Other Technologies?
Wireless locks are incredibly helpful in many use-cases and they often represent a very valid option. There are, however, some use cases where the wireless locks are not an ideal solution, we cover all these case scenarios on our page for smart locks.
Let's finally dive into the pros and cons of wireless locks compared to other technologies.
Pros:
- Cheaper than legacy and cloud-based access control systems
- Good level of security
- Can usually be installed quickly and without professional help
- A cool and modern touch for every office

Cons:
- They are not part of a bigger security ecosystem
- They often do not work in offline mode, which can be a problem during a blackout
- They become obsolete more quickly
If you want a complete overview of the differences between smart locks and our solution, check out our smart locks comparison page!
August 30, 2022
Hyper-V Virtual Switches: Types and Configuration
Microsoft Hyper-V is one of the leading hypervisors used to deploy a virtual infrastructure and create virtual machines (VMs). In an environment with multiple VMs, IT administrators use Hyper-V virtual switches to enable these machines to communicate with each other, the host and external networks.
There are three types of virtual switches that provide different kinds of connectivity. Read this post to learn about these types and how to create and configure them.
Types of Hyper-V Virtual Switches
Using Virtual Switch Manager, you can create three types of Hyper-V virtual switches:
- External switches are bound to the physical network cards located in the host. They provide VMs located on them access to the physical network to which the Hyper-V host is connected. The External switch can also share management traffic and VM traffic on the same switch, which is one of the options that can be set when creating the external switch.
- Internal switches are not bound to a physical network card. They only allow traffic between VMs and the host itself. However, in 2016, new functionality was added to allow external connectivity via NAT from the Hyper-V host: the NAT forwarding internal switch.
- Private switches are only used for virtual machines to communicate with each other. This type can be useful for specific kinds of traffic, such as cluster traffic, but only on a single host, since a private switch cannot carry traffic between hosts.
How to Create Virtual Switches in Hyper-V
To create a Hyper-V virtual switch, open the Hyper-V Manager and follow the steps below.
Note: The described process is identical for all versions of the Virtual Switch Manager (Windows Server 2012, 2016 and 2019).
- Click the Virtual Switch Manager…
- Click Create Virtual Switch. Here, you can choose the type of virtual switch you want to create. This example explains how to create an external switch as this type is required to allow connectivity between guest VMs and the physical network.
- Add a meaningful name for the Virtual Switch and choose the physical network adapter to use for connectivity.
Additionally, you can choose to allow management operating system to share this network adapter, which means management connectivity to your Hyper-V host will also use this adapter. Deselect this box if you have a separate management network adapter or if you want to create one manually at a later time. If you deselect, a message appears warning you that you may lose access to the host unless you have another network adapter used for management communication.
Keep in mind that applying these changes will result in a lost ping or two, but no major disruptions.
- You now have a virtual switch that you can use when creating a virtual machine for network communication to outside networks.
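If you prefer to script this instead of clicking through the GUI, Hyper-V ships the New-VMSwitch PowerShell cmdlet, whose parameters mirror the dialog above. The sketch below drives it from Python; the switch name and the adapter name "Ethernet" are assumptions to replace with your own values:

```python
import subprocess

# Same result as the GUI steps above: an external switch bound to a physical
# adapter, with management-OS sharing enabled (the checkbox described earlier)
command = (
    'New-VMSwitch -Name "External vSwitch" '
    '-NetAdapterName "Ethernet" -AllowManagementOS $true'
)
subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)
```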
The three types of Hyper-V virtual switches provide different kinds of connectivity between VMs, the host and external networks. You can configure the type you need to ensure proper communication between your workloads.
Make sure you protect your Microsoft Hyper-V virtual environment using a robust data protection solution like NAKIVO Backup & Replication. The solution includes advanced features such as incremental backups, granular recovery, full VM recovery and replication/DR for Hyper-V.
Download NAKIVO’s Hyper-V Backup Free Edition today.
The building of virtual Radio Access Networks (vRAN) and the use of edge data centers have long been major topics in the mobile communications sector – and this development affects both the current 4G and the future 5G networks. However, technology continues to evolve away from virtualized workloads and towards containers and cloud-native architectures and applications.
Traditional radio access networks consist of antennas, base stations (baseband units – BBUs), and controllers. This makes them some of the most expensive components in a mobile network. What’s more, they also require specialized hardware and software. Virtualized RAN (vRAN) solutions overcome these disadvantages, which is why they are replacing proprietary, hardware-based radio access networks in ever-greater numbers. The vRAN is based on Network Functions Virtualization (NFV), which transforms a typical hardware-based network architecture into a software-based environment. There still might be a need for hardware acceleration in some form. Some BBU control functions are provided on virtual machines (VMs) that run on commercial off-the-shelf (COTS) servers in an edge data center. This all results in disaggregation in two dimensions:
1. separation of hardware and software and
2. functional split of the base station.
4G and 5G as edge use cases
The trend towards edge computing affects both 4G and 5G networks. The main advantages of edge computing include zero-touch provisioning, multi-cluster management, a smaller footprint, high scalability and automated operation. vRAN or disaggregated RAN can be seen as a specific use case or workload on edge data centers.
There are a few differences between 4G LTE and 5G when it comes to edge implementation, especially in terms of how the functionalities of the base stations are divided between the antenna locations and the edge data centers.
In 4G LTE networks, the traditional status quo is a distributed RAN with baseband units on the antenna side, meaning the full functionality of the base stations is distributed across the individual antenna locations. This results in considerable costs, potential challenges of radio interference, and high energy consumption. An edge approach moves away from a distributed RAN with BBUs and towards a centralized vRAN. Some of the functions of the base stations are centralized in virtualized BBUs (vBBUs), meaning the base station is split.
In 5G networks, on the other hand, disaggregation in edge implementation is divided into three parts: Radio Units (RUs on antenna sites), Distributed Units (DUs), and Centralized Units (CUs).
The CUs are designed as a distributed cloud solution with low space requirements, while the DUs assume tasks such as real-time processing, supporting the Precision Time Protocol (PTP), hardware acceleration such as field-programmable gate arrays (FPGAs), smart network interface cards (Smart NICs) and even Application-Specific Integrated Circuits (ASICs).
From virtualized to containerized workloads
At the edge, mobile network operators are already using Network Functions Virtualization such as Red Hat OpenStack with distributed nodes for software-defined wide area networks (SD-WAN) and mobile applications. However, introducing vRANs by using virtual machines on standard servers in an edge data center cannot be the final step but is a good first step. It has often been shown that the current virtual network functions (VNFs) – and vRANs in particular – are unable to meet expectations in terms of functionality, easy implementation, or management. That’s why the next step must be using applications that are compatible with the cloud or, even better, cloud-native applications. And this development is currently emerging in the telecommunications sector, with the use of cloud-native applications on Kubernetes-based container platforms such as Red Hat OpenShift for 5G Core (5GC), Edge and RANs, for example.
Cloud-native applications are designed as lightweight containers and loosely coupled microservices. As far as network operators are concerned, the main advantages of these types of applications are the lower development costs, the simpler upgrades and modifications, as well as the potential for horizontal scaling. This also avoids vendor lock-in.
In essence, cloud-native application development is characterized by service-based architecture, API-based communication, and container-based infrastructure. Service-Based Architecture (SBA) is defined in the 5G standard.
Service-based architectures such as microservices enable modular, loosely coupled services to be built. The services are provided via lightweight, technology agnostic APIs that reduce the complexity, effort, and expense during deployment, scaling, and maintenance. In addition, cloud-native applications are based on containers that enable operation across different environments. Container technology uses the operating system’s functions to divide the available computing resources across multiple applications and at the same time ensure the applications are secure. Cloud-native applications also scale horizontally, meaning other application instances can be added easily – often through automation within the container infrastructure. The lower overheads and high density enable numerous containers to be hosted within the same virtual machine or the same physical server.
Cloud-native architecture as foundation of 5G network slicing
It is becoming increasingly apparent that the transition to 5G is a transition to containers and cloud-native applications. This means that virtualized workloads are evolving into containerized workloads. Virtualization will be there for years to come though, in one form or another.
The advantages of the cloud-native approach can be seen in particular in the main 5G use cases, and thus in network slicing, or in other words, the provision of multiple virtual networks on a common physical infrastructure.
In principle, there are the following three use cases when it comes to 5G:
- eMBB – Enhanced Mobile Broadband: high data transfer rates and support for extreme traffic densities
- mMTC – Massive Machine-type Communications: optimization of M2M and IoT applications by networking a large number of devices such as smart-home or smart-city models
- URLLC – Ultra-reliable and Low-latency Communications: support for critical applications with low latency in areas such as predictive maintenance or connected cars as well as augmented and virtual reality, for example.
A virtualized RAN that is both container-based and cloud-native is a key component for the 5G network transformation and in providing optimal support for these technologies and use cases. Cloud-native architecture in particular allows initial costs to be kept to a minimum per slice and the scaling up to thousands of slices of all sizes to be cost-efficient.
There is no question that 5G will power a new generation of services thanks to its higher data rates and extremely low latencies. To be able to leverage their advantages to the fullest extent possible, however, telecommunications companies need to bring their data processing and processing power closer to the “end user”. The end user can ultimately also be a smartphone, a connected car, or a robot in a production process. The task clearly demonstrates that both edge computing and cloud-native capabilities are the focus of mobile network operators’ activities at the moment.
Some mobile providers have already set up commercial, if locally restricted, 5G environments, and numerous new projects are on the horizon. vRAN, edge computing, and cloud-native are the crucial technology drivers in this area, and open source solutions – such as Red Hat OpenShift – will form the basis of disaggregated 5G infrastructures.
Happy (almost) Pi day! Tomorrow we celebrate the irrational and transcendental number known as Pi. The study of Pi dates back to the third century B.C., when the Ancient Greek mathematician Archimedes of Syracuse approximated the ratio of a circle's circumference to its diameter, pinning it down to a value beginning with 3.14. Centuries later, in 2009, the U.S. House of Representatives passed a resolution recognizing March 14th as National Pi Day. Pi day is celebrated in schools around the country to encourage students to engage in the study of mathematics.
Here at Collibra, we don’t shy away from a celebration. We too want to celebrate the number Pi (it also happens to be Albert Einstein’s birthday!). However, at Collibra, we are not preoccupied with calculating the circumference of a circle. Rather, we are focused on making data meaningful for all Data Citizens.
Here are 3.14 ways to make data meaningful in your organization!
3 barriers to Data Intelligence
- Data deluge. As companies embark on their digital transformation projects, they are faced with an ever-growing and ever-changing volume of data. Digital transformations dramatically increase the volume of data a company deals with on a daily basis. Thus, it can be difficult to find your data and understand which data matters and is valuable. For example, retail companies may have information in many places, from online sales to offline sales to social platforms, and inventory services. If your organization is considering a digital transformation, make sure all your data ducks are in a row before you begin to avoid complications from the data deluge.
- Siloed applications and data fragmentation. Due to the data deluge, companies often have too much data to handle. Large enterprises typically have data stored in numerous environments throughout their organization, often making it difficult to find the right data. For example, at Lockheed Martin data was siloed across the organization within different teams. As a result, employees had to manually request data from different teams within the larger organization, thus slowing time to insight. Without a way to securely enable access to data across an organization, data remains in silos. Customers’ needs go unmet and potential insights remain untapped.
- People and processes. Manual processes are prone to error, are not scalable, and decrease the speed, quality and confidence in decision-making. It is also difficult to share data across an organization without an established process. For example, at many organizations, it can take tens of hours, several meetings, emails, hallway chats and many people to define seemingly simple metrics, like customer lifetime value. Without a centralized process, companies experience a lack of clear communication, collaboration and the duplication of data and effort.
1 solution on the path to Data Intelligence
- Data democratization. Organizations should invest in a solution that gives all Data Citizens access to trusted data by connecting the right data, insights, algorithms and people to optimize processes, increase efficiency and drive innovation. Companies must automate critical data processes, as well as enterprise-wide collaboration. For example, as the Senior Center of Excellence Lead, Data Governance at AXA XL states, Data Intelligence “played an important role in [their] journey to build data transparency within [their] organization.” Data transparency and data democratization allow companies like AXA XL to use the right, and most trustworthy, data to make business decisions.
4 beneficial outcomes of Data Intelligence
- Data-centric culture. With access to trusted data, Data Citizens can create a data-centric culture within their organization. Data-centric organizations are more nimble allowing Data Citizens to make faster decisions using the right data and ultimately creating better products and/or experiences for their customers
- Drive digital transformation and innovation by enabling Data Citizens to easily search for the data they need and understand where the data is coming from, so they can make impactful business decisions.
- Innovate and transform. Empowering Data Citizens to use data allows them to make insightful decisions based on trusted data. This increases creativity, strategic thinking and the development of innovative and informed ideas.
- Collaboration. A collaborative environment where Data Citizens share data, reports and information across the organization encourages new ideas based on accurate data. Collaboration in conjunction with trusted data allows organizations to derive business value out of their data to further their goals.
Happy (almost) Pi day from Collibra!
How orchestrated communications during the prevention and intervention stages can help
The extensive Coronavirus media coverage has created a public fear that a new SARS-type epidemic could spread around the world. As the virus continues to spread across China and beyond, governments around the world are setting up public healthcare, travel and education plans and the economic impact of the Coronavirus* is already being felt, not only by the Chinese economy, but also by the global economy.
Prevent panic from spreading with early crisis communications
Generalized fear, augmented by social media sharing, can easily lead to over-reactions. One defensive example is the hashtag #ImNotAVirus that has spread on social media. Government entities can avoid unnecessary panic and its consequences by alerting the population via official emergency channels and offering prevention or first intervention information.
Orchestrated communication during the prevention and intervention stages is key to containing a crisis. It can help avoid panic and minimize problems such as emergency bottlenecks and medical equipment shortages. Trusted healthcare and government sources must rapidly:
• Inform the affected areas or populations at risk, such as the elderly, the young or the immunodeficient
• Explain how the epidemic is being transmitted
• Describe the medical processes to be followed and provide recommendations
Communications, the cornerstone of intervention and continuity plans
The healthcare and education industries can prevent epidemics from spreading with intervention plans and solutions to ensure continuity of public services. Proficient, on-site and remote communications are the cornerstone of such a plan.
Public healthcare officials’ top concern is a healthcare facility’s capacity to welcome all patients. Temporary hospitals can be built in 10 days, in extreme cases, as we have seen in Wuhan. But if this isn’t possible, available hospitals need to check and organize their emergency operation plans. They need to:
• Sustain internal and external communication with the community
• Check and coordinate available resources across the different wards
• Prepare organizational aspects
• Plan staff responsibilities
• Review available utilities, resources and assets to ensure the patients’ safety and security
The goal is to provide the best clinical support in any situation. Communications are mission-critical in effectively delivering an emergency operation plan and aligning all departments in one goal, to prepare the hospital for mass casualty incidents, as this strategic whitepaper details. This can be achieved by the integration of notifications, unified communications and collaboration services supported by a highly resilient and redundant communications system.
A second example is education. If officials encourage students and personnel to stay home, colleges and universities can reinforce the government messages on official communication channels and provide alternative continuity solutions, such as remote classes, with chat, voice and video integrated in the learning management systems that students can use from anywhere. Read more insights in this dedicated blog post on crisis management for campus safety.
In today’s connected world, emergency situations and ensuing business slowdown affect global economies and industries almost instantly whether it is the cost of oil, air travel and hospitality industries, consumer goods or luxury sales. Global companies with manufacturing, production or logistics operation sites in the affected areas can ensure business continuity, anticipate backup plans and put in place effective communications to prevent stock values from decreasing.
Managing the crisis: Break silos to accelerate resolution
When a global crisis is confirmed, multidisciplinary collaboration is key to successful crisis management. Lack of communications or acknowledgement of messages between the different emergency actors (first respondents, healthcare providers, public safety organizations) can have a huge impact on peoples’ health and lives. But connecting humans is only half of the story. Smart objects and the local IoT can also help solve a crisis. Breaking the silos not only between public services, but also between people, objects and processes can save the day when every single minute counts.
After the crisis: Lessons learned and processes redefined
Once the crisis is managed and over, it is the time for analysis. A lessons-learned phase is crucial to identifying what went well, what went wrong and project future action plans. No matter what the next major emergency might be, an environmental crisis, an epidemic, an attack, replaying, analyzing and enhancing the process could help shorten or even prevent it.
In this stage also, multidisciplinary collaboration is key. Different stakeholder insights and perspectives can help better define the necessary data and processes to analyze. Cross-department cooperation will help enhance future emergency anticipation, intervention and keep associated costs from spiraling out of control.
Looking into the future of emergency management with AI and IoT
AI and machine learning can transform global emergency management. Correlating different sources of data can be very helpful in identifying how an epidemic evolved, areas most exposed to risk and the most effective actions. Gathering data from previous crisis situations, transmission patterns and timescales can help governments or health agencies model action plans and population information broadcasts.
IoT is spreading to every area, particularly in the public safety and security sectors. The transportation industry deploys different types of cameras and sensors, including thermal imaging cameras that can instantly flag, from a safe distance, body temperatures above the 38-39°C fever range, identifying potentially infected individuals and notifying security staff.
Emergency intervention scenarios can facilitate crisis management and communications. Whether the trigger comes from a human being, a connected object (a camera or a sensor in a specific area) or AI, scenarios can be set up to ensure notifications are transferred to the right group of decision-makers, communicators and eventually affected populations. Instant communications scenarios should include all actors at the core of a crisis.
Connecting everything that matters
To conclude, the cornerstone of any emergency plan is to ensure people, processes, smart objects, and AI are connected, effectively communicating, resilient and available 24/7.
The Alcatel-Lucent Rainbow™ Communications Platform as a Service (CPaaS) can transform siloed communications into a truly coordinated, mission-critical teamwork. Rainbow CPaaS helps emergency players coordinate communications from people or smart objects, and improve responsiveness on any channel with SMS, chat, group alerts, audio and video calls or conferences. Connected with AI-powered bots, Rainbow allows people to alert authorities instantly, and vice-versa, helping both parties be aware and solve emergencies more efficiently, together.
Rainbow is built on top of reliable, enterprise-grade voice services, notification servers and emergency-certified solutions. Also, Rainbow can be integrated into various public or industry-specific applications as well as processes, to help public safety actors anticipate crisis and accelerate resolution.
Learn more about our Public safety solutions or contact us for a personalized use case.
* New York Times: The SARS and Coronavirus economic impact, https://www.nytimes.com/2020/02/03/business/economy/SARS-coronavirus-economic-impact-china.html
Advances in artificial intelligence are spreading all over the world at tremendous speed, creating incredible hype and raising our expectations. As a matter of fact, it is rather difficult to disappoint a user in the entertainment domain: an introduction of AI and neural networks instantly gains immense popularity (the Prisma and FaceApp applications are good examples of that). In this article, we have compiled 9 ways to use artificial intelligence in education.
Automated grading is a specialized AI-based computer program that simulates the behavior of a teacher to assign grades to essays written in an educational setting. It can assess students’ knowledge, analyzing their answers, giving feedback and making personalized training plans.
Intermediate interval education
Revising knowledge just as you are about to forget it is an effective educational technique. The Polish researcher Piotr Woźniak came up with an educational application based on this spacing effect. The app keeps track of what you are learning and when you are doing it. Using artificial intelligence, the application can estimate when you have most likely forgotten something and recommend that you revise it. It takes only a few revisions to make sure that the information is stored in your memory for many years.
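The scheduling idea behind such apps can be sketched very simply: each successful review pushes the next one further into the future, so material is revisited just before it fades. The doubling schedule below is a deliberate simplification of Woźniak's SuperMemo algorithms:

```python
from datetime import date, timedelta

def next_interval(days: int, remembered: bool) -> int:
    """Double the review interval after a successful recall; reset it after a lapse."""
    return days * 2 if remembered else 1

interval, due = 1, date.today()
for review in range(1, 6):  # five successful reviews in a row
    due += timedelta(days=interval)
    print(f"Review {review} due on {due} (interval: {interval} days)")
    interval = next_interval(interval, remembered=True)
```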
Feedback loops for teachers
Feedback, that is, students’ assessment of teachers, has a century-long history. Despite the shift from paper to online surveys, little to no progress has been made in the feedback loop area. Since student evaluation of teaching is often the most valuable source of information, it is obvious that it needs to be elevated.
Thanks to modern technologies, such as AI-driven chat robots, machine learning, and natural language processing, there are many interesting opportunities for improving the quality of feedback.
A chatbot can collect opinions via a dialog interface just like a real interviewer but with a small amount of required work from a person. A conversation can be adapted in accordance with the answers and personality of a student. A chatbot can even find out the reasons for this or that opinion. You can also filter out personal insults and obscene expressions, which are sometimes present in teacher’s ratings.
At the Georgia Institute of Technology, students were fascinated by a new teaching assistant named Jill Watson, who quickly and accurately answered students’ requests. However, the students did not know that Ms. Watson’s true identity was actually a computer equipped with an IBM AI system. Such virtual facilitators can be highly useful in the educational domain.
At Deakin University in Victoria, Australia, the development of a chat campus is in full swing. As with the teaching assistant, the intelligence behind this comes from IBM’s supercomputer system named Watson.
Once the project is finished, the chat campus will be able to answer questions related to everything that a student should know about the life on the campus. How to find the next lecture hall, how to apply for the next semester class, how to get assignments, where to find a parking lot or how to contact a professor – these are all the questions that AI chat campus bots will be able to solve.
Personalized learning refers to a variety of educational programs in which the pace of learning and the instructional approach are optimized for the needs of each learner. The experience is tailored to learning preferences and the specific interests of different learners. AI can adapt to the individual pace of learning and can consistently offer more complex tasks to accelerate learning. Thus, both fast and slow students can continue to study at their own pace.
Adaptive learning is perhaps one of the most promising areas of application of AI for education. It is assumed that artificial intelligence in schools can track the progress of each individual student and either adjust the course or inform the teacher about the material that a given student has difficulty comprehending. In this regard, it is also worth mentioning some intelligent tutoring systems.
Distance learning, the locomotive of modern high-tech education, implies running distance exams. How can we conduct them in such a way as to make sure that students do not cheat? AI-powered proctoring systems come to the rescue. Proctoring (a proctored test) is a mechanism that ensures the authenticity of the test taker and prevents cheating by having a proctor monitor the exam for its duration.
Data Accumulation and Personalization
Using geolocation data and our previous search queries, AI is already able to offer us the ideal cafe nearby or, for example, to build a route to the nearest store selling our favorite comic books. The same technology can be applied to education: a grammar rule can be taught through examples drawn only from the sphere that interests us, with all educational content adapting to the learner.
Although computers are better at data processing and making calculations, until now they have not been able to accomplish some of the most basic human tasks, like recognizing an apple or an orange in a basket of fruit. Computers can capture, move, and store data, but they cannot understand what the data mean. Thanks to Cognitive Computing, machines are bringing human-like intelligence to a number of business applications. Cognitive Computing is a term that IBM coined for machines that can interact and think like humans.
In today’s Digital Transformation age, various technological advancements have given machines a greater ability to understand information, to learn, to reason, and act upon it.
Today, IBM Watson and Google DeepMind are leading the cognitive computing space. Cognitive Computing systems may include the following components.
Components of cognitive computing
- Natural Language Processing – understand meaning and context in a language, allowing a deeper, more intuitive level of discovery and even interaction with information.
- Machine Learning with Neural Networks – algorithms that help train the system to recognize images and understand speech
- Algorithms that learn and adapt with Artificial Intelligence
- Deep Learning – to recognize patterns
- Image recognition – like humans but faster
- Reasoning and decision automation – based on limitless data
- Emotional Intelligence
Cognitive computing can help banking and insurance companies identify risk and fraud. It analyzes information to predict weather patterns. In healthcare, it is helping doctors treat patients based on historical data.
Recent examples of cognitive computing
- ANZ bank of Australia used Watson-based financial services apps to offer investment advice, reading through thousands of investment options and suggesting the best fit based on customer-specific profiles, further taking into consideration their age, life stage, financial position, and risk tolerance.
- Geico is using Watson-based cognitive computing to learn the underwriting guidelines, read the risk submissions, and effectively help underwrite
- Brazilian bank Banco Bradesco is using Cognitive assistants at work helping build more intimate, personalized relationships
- Among the personal digital assistants we have Siri, Google Now & Cortana – I feel Google Now adapts most easily and quickly to your spoken language. There is a voice command for just about everything you need to do: texting, emailing, searching for directions, weather, and news. Speak it; don’t text it!
Big Data gives the ability to store huge amounts of data, analytics gives the ability to predict what is going to happen, and cognitive computing gives the ability to learn from further interactions and suggest the best actions.
There are a lot of popular practices in the world of business management when it comes to optimizing processes. Two that are frequently seen to be on top of the list for businesses are DevOps and agile software development.
There are a lot of similarities between the two, but their differences are often used in debates about which one is better. However, what might be of note is that you could get more favorable results by using agile and DevOps in tandem. So, how do agile and DevOps interrelate?
Table of Contents
- What Is DevOps?
- What Is Agile Software Development?
- DevOps vs Agile
- How Do Agile and DevOps Interrelate?
The name is derived from its main functionality, which combines practices from both software development (Dev) and IT operations (Ops). DevOps is known for its large repository of optimization techniques and its continuous development process.
What DevOps does is that it eliminates any barriers between process and production. It is considered practical because it puts collaboration, integration, and proper communication at the forefront, allowing IT and development teams to work in tandem and use the same tools. This creates a constant flow of management that keeps refurbishing processes for quality results.
The cooperation of the two units is the reason behind faster problem solving, which makes a more efficient workflow and even more successful deployment. The integration of processes and the interconnectivity of the IT and development teams essentially produces faster service and, in turn, more customer satisfaction.
Interestingly enough, DevOps does include different agile principles and practices, specifically regarding aspects of collaboration and enhanced automation. However, DevOps focuses on both development and operations, while agile focuses more on the former.
The principles of agile software development center around the continuous reevaluation and advancement of systems to create more efficient processes that will inevitably increase productivity, raise profits, and satisfy customers.
Agile software development is not just one individual methodology. Instead, it is a collection of different principles and practices utilized connectedly to generate deliverables efficiently.
Agile is seen as the successor of the waterfall model, which is a more linear approach to project development. With the waterfall methodology, each phase within the project depends on the previous phase’s outcomes. Therefore, due to the ever-changing and fast-paced business world, the more adaptable agile methodology has proven to be more advantageous to companies.
Unlike the waterfall methodology, agile allows for work to be completed in smaller batches separately, significantly reducing the time needed to complete a process. Each batch is small enough to be completed relatively quickly in a short period of time, not holding back the rest of the project development.
Agile also relies on feedback and continuous testing that makes a business able to quickly reevaluate and reconstitute systems that no longer meet the business’ needs or do not perform as expected initially.
Essentially, agile software development provides adaptability to businesses and project management, so they can either deal faster with unexpected changes or keep improving their systems.
It is a common misconception that DevOps practices and the agile method have an array of differences and that DevOps is meant to replace agile, much like the agile method did with the waterfall methodology. This is not actually accurate. In reality, they are better seen as complementing each other.
For agile methodology, the focus is more on the quality of the interactions, prioritizing collaboration between units within the software development process from the inception phase to the deployment phase. The main work is completed in smaller batches and is organized appropriately based on the preset procedures.
For DevOps practices, the focus includes some of the same functions, but it extends its scope further. Along with the development process side, DevOps also incorporates operations and moves past the development phase to include delivery and maintenance through testing. Specifically, DevOps pivots toward testing and automation while also accounting for unexpected variables that affect the work.
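As an illustration of the testing-and-automation side of DevOps, below is a minimal smoke-test sketch of the kind a pipeline might run after every deployment. The service URL and health endpoint are hypothetical placeholders, not part of any specific toolchain.

```python
# Minimal sketch of an automated smoke test a DevOps pipeline might run
# after every deployment. The URL and endpoint are hypothetical
# placeholders -- substitute your own service.
import sys
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint

def service_is_healthy(url: str) -> bool:
    """Return True if the service answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    if service_is_healthy(HEALTH_URL):
        print("Smoke test passed: service is up.")
    else:
        print("Smoke test failed: alert the team or roll back.")
        sys.exit(1)  # non-zero exit fails the CI/CD stage
```

The non-zero exit code is what lets the pipeline halt a bad release automatically, which is exactly the fast feedback loop both agile and DevOps aim for.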
Separately, the two have quite a few downsides. For example, due to the fact that most of the work is done in smaller batches in the agile method, the coordination of the entire workflow is not always perfect. Moreover, agile project management lacks the ability to respond immediately to changes.
On the other hand, the continuous adjustments of processes in DevOps can be seen as challenging by many. This creates a need for very efficient and highly qualified individuals to navigate said changes.
Therefore, based on everything mentioned above, it becomes apparent that DevOps and agile should not be seen as competing forces but more as interrelated instead.
While, in some ways, DevOps can be seen as a successor to agile, it seems more fit to characterize it as an evolution of agile. Perhaps, a more straightforward perspective is to think about the DevOps approach as an add-on mechanism to agile. It is the inclusion of DevOps that helps agile fully achieve its objectives.
For example, consider how DevOps improves its processes through feedback loops that create a better delivery flow. Because agile focuses on interactions between different units, it smooths the communication that needs to take place between development and operations.
While their approaches are considered quite different, their continuous integration maximizes results, producing more effective outcomes than those created by each one alone. The synergy between the two enables businesses to increase the quality and accelerate software development.
While the two previously mentioned approaches look at software development from different viewpoints, their end goals are parallel to each other. Individuals often perceive them as competing forces or as a replacement for the other.
However, in reality, their infrastructures are more parallel than divergent. In fact, using a DevOps tool and agile software development in tandem produces much better results, as each approach brings different values that are missing or are less achieved by the other.
Perhaps, a better demonstration of how the two are interrelated is to showcase how DevOps practices and agile development solve the waterfall methodology’s inefficiencies. Unlike the waterfall model, both agile and DevOps focus on shorter release cycles and spend more time on automation and collaboration.
|
<urn:uuid:722f5912-8b82-435d-b6e6-e2e5453e5305>
|
CC-MAIN-2022-40
|
https://digitaldirections.com/how-do-devops-and-agile-interrelate/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00460.warc.gz
|
en
| 0.956843 | 1,239 | 2.515625 | 3 |
Are you a DIY enthusiast and desire to make a hologram projector for a long time? Then get off your couch now and do it without any further ado.
With a hologram projector, you can watch both 2D and 3D pictures, animations, or videos without any glasses.
That being said, you might think that it is extremely tough to make due to its complex mechanism.
Turns out, making a hologram projector is not as difficult as it may seem.
With a few simple steps, you will be able to grasp the whole process. Read on to know how to make a hologram projector effortlessly.
A hologram projector is basically a gadget that produces a three-dimensional illusion using holograms. It makes a picture appear as though it is floating in midair. Apart from pictures, it can create 3D-like videos and animations as well.
This projector is a basic gadget that can be made with the help of a couple of plastic or glass sheets by converting them into a pyramid. It uses special laser light or white light to project the holograms.
One of the best ways to become successful in making a hologram projector is knowing how it works. It will give you a basic idea about the item, and you will comparatively make fewer errors.
The hologram projector projects holograms using the Pepper’s Ghost principle. This principle is an illusion technique that is mostly used in theatres to project movies and in museums to show pieces of information.
In simple words, four evenly offset copies of a single hologram or picture are projected onto the four faces of the pyramid. By Pepper’s Ghost principle, each face reflects the picture falling on it toward the center of the pyramid. These reflections work together to form a complete figure, creating a three-dimensional image.
Making a hologram projector is not that tough, but it is a work of patience. A little mistake can make the whole project unsuccessful.
If you want to know how to make a hologram projector at home, we have discussed all the necessary steps below. Check them out!
Before you start the process of making a hologram projector, you should have all the materials beside you. This will make the process much more convenient for you since you won’t have to get up again and again to get the tools.
Here are the pieces of equipment you will need.
Now that you have all the necessary equipment draw a rough diagram for your projector. With the help of this diagram, you will be able to cut the plastic or the glass with utmost precision.
We recommend making the diagram on graph paper because it will help you draw everything in completely straight lines, ensuring accurate measurement.
To make a regular-sized hologram projector, you should draw a figure with an 18 cm base, 10.5 cm height, and a 2 cm top region.
If you are also wondering how to make a hologram projector for your phone, you will be delighted to know that the process is almost identical; you will just have to change the dimensions. The hologram projector for the phone should have a 6 cm base, 3.5 cm height, and 1 cm top portion.
You can increase or decrease the size according to your requirement, but we suggest you maintain the ratio of the dimensions mentioned above.
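If you want to resize the template programmatically, here is a small sketch that scales the trapezoid while keeping the ratio. Note that the article's phone template rounds the top edge up to 1 cm rather than the exact scaled value.

```python
# Minimal sketch: scale the trapezoid template while preserving its ratios.
# Base dimensions come from the article (18 cm base, 10.5 cm height, 2 cm top).

REGULAR = {"base_cm": 18.0, "height_cm": 10.5, "top_cm": 2.0}

def scaled_template(scale: float, template: dict = REGULAR) -> dict:
    """Return a new set of trapezoid dimensions, scaled uniformly."""
    return {name: round(value * scale, 2) for name, value in template.items()}

# Example: a projector at 1/3 the regular size (close to the phone version)
print(scaled_template(1 / 3))
# {'base_cm': 6.0, 'height_cm': 3.5, 'top_cm': 0.67}
```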
After making the rough diagram on graph paper, place it on a completely flat surface with enough lighting.
With the help of an anti-cutter (utility knife) or a pair of scissors, which you can purchase online or at your hardware store, you will be able to cut the glass or CD cover in a matter of minutes.
Check out how you will be able to cut the glass effectively.
As you will have to make a pyramid shape to enable hologram projection, at this step, you will have to connect the four glass pieces.
To join the glass pieces, you can either use glue or scotch tape. However, it is better to use glue since it won’t block any rays of light from coming in or going out from the projector.
This is one of the most crucial steps because if you make the slightest mistake while joining, the projector may come off after a certain period of time or can create scattered 3D-like illumination.
Now that your hologram projector is ready, it is time to check if it really works and the best way to do it is by projecting a hologram using it.
To help you with the process, here is a brief description of it. Have a look!
A simple screensaver picture that fills the screen or a video that goes back and forth displayed on the screen will be more than enough to project 3D illusions. After setting the picture, put it on a plane surface.
Increase the auto screen off time during this process to ensure that the image is displayed on the screen for a long period of time.
Keep the hologram projector with the short end on the mobile phone screen and ensure that the video or the moving image is displayed. When the image is still moving, you will observe a 3D-like hologram projected in midair.
Making a hologram projector is innovative work. With a hologram projector, you will be able to make simple videos and moving images much more interesting.
Due to the pyramid-like structure of this projector, you might first assume that making this will be a tough talk. Hopefully, after reading this article about how to make a hologram projector, your assumption has changed.
About Dror Wettenstein
Dror Wettenstein is a software engineer and entrepreneur with more than 15 years of experience in the industry. He is the founder of TechTreeRepeat, a company that enables technical writers to publish their work faster and share it with readers across the globe. Dror has a master’s degree in computer science from San Diego State University and a bachelor’s degree in physics from UC Irvine.
When he’s not working on software projects, Dror enjoys writing articles and essays on various topics. He also likes playing guitar and spending time with his wife and two young children.
|
<urn:uuid:4550faae-698b-4d7c-a5aa-57d2e8c658e7>
|
CC-MAIN-2022-40
|
https://www.ceedo.com/how-to-make-a-hologram-projector/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00660.warc.gz
|
en
| 0.932367 | 1,258 | 2.796875 | 3 |
What are the 5 Levels of Autonomous Driving?
Autonomous driving is not an all-or-nothing affair. In reality, automation of driving functions has a long history and has been steadily expanding for decades. The use of speed control with a centrifugal governor dates back to the 1900s and 1910s, while modern cruise control was invented in 1948. Anti-lock brakes were first used for aircraft in 1929. The growing use of automation of driving functions is also apparent in newer features like intelligent parking assist and lane keeping assist systems.
Here, we take a look at the growing use of automation and present a model that captures the transition to truly self-driving cars based on work by the U.S. government and SAE International.
|
<urn:uuid:c13b7f13-2c76-4d23-94ab-7fc726dd3ea6>
|
CC-MAIN-2022-40
|
https://www.iotworldtoday.com/2016/08/23/what-are-5-levels-autonomous-driving/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00660.warc.gz
|
en
| 0.965086 | 152 | 3.21875 | 3 |
With the onset of COVID-19, educational institutions have moved at least some – if not all – of their teaching online. While this may work sufficiently well for classroom-based university lectures, the same cannot always be said for the younger set. Even university students experience their fair share of problems with online classes such as the lack of discussion and interaction with classmates. However, technology has never failed to amaze us and it certainly isn’t going to start now – interactive whiteboards are a good supporting tool for remote learning, especially among the younger ones.
How Interactive Whiteboards Work
An interactive whiteboard connects its display surface to a projector as well as a computer; with the help of the projector, users can see the images from the computer on the whiteboard's surface. Users can interact directly with the display using a mouse or even their finger, depending on how the device works.
Boost Interaction and Keep Students Engaged
There’s a reason teaching styles vary so drastically at university and grade school level. Grade school students simply don’t have the same attention spans as older students and if you are going to conduct a class “lecture style”, you can bet many of them will become disengaged before long. Fortunately, with interactive whiteboards and displays, students can participate in classes much in the same way as they would in a traditional classroom. Some features of interactive whiteboards include:
- Downloadable graphics such as graphs and charts and even educational videos.
- Participative handheld devices which allow students to “vote” or select an option, with the ability to see how the rest of their classmates answered.
- A variety of features such as customizable web interfaces, drag-and-drop applications and more.
Tips for Teachers and Educators
While the novelty of using interactive whiteboards may keep students engaged, teachers need to exercise the right level of moderation between fun and education. It would do to keep the following in mind:
- Minimize the amount of text on the display. Essential information can be imparted verbally or through collaborative activities.
- Use participative handheld devices to start discussions about why an answer is right or wrong.
- Ensure that students do not get so distracted by the novelty of the technology that more focus is placed on it over the content of the lesson.
Get the Perfect Interactive Display Solution at Ameritechnology
If you are looking to get your classroom started with an interactive display solution, you have come to the right place. Ameritechnology has over three decades of experience delivering excellent one-on-one services for our clients in order to boost productivity and efficiency. With expertise in the latest interactive display technology, our friendly team is confident of improving your bottom line when you engage us. If you have any questions regarding our products and services or would like a recommendation, please do not hesitate to contact us today.
|
<urn:uuid:12798709-a0b9-4bf9-87f3-f166816c8fa8>
|
CC-MAIN-2022-40
|
https://www.atechnj.com/interactive-whiteboards-can-boost-remote-learning-during-covid-19/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00660.warc.gz
|
en
| 0.949819 | 591 | 2.890625 | 3 |
The government’s goal right now is to promote its robot employees to higher tasks.
A lot of very smart people, like Elon Musk and the late Stephen Hawking, believe that artificial intelligences and the robotic bodies that could house them will eventually lead to the end of the world, or at least the end of human civilization. That concept has also become a cliché in science-fiction. Personally, I don’t think robots and AI will ever eliminate humans, not so much because I don’t see their potential, but because I don’t think humans are talented enough to build something both powerful and malicious enough to destroy humanity, much less something that could really become sentient, which is obviously the first requirement.
In the short term though, there is a good chance that AI and robotics might advance automation to the point where quite a few people will be put out of work. Some researchers believe that about half of all job fields have vulnerable workers who could be replaced by AI or robotics by 2026. And in what could make things even more difficult for women in the workforce, those studies also suggest that traditionally female-dominated fields might be more harshly affected by the new robotic employees.
The government isn’t falling behind on AI and robotics but is taking a careful approach designed to use robotics and automation to elevate humans to higher, better roles instead of eliminating them completely. At least that’s the plan.
Currently, there are 20 federal agencies that employ one of the lowest levels of robotics in the workforce, called robotic process automation, or RPA. The General Services Administration is one of the most invested, with 10 RPA systems on the job now and plans to increase that to 25 by the end of the year. In general, RPA systems look impressive on the surface, performing multiple tasks quickly, such as looking up employee data and issuing payroll checks or dispatching money electronically. But they are only following a highly defined ruleset, which they can never break. They are not true AIs or even all that advanced in terms of robotics. They are more like old-school expert systems, but they can still perform repetitive tasks accurately and with great speed.
The government’s goal right now is to promote its robot employees to higher tasks. For that, it needs to develop intelligent process automation, or IPA. The duties of an IPA system are similar to those performed by the non-intelligent RPAs, with the exception that IPAs can learn from their environment, make some level of judgment call should something fall outside of its programmed parameters, and remember those decisions if they are successful.
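As a rough illustration of the difference (a hypothetical example, not an actual government system), an RPA step only follows fixed rules, while an IPA step can also remember the outcomes of human judgment calls and reuse them:

```python
# Illustrative sketch: an RPA step follows a fixed rule set, while an IPA
# step also remembers decisions that worked so it can handle similar cases.

def rpa_route_invoice(invoice: dict) -> str:
    """Rule-only automation: anything outside the rules needs a human."""
    if invoice["amount"] <= 10_000 and invoice["vendor_approved"]:
        return "auto-pay"
    return "escalate-to-human"

class IpaRouter:
    """Adds a tiny memory: successful human decisions become new guidance."""
    def __init__(self):
        self.learned = {}  # vendor -> previously approved outcome

    def route(self, invoice: dict) -> str:
        decision = rpa_route_invoice(invoice)
        if decision == "escalate-to-human":
            # Fall back to anything learned from earlier human judgments
            decision = self.learned.get(invoice["vendor"], decision)
        return decision

    def record_human_decision(self, vendor: str, outcome: str):
        self.learned[vendor] = outcome

router = IpaRouter()
router.record_human_decision("Acme Corp", "auto-pay")
print(router.route({"vendor": "Acme Corp", "amount": 12_000,
                    "vendor_approved": True}))  # learned from a human
```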
The Defense Advanced Research Projects Agency is taking the lead on evolving government robotics to the next step. It recently issued a request for information to see how IPA is being successfully used in the commercial sector, and where the technology could perhaps interface with government. Specifically, the new government robots would need to be able to work in fields like procurement, where they would be expected to run cost analysis studies, modify contracts and evaluate proposals from vendors.
The goal of the new advanced government robots would not be to replace humans, though that would certainly happen in some cases, but instead to elevate them out of jobs where they have no future. The theme of elevating, not replacing, humans using AI and robotics was repeatedly stressed at the 2019 ACT-IAC Artificial Intelligence and Intelligent Automation Forum in Washington, D.C.
“We shouldn’t be hiring for positions that can be automated,” said Ed Burrows, a senior adviser to GSA’s chief financial officer, at the forum. “That becomes a dead-end job. So that’s one thing to keep in mind. We should think about automation first.”
In fact, the government has been directed to do just that according to the President’s Management Agenda, which stipulates that wherever possible, government workers should be moved up from menial positions to higher-value tasks. Employing either IPA or RPA systems could accomplish that by filling in and taking on the necessary, low value jobs that humans will be abandoning. It’s just that the IPA systems could expand the definition of a menial task a little bit more toward the high end, displacing or promoting more humans, depending on how you want to look at it.
It’s good that government is looking at the impact of AI and robotics on humans, and not just on the increased efficiencies the new systems, especially IPA ones, could offer. I doubt that too many private companies would worry about workers losing their jobs if it meant an increase in their bottom line. However, my guess is that by taking a balanced approach and promoting workers who have proven their skills and reliability doing drudge work to higher-level tasks in the same field, agencies will improve operations even more in the long run.
If all goes to plan, people won’t be complaining that a robot took their job, at least in government. Instead, they might find themselves bragging about an advanced robot that took over their old, crappy job, and thus paved the way for their own well-deserved promotion.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys
|
<urn:uuid:87ac6fdd-2f47-447d-a1b1-78478a909b4c>
|
CC-MAIN-2022-40
|
https://www.nextgov.com/ideas/2019/04/are-government-robots-coming-your-job/156010/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00660.warc.gz
|
en
| 0.970446 | 1,108 | 2.609375 | 3 |
Today In History May 25
1950 Brooklyn Battery Tunnel opens in NYC
The Brooklyn–Battery Tunnel (officially the Hugh L. Carey Tunnel, commonly referred to as the Battery Tunnel) is a tolled tunnel in New York City that connects Red Hook in Brooklyn with Battery Park in Manhattan. The tunnel consists of twin tubes that each carry two traffic lanes under the mouth of the East River. Although it passes just offshore of Governors Island, the tunnel does not provide vehicular access to the island. With a length of 9,117 feet (2,779 m), the Brooklyn–Battery Tunnel is the longest continuous underwater vehicular tunnel in North America.
Plans for the Brooklyn–Battery Tunnel date to the 1920s. Official plans to build the tunnel were submitted in 1930 but were initially not carried out. The New York City Tunnel Authority, created in 1936, was tasked with constructing the tunnel. After unsuccessful attempts to secure federal funds, New York City Parks Commissioner Robert Moses proposed a Brooklyn–Battery Bridge instead. However, the public opposed the bridge plan, and the Army Corps of Engineers rejected it several times, concerned that the bridge would obstruct shipping access to the Brooklyn Navy Yard. This prompted city officials to reconsider plans for a tunnel. Construction of the Brooklyn–Battery Tunnel began on October 28, 1940, but its completion was delayed due to World War II-related material shortages. The tunnel opened on May 25, 1950.
1964 US Supreme Court rules closing schools to avoid desegregation is unconstitutional
Griffin v. County School Board of Prince Edward County was a case in which the U.S. Supreme Court on May 25, 1964, ruled (9–0) that a Virginia county, seeking to avoid desegregation, could not close its public schools and use public funds to support private segregated schools. The court held that the policy violated the Fourteenth Amendment's equal protection clause.
A federal district court had ruled that the closing of the county's public schools was a violation of the equal protection clause, which guarantees that no person or group will be denied the protection under the law that is enjoyed by similar persons or groups. However, an appellate court reversed the ruling, finding that the district court should have abstained until the state courts had rendered their decision. The Supreme Court of Appeals of Virginia subsequently ruled for Prince Edward County, holding that the county had the right to close its public schools and that state funds could be used at the segregated private schools.
1962 US unions AFL-CIO starts campaign for 35-hour work week
The freedom of workers to join together in unions and negotiate with employers (in a process known as collective bargaining) is widely recognized as a fundamental human right across the globe. In the United States, this right is protected by the U.S. Constitution and U.S. law and is supported by a majority of Americans.
More than 16 million working women and men in the United States are exercising this right; these 16 million workers are represented by unions. Overall, more than one in nine U.S. workers is represented by a union. This representation makes organized labor one of the largest institutions in America.
1983 1st US National Missing Children’s Day is proclaimed
National Missing Children's Day has been observed in the United States on May 25 since 1983, when it was first proclaimed by President Ronald Reagan. It falls on the same day as International Missing Children's Day, which was established in 2001.
In the several years preceding the establishment of National Missing Children's Day, a series of high-profile missing-children cases made national headlines.
On May 25, 1979, Etan Patz was just six years old when he disappeared from his New York City home on his way to the school bus stop. The date of his disappearance was later designated as National Missing Children's Day. At the time, cases of missing children rarely received national media attention, but his case quickly drew widespread coverage. His father, a professional photographer, distributed black-and-white photographs of him in an effort to find him. The massive search and media attention that followed focused the public's attention on the problem of child abduction and the lack of plans to address it.
For nearly three years, media attention was centered on Atlanta, Georgia, where the bodies of young children were found in lakes, marshes, and ponds along roadside trails. Twenty-nine bodies were recovered in the Atlanta murders of 1979–1981 before a suspect was arrested and convicted.
|
<urn:uuid:1596b1e5-92ec-4ec0-8da3-d2cc8821aba2>
|
CC-MAIN-2022-40
|
https://areflect.com/2020/05/25/today-in-history-may-25/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00660.warc.gz
|
en
| 0.971224 | 983 | 3.203125 | 3 |
Introduction to Data Analytics
Today, practically every firm has evolved into a data-driven organization, which implies that they are implementing a strategy to acquire more data about their customers, markets, and business processes. This data is then classified, saved, and analyzed in order to make sense of it and gain useful insights.
Table of contents:
- What is Data Analytics?
- Why is Data Analytics important?
- Data Analytics tools
- How to become a Data Analyst
- Career Scope in Data Analytics
What is Data Analytics?
Most businesses are constantly collecting data, but this data is meaningless in its raw form. This is when data analytics enters the game. Data analytics is the process of analyzing raw data to derive meaningful, actionable insights that can subsequently be utilized to inform and drive smart business decisions.
A data analyst will take raw data, arrange it, and then analyze it, changing it from unintelligible statistics into cohesive, understandable information. After interpreting the data, the data analyst will share their findings in the form of suggestions or recommendations for the company’s next steps.
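As a minimal sketch of that organize-analyze-summarize workflow in Python (the file and column names here are hypothetical examples):

```python
# Minimal sketch of the organize -> analyze -> summarize workflow using
# pandas. The file name and column names are hypothetical examples.
import pandas as pd

df = pd.read_csv("sales.csv")  # raw, unorganized data

# Organize: drop incomplete rows, normalize a text column
df = df.dropna(subset=["region", "revenue"])
df["region"] = df["region"].str.strip().str.title()

# Analyze: turn raw numbers into a cohesive summary
summary = df.groupby("region")["revenue"].agg(["count", "mean", "sum"])

# Share findings that can inform the next business decision
print(summary.sort_values("sum", ascending=False))
```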
Data analytics allows you to make sense of the past and predict future patterns and behaviors; rather than relying on guessing, you can make informed decisions based on what the data is showing you.
Armed with data insights, businesses and organizations can gain a much deeper understanding of their audience, industry, and firm as a whole—and, as a result, are much more positioned to make decisions and plan forward.
Check out our blog on Data Science tutorial to learn more about Data Science.
Why is Data Analytics important?
Data analytics plays an important part in business improvement since it is utilized to collect hidden insights, develop reports, conduct market studies, and improve business requirements.
Data analysis is a type of internal arrangement in which numbers and figures are presented to management. With the help of data analytics, enterprises will be able to decide on customer trends and behavior forecasting, boost business revenues, and promote efficient decision-making.
Role of Data Analytics:
- Gather Hidden Insights- Data is mined for hidden insights, which are then analyzed in light of business requirements.
- Generate Reports- Reports are generated from the data and distributed to the appropriate teams and individuals to deal with additional steps for a successful business.
- Perform Market Analysis- Market analysis can be used to determine the strengths and weaknesses of competitors.
- Improve Business Requirements- Data analysis enables businesses to better meet the needs and expectations of their customers.
Data Analytics tools
With the growing market demand for Data Analytics, different tools with varying functionality have arisen. The top data analytics tools, whether open-source or user-friendly, are as follows.
- R programming: This is the most widely used analytics tool for statistics and data modeling. R can be compiled and run on a variety of systems, including UNIX, Windows, and Mac OS.
- Python: Python is an object-oriented programming language that is open source and simple to learn, develop, and maintain. It includes libraries for machine learning and visualization such as Scikit-learn, TensorFlow, Matplotlib, Pandas, and Keras.
- Tableau Public: This is a free program that links to any data source, including Excel and corporate data warehouses. It then generates visualizations, maps, dashboards, and other web-based tools with real-time changes.
- SAS: This tool, which is a programming language and environment for data manipulation and analytics, is simple to use and can analyze data from various sources.
- Microsoft Excel: This is a popular data analytics tool. It analyzes and summarizes data with a preview of pivot tables and is mostly used for clients' internal data.
- Apache Spark: This tool executes applications on Hadoop clusters up to 100 times faster in memory and 10 times faster on disk, making it one of the largest large-scale data processing engines. This tool is also widely used for developing data pipelines and machine learning models.
- RapidMiner: A sophisticated, integrated platform capable of integrating with any sort of data source, including Access, Excel, Microsoft SQL, Teradata, Oracle, and Sybase. This technology is usually used for predictive analytics, such as data mining, text analytics, and machine learning.
Prepare yourself for the industry by going through these Data Analyst Interview Questions now!
How to become a Data Analyst
It is recommended to have a strong CGPA and an undergraduate degree from a data analysis program. Even if a person did not specialize in data analysis, a degree in mathematics, statistics, or economics from a well-known university can lead to an entry-level Data Analyst position.
A bachelor’s degree is required for entry-level data analyst positions. Higher-level data analyst jobs normally pay more and may require a master’s degree. Aside from the degree, a person interested in becoming a Data Analyst may enroll in online courses.
- Technical Skills
Programming Languages: A Data Analyst should be familiar with at least one programming language. R, Python, C++, Java, MATLAB, PHP, and other programming languages are among those that can be used to edit data.
Data Management and Manipulation: A Data Analyst should be familiar with programming languages such as R, HIVE, SQL, and others. Developing queries to retrieve the relevant data is a critical component of Data Analytics.
- Soft Skills
A data analyst is in charge of giving management accurate and trustworthy information. Data analysts must therefore have a thorough understanding of the data as well as each user’s specific requirements. Good communication skills are also required when working with others to ensure that the data is effectively aligned with the objectives.
- Practical Skills
Mathematical Ability: A Data Analyst must understand statistics and be conversant with the formulae required for data analysis in order to deliver real-world value. As a Data Analyst, you must understand mathematics and be able to handle common business problems. You must also understand how to use tables, charts, graphs, and other tools.
Microsoft Excel: Data Analysts’ primary responsibilities include organizing data and gathering numbers. As a result, it is important for a Data Analyst to be familiar with Excel.
Career Scope in Data Analytics
A Data Analyst can expect to make a significant amount of money, do fascinating work, and have a lot of job stability. This is a career that is always changing and diverse and requires a lot of attention to detail and a focus on quality. A profession in Data Analytics also provides excellent prospects for progression.
Data Analyst is a position that is clearly on the rise. The difference between mid-level and senior-level positions is determined by experience and extra education. However, because Data Analysts are in such great demand at all levels, job growth is expected to be positive for each tier over the next decade, ranging from 5% for a Financial Analyst to 25% for an Operations Research Analyst.
|
<urn:uuid:f9324243-4b3a-4d1e-997a-bce400b2e37b>
|
CC-MAIN-2022-40
|
https://www.businessprocessincubator.com/content/introduction-to-data-analytics/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00660.warc.gz
|
en
| 0.919243 | 1,443 | 3 | 3 |
There are many Ethernet network cable wires used for data center applications such as Cat5e, Cat6, Cat6a, and Cat7 cables. The conductor metals adopted by those network patch cables vary in different kinds such as oxygen-free copper wire, pure copper wire, copper clad aluminum wire, and aluminum wire. This article discusses the above network cable wire and compares the differences.
What Is Oxygen-Free Copper Wire?
An oxygen-free copper wire is the highest conductivity copper cable wire that is refined to reduce the amount of oxygen to less than 0.003%, the total impurity content to less than 0.05%, and the purity of copper to more than 99.95%. Thereby, improving the conductivity and oxidation resistance.
What Is Pure Copper Wire?
The pure copper wire has a slightly lower copper content than that of oxygen-free copper wire, which is around 99.5% to 99.95%. The other impurities are some metals such as iron and oxygen. The pure copper wire has excellent conductivity, thermal conductivity, plasticity, and is easy to be pressed.
What Is Copper Clad Aluminum Wire?
The copper clad aluminum wire is an electric conductor composed of an inner aluminum core and an outer copper cladding. Since it contains aluminum, it is significantly lighter and weaker than pure copper wire or oxygen-free copper wire, but stronger than pure aluminum wire. Copper clad aluminum wire is not compliant with UL and TIA standards, both of which require solid or stranded copper wires, but it’s a cheap alternative for category twisted-pair communication cables.
What Is Aluminum Wire?
An aluminum wire is made of pure aluminum. Due to the lightweight nature of aluminum, aluminum wire is quite malleable. However, when compared with copper wire, it has lower electrical and mechanical properties, which is a relatively poor electrical conductor.
Aluminum VS Copper Wire: Which Is the Better Network Cable Wire?
Despite being the best material, copper is a little more expensive than aluminum. Thus, people prefer to use aluminum to save money without compromising quality. However, when aluminum wire warms, it expands, and when it cools, it shrinks. With each gradual warm-cool cycle, the tightness of the wiring decreases, resulting in sparks or even fires. Aluminum wire will also corrode when it encounters certain metal compounds, which increases the resistance of the connection. Thus, aluminum wire requires more maintenance than copper wire. In contrast, copper has one of the highest electrical conductivity rates among metals. Copper has high tensile strength, so it can withstand extreme stress, and it is more durable. Due to its high elasticity, high durability, low maintenance, and high performance, it is a more stable material than aluminum. So a good manufacturer will use a great deal more copper in the wire to ensure performance.
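To put rough numbers on the conductivity difference, here is a short sketch using the standard resistivities of copper and aluminum at 20 °C; the wire diameter is an assumed 23 AWG-like value, typical of category cable conductors.

```python
# Sketch: compare resistance per 100 m of a 23 AWG-sized conductor
# (~0.57 mm diameter) for copper vs. aluminum, using standard
# resistivities at 20 C.
import math

RESISTIVITY = {"copper": 1.68e-8, "aluminum": 2.82e-8}  # ohm-meters

def resistance_ohms(metal: str, length_m: float, diameter_mm: float) -> float:
    """R = rho * L / A for a round conductor."""
    area_m2 = math.pi * (diameter_mm / 1000 / 2) ** 2
    return RESISTIVITY[metal] * length_m / area_m2

for metal in RESISTIVITY:
    r = resistance_ohms(metal, length_m=100, diameter_mm=0.57)
    print(f"{metal}: {r:.2f} ohms per 100 m")
# Aluminum comes out roughly 68% more resistive than copper at the same
# gauge, one reason category cable standards specify copper conductors.
```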
Now we know that copper wire outweighs aluminum wire when used in wired networks. The higher the copper content of the network cable wire, the better the conductivity and transmission capacity. However, most of the network cables sold on the market are pure copper wires or copper clad aluminum wires. FS provides oxygen-free copper wires, which outperform their peers. These oxygen-free copper wires pass the Fluke Channel Test 100% and come with a PVC CM jacket, making them the best choice in terms of price and quality. If you're interested, please visit www.fs.com.
|
<urn:uuid:867eb955-0ecc-48cf-b5f8-a6347f16f387>
|
CC-MAIN-2022-40
|
https://www.fiber-optical-networking.com/tag/network-cable-wire
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00660.warc.gz
|
en
| 0.906777 | 709 | 2.828125 | 3 |
When Alan Turing was developing the Turing Test for determining machine intelligence—what we today refer to as Artificial Intelligence (AI)—he first posed the question, “Can machines think?” At the time, this seemed preposterous, but technological advances in recent years now have us asking a similar question: “Can machines learn?” The answer is “yes,” as long as we supply them with enough of the right kind of data.
This ability to learn is a specific, data-oriented form of AI that has the power to dramatically increase productivity and streamline business processes. Companies are increasingly looking to AI and specifically machine learning (ML) as a way to improve their businesses.
While many of these applications are still in the early developmental phases, companies must prepare now to successfully implement AI and ML in the future and take full advantage of the power of this technology. Specifically, organizations need to establish data ecosystems today that will become the foundation of ML applications, training models, and business process development in the future.
Machine learning analyzes data flow in the enterprise. Anticipating this eventual capability, the most critical issue is how companies prepare the data that will be used as the basis for machine learning. Data scrubbing and formatting is one of the greatest challenges in implementing AI so companies should adopt clean data format methods now.
To address this challenge, IT and the lines of business need to organize around a specific business problem they are trying to solve. Defining that specific problem creates a direct correlation between the input, its format and shape, and ultimately the decision output. Therefore, understanding the outcome and the kinds of decisions that need to be made will determine which data sets you use and how you need to scrub, transform, or clean them.
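As a minimal sketch of that scrubbing-and-formatting step ahead of model training (the file name and column names are hypothetical):

```python
# Minimal sketch of scrubbing and formatting data before it can feed a
# training model. Column names are hypothetical; adapt to your own data.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("process_records.csv")

# Clean: remove duplicates and rows missing the decision output (the label)
df = df.drop_duplicates().dropna(subset=["outcome"])

# Transform: a consistent, numeric shape the learning algorithm can use
X = pd.get_dummies(df.drop(columns=["outcome"]))  # one-hot encode categories
y = df["outcome"]

# Hold out data so the trained model can be evaluated honestly
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(f"{len(X_train)} training rows, {len(X_test)} evaluation rows")
```

Defining the decision output first, as the passage recommends, is what tells you which column is the label and which data sets need cleaning at all.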
Looking ahead into 2018 and the next few years, there are three steps companies looking to power their process optimization efforts with AI must take:
- Look for “low-hanging fruit” opportunities where AI can make a difference and good, clean data is readily accessible.
- Build a solid technical infrastructure and invest in developing the digital assets for collecting and organizing the data needed for AI and machine learning.
- Invest in a common platform that allows IT and the lines of business to collect and clean the data that will be used to build the training models for machine learning.
A comprehensive process automation platform can help businesses address all three of these steps and businesses who use one to make sure they are ready for AI- and ML-enabled applications often fall into one of three categories – each with its unique challenges:
- New user with existing LOB data
- New user with no existing LOB data
- User with an existing platform and an existing data repository
|
<urn:uuid:2997da71-96ba-44e9-8adf-5691ade2a6ee>
|
CC-MAIN-2022-40
|
https://www.nintex.com/blog/how-to-prepare-your-data-for-the-future/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00660.warc.gz
|
en
| 0.951723 | 556 | 2.609375 | 3 |