A shortage of human cybersecurity talent is causing companies to turn to artificial intelligence (AI) to fill in the gaps.
An estimated one million cybersecurity jobs are expected to go unfilled this year around the world, according to the Information Systems Audit and Control Association. Qualified cybersecurity employees are in such short supply that consulting firms, security firms and businesses with pressing needs are turning to artificial intelligence.
The scarcity is such that many companies fill corporate cybersecurity jobs with employees from other departments or with recent, inexperienced graduates. This leaves companies open to cybercriminals who can launch attacks backed by thousands of computers in a botnet, and more.
As a result, companies are turning to artificial intelligence, or AI, to plug their cybersecurity gaps, according to Sean Joyce, head of PwC's US cybersecurity and privacy practice. PwC itself uses AI capabilities such as predictive analytics to identify “hot spots” where dangerous cyberattacks are originating and spreading around the world, according to the Wall Street Journal.
Another firm, Booz Allen Hamilton, uses AI tools to spot critical threats and highlight them for cybersecurity experts so the threats can be defused. The use of AI, the company says, helps cybersecurity pros combat “alert fatigue”, the state of professionals faced with a constant barrage of security alerts.
Jim Brennan, IBM Security vice president for strategy and offering management, adds:
The skills shortage in cybersecurity is real. We’re seeing it every day with clients. This added intelligence also allows junior staffers to learn on the job by providing fast access to historical insights and security research of more senior analysts.
IBM’s own Watson for Cybersecurity “works side by side with analysts to boost their accuracy and shorten their reaction time” according to Brennan.
Chicago-based cybersecurity services firm Trustwave Holdings uses AI and machine-learning products to provide protection for small and medium-sized companies. “[H]alf or fewer of their security staff have the specialized skills and training to address more complex security issues,” according to a Trustwave spokesman, citing a survey in which more than 60% of participating companies expressed that view.
Image credit: Pexels.
Source: https://www.lifars.com/2017/10/cybersecurity-talent-shortage-forces-companies-to-turn-to-ai/
Data has become the most critical factor in business today. As a result, different technologies, methodologies, and systems have been invented to process, transform, analyze, and store data in this data-driven world.
However, there is still much confusion regarding the key areas of Big Data, Data Analytics, and Data Science. In this post, we will demystify these concepts to better understand each technology and how they relate to each other.
- Big data refers to any large and complex collection of data.
- Data analytics is the process of extracting meaningful information from data.
- Data science is a multidisciplinary field that aims to produce broader insights.
Each of these technologies complements the others, yet each can be used on its own. For instance, big data systems can store large sets of data, while data analytics techniques can extract information from even simpler datasets.
Read on for more detail.
What is big data?
As the name suggests, big data simply refers to extremely large data sets. This size, combined with the complexity and evolving nature of these data sets, means they surpass the capabilities of traditional data management tools. As a result, data warehouses and data lakes have emerged as the go-to solutions for handling big data, far surpassing the power of traditional databases.
Some data sets that we can consider truly big data include:
- Stock market data
- Social media
- Sporting events and games
- Scientific and research data
(Read our full primer on big data.)
Characteristics of big data
- Volume. Big data is enormous, far surpassing the capabilities of normal data storage and processing methods. The volume of data determines if it can be categorized as big data.
- Variety. Big data sets are not limited to a single kind of data; they consist of many kinds, from tabular databases to images and audio, regardless of data structure.
- Velocity. The speed at which data is generated. In big data, new data is constantly generated and added to the data sets. This is especially prevalent with continuously evolving sources such as social media, IoT devices, and monitoring services.
- Veracity or variability. There will inevitably be some inconsistencies in the data sets due to the enormity and complexity of big data. Therefore, you must account for variability to properly manage and process big data.
- Value. The usefulness of Big Data assets. The worthiness of the output of big data analysis can be subjective and is evaluated based on unique business objectives.
Types of big data
- Structured data. Any data set that adheres to a specific structure can be called structured data. These data sets can be processed relatively easily compared to other data types because users can identify the exact structure of the data. A good example of structured data is a distributed RDBMS that stores data in organized table structures.
- Semi-structured data. This type of data does not adhere to a specific structure yet retains some kind of observable structure, such as a grouping or an organized hierarchy. Some examples of semi-structured data are markup languages (XML), web pages, and emails.
- Unstructured data. This type of data does not adhere to a schema or a preset structure. It is the most common type of data when dealing with big data; things like text, pictures, video, and audio all fall under this type.
(Get a deeper understanding of these data types.)
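To make these three categories concrete, here is a minimal Python sketch (standard library only; the sample records are invented for illustration) showing how differently each kind of data must be handled:

```python
import csv
import io
import json

# Structured: rows conforming to a fixed schema (e.g., a CSV export of an RDBMS table).
structured = io.StringIO("id,name,amount\n1,Alice,120.50\n2,Bob,75.00\n")
rows = list(csv.DictReader(structured))
print(rows[0]["name"])  # fields are directly addressable because the schema is known

# Semi-structured: no rigid schema, but an observable hierarchy (JSON, XML, emails).
semi = json.loads('{"user": {"name": "Alice", "tags": ["vip", "beta"]}}')
print(semi["user"]["tags"])  # structure is discovered by walking the hierarchy

# Unstructured: free-form content with no preset schema (text, pictures, video, audio).
unstructured = "Thanks for the quick delivery! Will order again."
print(len(unstructured.split()))  # even a simple word count needs ad-hoc processing
```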
Big data systems & tools
When it comes to managing big data, many solutions are available to store and process the data sets. Cloud providers like AWS, Azure, and GCP offer their own data warehousing and data lake implementations, such as:
- AWS Redshift
- GCP BigQuery
- Azure SQL Data Warehouse
- Azure Synapse Analytics
- Azure Data Lake
Apart from that, there are specialized providers such as Snowflake and Databricks, and even open-source solutions like Apache Hadoop, Apache Storm, and OpenRefine, that provide robust big data solutions on any kind of hardware, including commodity hardware.
What is data analytics?
Data analytics is the process of analyzing data in order to extract meaningful information from a given data set. These techniques and methods are most often applied to big data, though they can certainly be applied to any data set.
(Learn more about data analysis vs data analytics.)
The primary goal of data analytics is to help individuals or organizations to make informed decisions based on patterns, behaviors, trends, preferences, or any type of meaningful data extracted from a collection of data.
For example, businesses can use analytics to identify their customer preferences, purchase habits, and market trends and then create strategies to address them and handle evolving market conditions. In a scientific sense, a medical research organization can collect data from medical trials and evaluate the effectiveness of drugs or treatments accurately by analyzing those research data.
Combining these analytics with data visualization techniques will help you get a clearer picture of the underlying data and present it more flexibly and purposefully.
Types of analytics
While there are many methods and techniques for data analytics, four types apply to any data set (a small sketch follows the list).
- Descriptive. This refers to understanding what has happened in the data set. As the starting point of any analytics process, descriptive analysis helps users understand the past.
- Diagnostic. The next step after descriptive analytics, diagnostic analytics builds on it to understand why something happened, letting users identify the root causes of past events, patterns, and so on.
- Predictive. As the name suggests, predictive analytics forecasts what will happen in the future. It combines the outputs of descriptive and diagnostic analytics and applies ML and AI techniques to predict future trends, patterns, and problems.
- Prescriptive. Prescriptive analytics takes the output of predictive analytics a step further by recommending how to act on those predictions. It can be considered the most important type of analytics, as it lets users anticipate future events and tailor strategies to handle them effectively.
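To illustrate how the four types build on one another, here is a minimal Python sketch. It requires Python 3.10+ for statistics.correlation and statistics.linear_regression, the sales and ad-spend figures are invented, and the simple linear trend stands in for the ML and AI models a real pipeline would use:

```python
import statistics

# Hypothetical monthly sales and advertising spend for one product line.
sales = [120, 132, 128, 141, 155, 149, 163, 170]
ad_spend = [10, 12, 11, 14, 16, 15, 18, 19]

# Descriptive: what happened in the past?
print("mean:", statistics.mean(sales), "max:", max(sales))

# Diagnostic: why did it happen? Check a candidate driver variable.
print("correlation with ad spend:", statistics.correlation(sales, ad_spend))

# Predictive: what will happen? Fit a trend and project one period ahead.
slope, intercept = statistics.linear_regression(range(len(sales)), sales)
forecast = slope * len(sales) + intercept
print("next month forecast:", round(forecast, 1))

# Prescriptive: what should we do about it? Encode a decision rule on the forecast.
if forecast > max(sales):
    print("recommendation: increase inventory ahead of expected demand")
```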
Accuracy of data analytics
The most important thing to remember is that the accuracy of analytics depends on the underlying data set. Inconsistencies or errors in the data will produce inefficient or outright incorrect analytics.
Any good analytical method accounts for factors like data purity, bias, and variance. Normalizing, purifying, and transforming raw data can help significantly in this respect.
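As a small sketch of that purifying, transforming, and normalizing step, assuming pandas is available and using invented records with typical inconsistencies:

```python
import pandas as pd

# Hypothetical raw records with the kinds of inconsistencies discussed above.
raw = pd.DataFrame({
    "country": ["US", "usa", "U.S.", "DE", None],
    "revenue": ["1,200", "950", None, "800", "1,050"],
})

# Purify: collapse the categorical labels to one canonical form.
raw["country"] = raw["country"].str.upper().str.replace(".", "", regex=False)
raw["country"] = raw["country"].replace({"USA": "US"})

# Transform: strip formatting and coerce to numeric; invalid values become NaN.
raw["revenue"] = pd.to_numeric(raw["revenue"].str.replace(",", ""), errors="coerce")

# Normalize: rescale to the 0-1 range so downstream methods are not skewed by units.
rev = raw["revenue"]
raw["revenue_scaled"] = (rev - rev.min()) / (rev.max() - rev.min())

print(raw.dropna())  # rows that survive cleaning are ready for analysis
```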
Data analytics tools & technologies
There are both open-source and commercial products for data analytics, ranging from simple tools such as the Analysis ToolPak that ships with Microsoft Excel, to commercial suites such as SAP BusinessObjects, to open-source tools such as Apache Spark.
Among cloud providers, Azure is widely regarded as a strong platform for data analytics. It provides a complete toolset to cater to almost any need with its Azure Synapse Analytics suite, the Apache Spark-based Databricks, HDInsight, Machine Learning, and more.
AWS and GCP also provide analytics tools such as Amazon QuickSight, Amazon Kinesis, and GCP Stream Analytics.
Additionally, specialized BI tools provide powerful analytics functionality with relatively simple configuration. Examples include Microsoft Power BI, SAS Business Intelligence, and Periscope Data. Even programming languages like Python or R can be used to create custom analytics scripts and visualizations for more targeted and advanced needs.
What is data science?
Now we have a clear understanding of big data and data analytics. So what exactly is data science?
Unlike the first two, data science cannot be limited to a single function or field. Data science is a multidisciplinary approach that extracts information from data by combining:
- Scientific methods
- Maths and statistics
- Advanced analytics
- ML and AI
- Deep learning
In data analytics, the primary focus is gaining meaningful insights from the underlying data. The scope of data science far exceeds this purpose: data science deals with everything from analyzing complex data, to creating new analytics algorithms and tools for data processing and purification, to building powerful, useful visualizations.
Data science tools & technologies
These include programming languages like R, Python, and Julia, which can be used to create new algorithms, ML models, and AI processes for big data platforms like Apache Spark and Apache Hadoop.
Data processing and purification tools such as WinPure and Data Ladder, data visualization tools such as Microsoft Power Platform, Google Data Studio, and Tableau, and visualization frameworks like Matplotlib and Plotly can also be considered data science tools.
As data science covers everything related to data, any tool or technology used in big data and data analytics can be utilized somewhere in the data science process.
Data is the future
Ultimately, big data, data analytics, and data science all help individuals and organizations tackle enormous data sets and extract valuable information from them. As the importance of data grows exponentially, these disciplines will become essential components of the technological landscape.
Source: https://www.bmc.com/blogs/big-data-vs-analytics/
One development in the post-Sept. 11 world that aids security personnel is the Homeland Security Presidential Directive 3 (HSPD 3). As this page details, HSPD 3 established the Homeland Security Advisory System that created “a common vocabulary, context, and structure for an ongoing national discussion about the nature of the threats that confront the homeland and the appropriate measures that should be taken in response.” It’s important that all security providers understand what HSPD 3 entails, so they can respond appropriately.
The Homeland Security Advisory System establishes five Threat Conditions, from Green (low) to Red (severe), that assess both the probability of a terrorist attack occurring and its potential gravity. These Threat Conditions, set by the Attorney General in consultation with the Assistant to the President for Homeland Security, carry with them a corresponding set of protective measures to reduce an organization's vulnerability to a terrorist attack or enhance its ability to respond.
The executive level of the Federal Government is the only entity required to monitor the Threat Conditions and enact the prescribed protective measures. However, it's advisable for other government entities as well as the private sector to establish similar measures and react accordingly when Threat Conditions change. ARK Systems can provide the technology to help organizations prevent or respond to terrorist acts. Contact us to learn more about the security options available and which ones would be appropriate for your organization.
Source: https://www.arksysinc.com/blog/hspd-3-explained/
In order to properly utilize static routes, it is vital to understand the role that they play. The following list provides situations in which overriding the default outbound load-balancing behavior is recommended and sometimes necessary.
- Destinations and/or end-point hosts require specific paths. For example, a secure website may open multiple TCP sessions yet require all packets to come from a specific IP address, or require the TCP session to remain “sticky” to the initial session characteristics.
- Bandwidth or latency requirements for an application or service make a particular WAN line more suitable as the primary connection.
- It is necessary to create a failover precedence. For example, methods can be defined to fail over to VPNs or Site-to-Site Line Bonding from a private MPLS or dedicated circuit.
Types of Static Routes
1) Fixed – traffic goes only over the WAN(s) specified; if those WAN(s) are down, the traffic is dropped.
2) Failover – use that WAN while it's up; if it goes down, fail over to another WAN.
3) Failback – similar to Failover, except that when the preferred WAN comes back up, traffic fails back to that WAN.
4) Hostname Failback – failback using a hostname (resolved via DNS) to set which WANs to fail over to, and the failback preference.
5) Priority Failback – failback with a higher priority; this supersedes all routing with the exception of VPNs (DANGEROUS).
6) VPN Static Route (only available for Static Routes) – forces outbound and inbound VPN traffic matching this static route to use the selected WAN.
Static Routes vs. Static Policy Routes
The difference here lies only in how the traffic that will follow the route is classified.
1) Static Routes route by source subnet (LAN or Next Hop Route) and destination subnet.
2) Static Policy Routes route by source subnet (again, LAN or Next Hop Route), destination subnet, and protocol (see the conceptual sketch below).
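The classification difference can be sketched conceptually in Python. This is not Ecessa's implementation or configuration syntax; the subnets, protocol labels, and WAN names are hypothetical, and the sketch shows only the first-match semantics described above:

```python
from ipaddress import ip_address, ip_network

# Hypothetical route table: (source subnet, destination subnet, protocol) -> WAN.
# A plain Static Route is the same idea with protocol=None (matches any protocol).
ROUTES = [
    (ip_network("10.0.1.0/24"), ip_network("0.0.0.0/0"), "tcp/25", "WAN2"),   # mail
    (ip_network("10.0.0.0/16"), ip_network("172.16.0.0/12"), None, "MPLS"),  # fixed
]

def select_wan(src, dst, proto, default="WAN1"):
    """Return the WAN of the first route whose classifiers all match."""
    for src_net, dst_net, route_proto, wan in ROUTES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net:
            if route_proto is None or route_proto == proto:
                return wan
    return default  # no route matched: fall back to normal load balancing

print(select_wan("10.0.1.5", "8.8.8.8", "tcp/25"))     # WAN2 (policy route)
print(select_wan("10.0.2.9", "172.16.4.1", "udp/53"))  # MPLS (static route)
```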
Static Route NAT
Static Routes generally NAT by default. If you want the traffic to go out with no NAT, the LAN or Next Hop Route (NHR) in question should have the correct WAN to Route Via selected in the LAN or NHR configuration sections.
Scenarios (Common Usage of Static Routes)
1) If traffic can only traverse your MPLS because it is routed, create a Fixed Static Route with your MPLS as the WAN line.
2) Mail traffic has to correspond to its rDNS resource record; in this case, create a Fixed Static Route that appropriately classifies mail traffic for the WAN lines corresponding to the rDNS record. A common way to classify mail traffic is to use a Static Policy Route with TCP and SMTP port 25.
3) VPNs may have a preferred WAN to use first; in this case, create a Basic Static Route with the Failback type selected and VPN checked. This ensures the VPN is forced to connect on the preferred WAN line. If that WAN line is down, the VPN is forced to use the line that was failed over to. When the preferred WAN comes back up, the route fails back and forces the VPN's path onto the preferred WAN.
Source: https://support.ecessa.com/hc/en-us/articles/200143916-Static-Routes-Versions-10-5-and-earlier-
Frauchiger and Renner’s New Thought Experiment Shaking the Foundations of Quantum Physics
(QuantaMagazine) A new thought experiment is confronting the theoretical assumptions of quantum mechanics head-on and shaking the foundations of quantum physics. The experiment is decidedly strange. For example, it requires making measurements that can erase any memory of an event that was just observed. While this isn’t possible with humans, quantum computers could be used to carry out this weird experiment and potentially discriminate between the different interpretations of quantum physics.
The experiment, designed by Daniela Frauchiger and Renato Renner, of the Swiss Federal Institute of Technology Zurich, involves a set of assumptions that on the face of it seem entirely reasonable. But the experiment leads to contradictions, suggesting that at least one of the assumptions is wrong. The choice of which assumption to give up has implications for our understanding of the quantum world and points to the possibility that quantum mechanics is not a universal theory, and so cannot be applied to complex systems such as humans.
The three assumptions are:
1) Quantum theory is universal;
2) Quantum theory is consistent;
3) Opposite facts cannot both be true.
Source: https://www.insidequantumtechnology.com/news-archive/frauchigers-renatos-new-thought-experiment-shaking-foundations-quantum-physics/
A major new international research programme is responding to the overwhelming demand of internet traffic by developing ubiquitous wireless data coverage with unprecedented speed at millimetre waves.
For the first time in the Internet’s history, the data used by tablets and smartphones now exceeds that of desktops. Emerging technologies and entertainment such as telemedicine, the Internet of Things (IoT), 4K video streaming, cloud gaming, social networks, driverless cars, augmented reality and many other unpredictable applications will need zettabytes (thousands of billions of billions of bytes) of wireless data.
Smartphones will continue to work at microwave frequencies for many years because of microwaves’ ability to pass through barriers. But because of limits on the amount of data that can be transmitted by microwaves, the only way to provide data with very fast download speeds is to cover urban areas with dense grids of micro, nano and pico ‘cells’ at microwave frequencies, each serving a small number of users.
However, manufacturers and operators have not yet solved how to feed a huge amount of data to a new maze of cells. Fibre is too expensive and difficult, if not impossible, to deploy in many urban areas due to city council permits or disruption.
A desirable solution is a wireless layer that can provide data at the level of tens of gigabit per second per kilometre square. It also needs to be flexible and come at a low cost.
Only the millimetre wave frequencies, 30–300 GHz, with their multi-GHz bandwidths, could support tens of gigabits per second of wireless data rate. Unfortunately, rain can weaken or block data transmission, and other technological limits have so far prevented the full exploitation of this portion of the spectrum.
The €2.9million European Union’s Horizon 2020 ULTRAWAVE project, led by engineers at Lancaster University, aims, for the first time, to build technologies able to exploit the whole millimetre wave spectrum beyond 100 GHz.
The ULTRAWAVE concept is to create an ultra-capacity layer, aiming to achieve the 100 gigabit of data per second threshold, which is also flexible and easy to deploy. This layer will be able to feed data to hundreds of small and pico cells, regardless of the density of mobile devices in each cell. This would open scenarios for new network paradigms and architectures towards fully implementing 5G.
The ULTRAWAVE ultra capacity layer requires significant transmission power to cover wide areas overcoming the high attenuation at millimetre waves. This will be achieved by the convergence of three main technologies, vacuum electronics, solid-state electronics and photonics, in a unique wireless system, enabled by transmission power at multi Watt level. These power levels can only be generated through novel millimetre wave traveling wave tubes.
Professor Claudio Paoloni, Head of Engineering Department at Lancaster University and Coordinator of ULTRAWAVE, said: “When speeds of wireless networks equal fibre, billions of new rapid connections will help 5G become a reality. It is exciting to think that the EU Horizon 2020 ULTRAWAVE project could be a major milestone towards solving one of the main obstacles to future 5G networks, which is the ubiquitous wireless distribution of fibre-level high data rates.
“The huge growth in mobile devices and wireless data usage is putting an incredible strain on our existing wireless communication networks. Imagine crowded areas, such as London’s Oxford Street, with tens of thousands of smartphone users per kilometre that wish to create, and receive content, with unlimited speed. To meet this demand, ULTRAWAVE will create European state of the art technologies for the new generation of wireless networks.”
The ULTRAWAVE consortium includes five top Academic institutions and three high technology SMEs in millimetre wave and wireless technology, from five European countries: Lancaster University in UK, Fibernova and Universitat Politecnica de Valencia in Spain, Ferdinand Braun Institute, Goethe University of Frankfurt and HF Systems GmbH & Co. KG in Germany, OMMIC in France and University of Rome Tor Vergata in Italy.
The ULTRAWAVE project started on the 1st September 2017 and will be presented to the public by the Kickoff Workshop at Lancaster University on the 14th September 2017.
More information is available by visiting www.ultrawave2020.eu
Source: https://daspedia.com/archives/5718
The medical device remains a crucial component in improving the quality of life. Key players in the medical technology arena are going on the AI track to invent cutting-edge devices with high precision and automation. Expectations are high as the future of healthcare delivery is poised for steady growth with AI onboard.
Picture a smart sensor device that estimates the possibility of a heart attack, or an imaging system that uses algorithms to spot a brain tumor: these are real-world examples of AI medical technologies in action. Application design teams integrating AI technologies into medical devices made them a reality. A medical device is any device created for medical use; its use spans the prevention, diagnosis, and treatment of specific medical conditions.
From the common stethoscope to the advanced cardiac pacemaker, about 2 million different medical devices are classified into over 7,000 generic groups depending on their uses. And for people living with either acute or chronic disease conditions, medical intervention in the form of a medical device can be the ultimate lifesaver. For example, heart disease is the leading cause of death worldwide, with about 3 million people worldwide living with an implantable pacemaker. As heart disease is becoming more prominent even among low and middle-income nations, many people will need to get one to improve their cardiac function.
With a high consumer preference for wearable devices, an increasing geriatric population, and a high number of patients in need of implantable technologies, demand for medical devices is expected to soar. As a result, the global medical device industry is projected to grow from its current market value of US$455 billion to US$657 billion by the year 2028, according to reports by Fortune Business Insights. Device failure is a major stumbling block to the growth of disruptive medical technologies, and by integrating AI into the device development process, failure rates can be reduced to a minimum.
MEDICAL DEVICE CLASSIFICATION
A medical device must have an intended medical use, and be able to execute it, before it can be called one. Understanding how a medical device is classified is important because the classification determines its development process. The FDA classifies devices based on the risk they pose to consumer health.
Class 1: Medical devices in this category pose the lowest threat to consumers. Examples include surgical tools, dental floss, and oxygen masks. Such devices are subject to general controls.
Class 2: These devices pose more threat to consumers than Class 1 devices, so they are subject to special controls in addition to general controls. Special controls entail that the device meets specific testing requirements, performance standards, and labeling requirements. Examples are infusion pumps, powered wheelchairs, and X-ray machines.
Class 3: Devices in this category are essential to sustaining human life, and hence they are subject to premarket approval as well as general controls. Examples are breast implants, blood bank software, pacemakers, and life support machines.
THE MEDICAL DEVICE DEVELOPMENT PROCESS
The process of developing a medical device is not for the faint-hearted. It begins with the device discovery and ideation stage, where medical researchers spot an unmet medical need and then create an idea to birth the new device. This is followed by creating a document called a ‘proof of concept’ to determine whether the concept will fly. Medical researchers, alongside biomedical engineers, then proceed to build a prototype version that is not for human use. The prototype is tested and refined under controlled laboratory conditions; once it shows acceptably low potential risk, it has passed the preclinical research (prototype) phase.
The third stage is the pathway to approval. Remember the classification of devices? It plays a major role here: the device is assigned to one of the three regulatory classes based on the risk it poses. The greater the risk to consumer health, the higher the classification and the stricter the regulatory control the device is subject to.
For each medical device class, the regulatory controls include two assessments. First, substantial equivalence: showing that the device is as safe and effective as a legally marketed device not subject to premarket approval. Second, sufficient scientific evidence that the health benefits of using the device far outweigh the risks, and that it will improve the quality of life of a large target population. The regulatory team fact-checks all information and decides whether or not to approve the device. Approval is followed by post-market safety monitoring to check for the emergence of new safety concerns.
HOW AI EXPELS LAX PROCESSES IN THE MEDICAL DEVICE DEVELOPMENT FRAMEWORK
Despite passing regulatory processes, medical devices still fail due to regulatory loopholes and lax oversight in the design process. In the last decade, medical device failures have led to over 80,000 deaths, 2 million injuries, and billions of dollars in lawsuits. But here’s how AI can help.
Reduced Failure Rates
Incorporating AI systems into the development process of a medical device makes it possible to predict its performance and failure rate before it gets to market. This is achievable by utilizing data sets from potential consumers and carrying out scenario analyses. In addition, analyzing the data and performance records of medical devices recalled due to failure can reveal the underlying causes of what went wrong. Machine learning can also detect other factors interfering with the performance of such devices. Medical practitioners and hospitalists are then better informed about interfering factors, suitable environmental conditions, and unique handling guidelines that ensure optimal performance.
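As a rough sketch of that predictive step, here is a hedged scikit-learn example. The telemetry is synthetic, the features and failure rule are invented for illustration, and a real program would train on actual consumer data sets and recall records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic telemetry for 1,000 devices: operating temperature (deg C),
# humidity (%), and months in service. Labels mark devices that later failed.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(37, 5, 1000),    # temperature
    rng.uniform(20, 90, 1000),  # humidity
    rng.integers(1, 60, 1000),  # months deployed
])
# Toy ground truth: hot, humid, long-deployed devices fail more often.
risk = 0.05 * (X[:, 0] - 37) + 0.02 * (X[:, 1] - 50) + 0.03 * X[:, 2]
y = (risk + rng.normal(0, 0.5, 1000) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Flag a device running at 48 deg C, 85% humidity, 40 months in service.
print("failure probability:", model.predict_proba([[48, 85, 40]])[0, 1])
```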
Faster Device Manufacturing Time and Less Cost
Under normal conditions, a medical device can take 3 to 7 years to reach the market, costing medical device companies an average of US$31 million to US$94 million depending on the approval pathway. Machine learning can help medical researchers speed up the ideation process by identifying unmet medical needs and suggesting designs that are likely to pass approval. This reduces waiting times and costs, and patients in dire need of such a device can receive it in good time, before their medical condition becomes terminal.
REAL-WORLD DATA AND HOW IT IS ADVANCING MEDICAL DEVICE AI
To upscale the precision of diagnostic medical devices and reduce malfunction rates, biomedical engineers are now incorporating clear-cut AI technologies into them. The availability of data and big data analytics in healthcare is easing the process. The greatest benefit of embedding AI software in a medical device is the ability to leverage real-world data (RWD) to improve its performance while enhancing consumer health. RWD sources such as wearable devices, the Internet of Medical Things, and medical tricorders are fast becoming the consumer’s first point of access to healthcare. Pratik Agrawal, Director, Data Science and Informatics Innovation at Medtronic, described these technologies as “empowering patients to take care of themselves.” Wearable devices can save and transmit data about a patient’s health status to health care practitioners, thus reducing hospital wait times. They can also quickly spot underlying medical conditions by flagging deviations from standard vital signs.
As AI-powered medical devices collect real-time data, there is enough data to monitor post-market safety and any adverse events that can affect regulatory decisions. This is possible via RWD and predictive analysis, which spots maintenance issues in medical devices before breakdowns or catastrophic accidents occur.
TOP MEDTECH BRANDS USING AI
The biggest players in the medical device industry are using AI and machine learning to create brilliant solutions for better health outcomes. Philips uses precision diagnostics, powerful imaging technologies, workflow informatics, and longitudinal data with insights from AI to diagnose and treat oncology patients. Not keeping itself confined to cancer care, Philips signed a merger agreement to acquire BioTelemetry. BioTelemetry focuses on AI-based data analysis, cardiac diagnostics, and wearable heart monitors.
Medtronic decided to focus on incorporating AI into the surgical and robotics aspect of orthopedic care by acquiring French AI-enabled spinal surgery company Medicrea. Medtronic plans to take advantage of Medicrea’s UNiD ASI – a pre-surgical platform that uses predictive modeling algorithms to measure and digitally reconstruct spines. This will help orthopedic surgeons view surgical permutations and identify potential outcomes and challenges pre-surgery.
For sure, AI has earned its spot at the medical device development table, even though it’s still a work in progress. With major roadblocks and lapses in the development framework removed, biomedical engineers and research experts can focus on novel medical technologies. This will reduce the incidence of lawsuits faced by medical device companies, while their consumers enjoy better health outcomes.
Source: https://dataconomy.com/2021/06/ai-in-medical-device-development/
You’ve probably heard this before: “Have you tried turning your computer off and on again?” It seems IT support has been offering this advice since the dawn of the computer. Considering the complex inner workings of today’s technology, it may seem hard to believe that such a simple step could actually fix the problem—and for those of us who don’t fall under the classification of tech wizard, being prompted to restart may make us feel that our concerns are being dismissed or brushed off by IT support. But guess what? Restarting has been proven to fix a multitude of issues that many computers experience.
So, what exactly happens when you reboot?
Flushes RAM and speeds-up servers
Your computer’s random-access memory (RAM) is the system in charge of short-term tasks, so it can quickly become a repository of random, temporary or unrelated data. When the computer is powered off, the RAM is flushed, meaning it rids itself of excess data. With the clutter gone and its memory cleared, your computer can run faster.
Creates a clean slate for the operating system
Do you know how many programs you open on a given day? Probably enough for your operating system to get backed up and bogged down. Shutting down your computer for a little bit allows the operating system to wipe those built-up tasks, clean itself up and ultimately run more smoothly overall.
Prevents memory leaks
It’s common for programs on your computer to “borrow” memory from RAM, and if they’re not up to date, they may “forget” to return that memory when they’re finished using it, which leaves your computer with a memory leak that can slow your operations. The good news: restarting your computer reclaims that leaked memory, and restarting regularly keeps leaks from building up in the first place.
Minimizes frozen screens
A frozen screen lets you know your computer is glitching (i.e., its processes are overloaded), which can happen when a user fails to restart regularly. Restarting allows the system’s processor to cool down and take a break, thereby reducing the chance of a glitch.
Before you waste time scouring the internet for a solution to your latest tech issue, try rebooting your computer! Even if this simple fix doesn’t work, you’ve ruled out a big chunk of potential problems—and restarting is probably the first thing you’ll be asked to do when you get in touch with IT support anyway, so you may as well save yourself some time. Next time you run into trouble, remember that the turn-it-off-and-on-again method is well known for a reason: It can fix a lot of problems!
Source: https://integrisit.com/the-real-reasons-it-support-tells-you-to-restart-your-computer/
Today just about every organization has virtualized servers on their network, with the benefit of cost, efficient use of hardware, and improved recovery. Software-defined networking (SDN) brought virtualizing the network a step closer.
We’ve long had some kind of network virtualization; VLANs, for example, virtualized layer-2 segments. But SDN provided a complete abstraction from the underlying network, separating the control plane from the data plane. SD-WAN brings SDN principles to the wide area network (WAN), allowing enterprises to abstract all of the individual data services connecting their locations into a single, intelligent WAN.
What preceded SD-WAN, and why is SD-WAN the next step in WAN technology? Let’s find out.
The Early Days – PPP and Frame Relay
In the 1980s, connecting LANs in different locations meant using point-to-point (PPP) leased lines. These were typically DS0 (56 Kbps) connections, and later the faster, more expensive T1/E1 or T3/E3 connections, which could also be purchased as fractional T1 or T3 lines at a much lower cost.
Frame Relay service was introduced in the early 1990s. The same connections used with PPP could now connect to a “cloud” from a service provider, so it was no longer necessary to purchase and manage individual links between each pair of locations. Compared to PPP, Frame Relay reduced monthly WAN costs, with far fewer physical connections to manage. It allowed the expensive last-mile link bandwidth to be shared across multiple remote connections and used less expensive router hardware than PPP. The OpEx and CapEx advantages of Frame Relay created an explosion in the growth of corporate WANs around the globe; within 5 years of its introduction, even the most conservative enterprises had migrated to Frame Relay.
MPLS Overtakes PPP
In the 2000s, MPLS became the successor to Frame Relay and was designed as an IP-based solution for carriers to converge voice, video and data on the same network. Today, MPLS is the most common way enterprise WANs are deployed. Built on connectionless IP, in contrast to connection-oriented Frame Relay, MPLS gained an advantage with reduced latency in live voice calls and improved QoS.
The Next WAN Innovation is Born
In April 2013, the ONUG board convened its bi-annual meeting at UBS headquarters, where use cases were shared that required solutions suppliers were not yet providing or addressing. The ONUG board invited a handful of guests to provide input and feedback, including Jim Kyriannis, Program Director for Technology Architecture at New York University, who contributed the “Branch Office Has Multiple Paths to Headquarters” use case.
At the following ONUG Conference, hosted by JPMorgan Chase, the use case was presented again under a new title: SD-WAN. The ONUG community was asked to vote on nine use cases at that meeting, and it was Jim’s SD-WAN use case that earned the vast majority of the community’s vote. The ONUG SD-WAN Working Group was then launched and collaborated with 17 vendors on proofs of concept, including discussions about cost, risks, benefits, and value.
MPLS Pros and Cons
As MPLS adoption grew, more organizations came to understand that MPLS had economic and technological advantages over Frame Relay, causing a rapid migration to MPLS. Today, a similar shift is occurring as enterprises look to replace MPLS with SD-WAN-based networks. What has caused this newest networking technology shift? What are the key differences between MPLS and SD-WAN motivating organizations to look for another solution?
|MPLS Pros||MPLS Cons|
|Dependable, with SLAs guaranteeing latency, packet delivery, and availability||Considerably more expensive than direct Internet access|
|Outages resolved by the provider within a stated time, or penalties are paid||Provisioning can take 3-6 months depending on location|
Most businesses rely on MPLS services for their dependability, with SLAs that guarantee latency, packet delivery, and availability. In the case of an outage, the MPLS provider resolves the issue within a stated period of time or pays the requisite penalties. However, MPLS is not budget-friendly in comparison to Internet services: according to TeleGeography, in Q1 2017 median 10 Mbps direct Internet access (DIA) services were potentially one-third less expensive than MPLS. The time it takes to order and install MPLS circuits is another factor in today’s fast-paced environment; depending on location, provisioning can take anywhere from 3 to 6 months.
Making the Move From MPLS to SD-WAN
With growing bandwidth requirements and constrained network budgets, SD-WAN resolves the cost and scalability issues that MPLS presents without sacrificing quality of service. SD-WAN offers the following advantages:
- handles a variety of connections and dynamically routes traffic over the best available transport, whether that’s MPLS, cable, xDSL, or 4G/LTE.
- provides redundancy and more capacity using lower cost links with multiple connections at each location.
- measures the real-time transport quality (latency and packet loss) of each connection and applies Policy-based Routing (PbR) to route application-specific traffic over the most appropriate transport.
Bottom line: the time from installation to delivery is far faster than MPLS. Some SD-WAN solutions offer zero-touch provisioning, which allows the end-point to configure its connection to the WAN using the available mix of services at each location; a site can be brought online quickly without requiring a networking expert on-site for the install.
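A conceptual Python sketch of that measurement-plus-policy-based-routing step follows; it is not any vendor's implementation, and the link names, probe figures, and thresholds are invented:

```python
# Hypothetical probe results, refreshed continuously by an SD-WAN edge device.
links = {
    "mpls":  {"latency_ms": 18, "loss_pct": 0.0, "up": True},
    "cable": {"latency_ms": 35, "loss_pct": 0.4, "up": True},
    "lte":   {"latency_ms": 60, "loss_pct": 1.2, "up": True},
}

# Per-application policy: limits the chosen transport must satisfy.
policies = {
    "voip":   {"max_latency_ms": 40, "max_loss_pct": 0.5},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 2.0},
}

def pick_transport(app):
    """Return the lowest-latency healthy link that meets the app's policy."""
    p = policies[app]
    ok = [(s["latency_ms"], name) for name, s in links.items()
          if s["up"] and s["latency_ms"] <= p["max_latency_ms"]
          and s["loss_pct"] <= p["max_loss_pct"]]
    return min(ok)[1] if ok else None

print(pick_transport("voip"))  # "mpls": the best link that meets the policy
links["mpls"]["up"] = False    # simulate an MPLS outage...
print(pick_transport("voip"))  # "cable": traffic is dynamically re-routed
```

Run continuously against live probe data, the same selection loop is what lets an SD-WAN edge steer each application over the most appropriate transport and route around outages.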
Technologies are born from the necessity to solve challenges that arise over time. SD-WAN arose from the changing enterprise environment and the need to adapt WAN infrastructure to meet new demands while staying within budget.
Projections from industry experts agree that the migration from MPLS to SD-WAN has begun and is continuing to grow rapidly. Andrew Lerner, Vice President of Research at Gartner, predicts “By the end of 2019, 30% of enterprises will have deployed SD-WAN technology in their branches, up from less than 1% today.” Another indicator is revenue from SD-WAN vendors is growing at 59% annually, Gartner estimates, and it’s expected to become a $1.3 billion market by 2020.
SD-WAN solution providers such as Cato Networks can help organizations make the transition and meet the challenges of today’s WAN environments. Subscribe to Cato Networks blog to find out the latest developments in SD-WAN technologies.
Source: https://www.catonetworks.com/blog/a-history-of-sd-wan/
Let’s discuss container management systems. So far, we’ve explored the history of containers, and talked about what a container actually is from an ops perspective. In the last post, we discussed how devs use containers: to build applications. Let’s delve deeper into how they actually plan to do that.
As a reminder, this blog series is exploring containers from an operations point of view. Also, I work at VMware.
Containers hold the building blocks for applications
When someone says “containers” and they aren’t talking about management, they most likely mean a container image. According to TechTarget, a container image is:
A container image is an unchangeable, static file that includes executable code so it can run an isolated process on … IT infrastructure. The image is comprised of system libraries, system tools and other platforms settings a software program needs to run on a containerization platform such as Docker or CoreOS Rkt. The image shares the OS kernel of its host machine.
One reason containers became so popular with developers is that you can encapsulate an approved version of an environment in a container image and have all of your developers work from that image. This way, developers working on new applications use the exact parameters their code will run on in production, which should make it seamless to go from dev to test to prod. No more “well, it worked in dev…” or “it worked on my system…”, because they will be coding in a replica of the approved environment.
What is a container management system?
Now that we’ve reviewed what a container is, it should be intuitive that containers are for developers. Containers make their job of creating and improving applications much easier: developers can work on the same systems as the ones that will be in production, so the transition from test/dev environments should be seamless.
What could go wrong? Remember VM sprawl? Containers are easy to create and deploy, so you need a system to keep track of them. How do you make sure containers are being created in an image that is production approved? How can you manage security? How can you make sure you can easily move these containers between servers, and even the cloud? At a very basic level, this is what container management systems can do for you.
Additionally, containerized apps may separate the individual components of an application into several different container images. To bring these to life, there must be a coordinated way to start each container in the correct order, as the sketch below illustrates.
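As one simplified example of what managing containers programmatically can look like, here is a sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon is running, and the image names and registry URL are placeholders rather than real approved images:

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Bring an app's components up in dependency order: database first, then API.
db = client.containers.run(
    "postgres:15", name="app-db", detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
)
api = client.containers.run(
    "registry.example.com/approved/api:1.4", name="app-api", detach=True,
)

# Basic sprawl control: enumerate what is actually running on this host...
for c in client.containers.list():
    print(c.name, c.image.tags, c.status)

# ...and flag containers built from images outside the approved registry.
unapproved = [c.name for c in client.containers.list()
              if not any(t.startswith("registry.example.com/approved/")
                         for t in c.image.tags)]
print("needs review:", unapproved)
```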
What are common container management applications?
Popular container management applications are Docker Platform, Kubernetes, IBM Cloud Foundry, and Google Kubernetes Engine (GKE). This is the short list at the time of this writing; expect it to expand and contract, as container adoption is just beginning to pick up steam in our industry. This Server Density article [written in 2017] gives a great operations-perspective comparison of current tools.
What do you look for in a container management application? First of all, will ops even own this decision? This may be a tool your development team chooses. Generally, a container management application will need to help you with container images as well as container orchestration. It will probably help with securing containers. It should also be able to manage containers on premises or in the cloud. But as this article points out [also written in 2017], the features your team lands on will depend on the development and operational requirements of your organization, as well as available technical skills.
Why do ops people need to understand these apps?
While developers may choose which container runtime and management applications to use, operations will need to nail these systems to the ground. In other words, ops will manage the hardware, networks, and storage that host containers and their management systems.
Never forget, the ops in devops is operations. This is our chance to really support the developers in our orgs, by understanding how they need to build and support apps, and then building, managing, and securing the architecture to support it, or managing how it is stretched to the cloud.
Are you currently supporting a containerized environment? What sort of management apps are being used? Share your experiences with us in the comments.
Source: https://24x7itconnection.com/2019/07/09/what-are-container-management-systems/
You rarely need to reboot a Linux server; Linux systems can run for weeks, even years, without interruption. Still, I am explaining the most popular commands to reboot a Linux server or system.
If you are a Windows user, you already know the simple way to reboot your system using the graphical interface.
Linux is not a step behind the Windows operating system; its developers have built Linux with a beautiful graphical interface too.
So you can use the GUI to shut down and reboot Linux without the command line. Interested? Then read the full article.
There are three popular and useful commands to reboot a Linux system: reboot, shutdown, and init.
You can use any of these three commands to reboot the system.
- Reboot command to reboot Linux
- Reboot command options
- Other commands to reboot Linux system
- Command to reboot Linux schedule time
- Canceling Scheduled Reboot Linux
- Checking your reboot logs
Reboot Linux Command Table
|$reboot -h||Display usage of the reboot command|
|$sudo reboot||Reboot the system|
|$sudo shutdown -r now||Reboot the system instantly|
|$init 6||Reboot the system|
|$sudo shutdown -r [TIME] [MESSAGE]||Schedule a reboot at TIME with MESSAGE|
|$sudo shutdown -c [MESSAGE]||Cancel a scheduled reboot|
|$last reboot||Display reboot logs|
Reboot command to reboot Linux
Are you looking to just restart the Linux system without going into any details? The reboot command is very helpful in this situation. It is the easiest and quickest way to reboot a Linux system.
The reboot command is used to restart or reboot the system. In Linux system administration, you need to restart the server only in rare conditions, for example after the completion of a network update or another major update.
You need to reboot the Linux system so that the changes made by the update take effect on the server.
For example, if a user is re-compiling the server’s kernel as part of more advanced server administration, the machine must be restarted to complete the compilation and to run the new, updated kernel version on the server.
Updating the server’s memory, IP allocation, or NIC configuration are key tasks that require a restart before they can be successfully applied.
Most Linux system administrators do not work directly on the server; they access their servers via a shell over SSH.
So if you are an administrator who wants to perform administrative activities, server management, and monitoring, you first log in to the server using SSH.
If you are restarting the system through an SSH session, you must know the basic commands to restart the server from the shell.
You can help yourself with any one of the following commands:
$sudo shutdown -r now
If you are running the system as a normal desktop user, you may be able to use this command without sudo.
On many desktop systems, a normal user can reboot without any problem.
Reboot command options
Sometimes the system is blocked from shutting down, perhaps due to a runaway process, and a single reboot command will not work.
In that case, you can use the --force flag to make the system shut down anyway, because this option skips the normal shutdown process.
--force is just one option used to force the system to shut down; there are several other options that can be used with the reboot command.
You can use the man command, or reboot --help, to display all available options.
- --help : Print a short help text and exit.
- --halt : Halt the machine, regardless of which one of the three commands is invoked.
- -p, --poweroff : Power off the machine, regardless of which one of the three commands is invoked.
- --reboot : Reboot the machine, regardless of which one of the three commands is invoked.
- -f, --force : Force immediate halt, power-off, or reboot. When specified once, this results in an immediate but clean shutdown by the system manager. When specified twice, it results in an immediate shutdown without contacting the system manager. See the description of --force in systemctl(1) for more details.
- -w, --wtmp-only : Only write the wtmp shutdown entry; do not actually halt, power off, or reboot.
Other commands to reboot Linux system
The reboot command is popular, but it is not the only command that can reboot the Linux system; here are some other commands that can complete this task.
You can use the shutdown and init commands as well.
I have described both commands separately below, so you can understand them well.
Shutdown command to reboot Linux system
The shutdown command is basically used to power off the system or shut down the machine, but it provides several options to control exactly what you want.
For example, if you want to reboot the system by using the shutdown command, use the -r option to reboot the Linux machine.
Use the shutdown command followed by "-r now" to reboot the system immediately. The syntax is as below:
$shutdown -r now
$sudo shutdown -r now
#shutdown -r now
Init Command to reboot Linux
The term init is taken from the word initialize and is widely used to initialize/start different processes in a Linux machine.
So you can use this command followed by runlevel 6. As you may know, a Linux machine understands 7 runlevels.
Different distributions can assign each mode uniquely, but generally, 0 initiates a halt state, and 6 initiates a reboot (the numbers in between denote states such as single-user mode, multi-user mode, a GUI prompt, and a text prompt).
So you will find that runlevel 6 is the one used for rebooting a Linux server.
The syntax for this is mentioned below:
$sudo init 6
Command to reboot Linux schedule time
As you have seen, the simple reboot command has limited uses. If you want to schedule a reboot/restart of your system, the reboot command cannot do it.
In this case, the shutdown command is used instead of the reboot command to fulfill more advanced reboot and shutdown requirements. One of those situations is a scheduled restart.
The following syntax is used to reboot your system at the time defined by TIME:
$sudo shutdown -r [TIME] [MESSAGE]
Here TIME accepts various formats. The simplest one is "now", covered in the previous section, which restarts the system immediately.
Other valid formats are +m, where m is the number of minutes to wait until the restart, and HH:MM, which specifies the time on a 24-hour clock.
If you want to reboot after 10 minutes, use the following command:
$sudo shutdown -r +10
If you want to schedule a restart at a specific time, use the following syntax:
$sudo shutdown -r 14:20
Canceling Scheduled Reboot Linux
If you no longer want to restart/shut down the system at the scheduled time, you can run another shutdown command with the -c option and broadcast a message to users about the cancellation of the scheduled restart.
The basic syntax for canceling a restart/shutdown is as follows:
$sudo shutdown -c [MESSAGE]
In other words, a scheduled reboot can be canceled by the system administrator simply by issuing another shutdown command with the -c option and an optional message argument.
$sudo shutdown -c "Our custom example of canceling a shutdown scheduled for a specific time"
Checking your reboot logs
If you are working as a system administrator, you must know how to check reboot, login, and logout records. You can find all login and logout records inside the file /var/log/wtmp.
Note that wtmp is a binary file, so you cannot read it directly with the cat command.
Instead, the command used to access the reboot log is last: use the last command followed by reboot to display all reboot records.
Below you can find the last command usage and the output it produces for the restart log.
[vijay@localhost ~]$ last reboot reboot system boot 4.18.0-147.5.1.e Tue Apr 14 19:13 still running reboot system boot 4.18.0-147.5.1.e Tue Apr 7 10:22 - 13:02 (1+02:40) reboot system boot 4.18.0-147.5.1.e Mon Apr 6 10:38 - 12:31 (01:52) reboot system boot 4.18.0-147.5.1.e Sun Apr 5 10:01 - 10:40 (00:39) reboot system boot 4.18.0-147.5.1.e Sat Apr 4 18:18 - 19:59 (01:41) reboot system boot 4.18.0-147.5.1.e Thu Apr 2 14:06 - 16:58 (1+02:51) reboot system boot 4.18.0-147.5.1.e Tue Mar 31 17:27 - 17:59 (00:31) reboot system boot 4.18.0-147.5.1.e Mon Mar 30 17:04 - 18:51 (01:46) reboot system boot 4.18.0-147.5.1.e Sun Mar 29 12:02 - 20:48 (08:46) reboot system boot 4.18.0-147.5.1.e Sun Mar 29 11:55 - 12:01 (00:06) reboot system boot 4.18.0-147.5.1.e Sat Mar 28 15:11 - 12:01 (20:49) reboot system boot 4.18.0-147.el8.x Sat Mar 28 13:40 - 12:01 (22:20) reboot system boot 4.18.0-147.el8.x Fri Mar 27 17:35 - 12:01 (1+18:26) reboot system boot 4.18.0-147.el8.x Fri Mar 27 17:22 - 17:34 (00:12) reboot system boot 4.18.0-147.el8.x Mon Mar 23 18:21 - 17:34 (3+23:13) reboot system boot 4.18.0-147.el8.x Sat Mar 21 11:22 - 11:27 (00:04) reboot system boot 4.18.0-147.el8.x Sat Mar 21 11:14 - 11:22 (00:07) wtmp begins Sat Mar 21 11:14:56 2020 [vijay@localhost ~]$
You have learned about the reboot command in Linux, which is used to reboot Linux and Unix systems and servers. I have also described other commands, including shutdown and init, that can reboot the system.
You can also reboot a Linux system remotely over SSH.
If you have any questions or feedback, feel free to leave a comment.
|
<urn:uuid:50413d25-987a-448c-ae65-674bfe184da3>
|
CC-MAIN-2022-40
|
https://www.cyberpratibha.com/commands-to-reboot-linux/?amp=1
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00236.warc.gz
|
en
| 0.813328 | 2,415 | 2.71875 | 3 |
11 March 2022
Reading Time: 8 mins
IoT is a gigantic network of connected "things" – unique hardware devices that transmit critical operational, transactional or sensor data. Anything can be connected, from your toothbrush or a vending machine to wearable technology in healthcare. IoT can be applied to any industry and is enabling us to lead smarter lives.
According to Juniper Research, the number of IoT devices in 2021 will reach 46 billion. For any IoT device to deliver on its promise, it must have access to a secure, reliable connection.
Are you looking to produce and deploy IoT devices but not sure which connectivity solution you need?
In this guide, we're going to explore what cellular IoT connectivity is, how the main connectivity options compare, their advantages and disadvantages, and how challenges such as permanent roaming are solved.
After reading, you'll know whether cellular connectivity is the best route for your IoT initiative and what to look for in your IoT connectivity provider.
Cellular networks are based on open, global industry standards, use licensed spectrum, and are always operated by wireless network providers.
Cellular connectivity allows information to be sent back and forth using mobile networks and includes services like 2G, 3G, 4G, 5G, LTE Cat. 0, LTE Cat M, NB-IoT, 4G LTE, and LTE Advanced.
The GSMA has introduced two additional LTE standards, NB-IoT and LTE-M which are primarily designed for LPWA use cases but are still in the initial stages of rollout.
Cellular connectivity is utilised by IoT devices in all industries from smart cities, smart vending machines, telehealth, energy, point of sale and payment processing, to logistics and supply chains.
Commercial IoT (i.e., not home connected devices) is powered by wireless communications and cellular connectivity which allows multiple devices to communicate with each other at any one time. Sensors collect and communicate information and respond to changes in the device’s environment – all enabling in-depth data analysis and the ability to make better business decisions.
Here’s a quick overview and comparison of the different connectivity options.
In brief: unlicensed sub-1 GHz LPWAN technologies are a natural fit for Industrial IoT (IIoT); short-range technologies typically reach up to 50m, with some reaching up to 100m; and licensed cellular LPWAN (LTE-M and NB-IoT) trades speed for reach and battery life, delivering up to 1 Mbps on LTE-M and up to around 20 Kbps on NB-IoT. Data rate and power consumption vary accordingly across the options.
GSM is the second-generation (2G) mobile telephone system, and the GSM family extends to 3G. Primarily designed for voice, these standards also support SMS and GPRS data. They are proven, widely adopted standards, with hardware available at a low cost.
Many mobile operators are in the process of shutting down 2G and 3G networks in favour of newer technologies. Before choosing a 2G or 3G service, it’s essential to make sure the service is available for the timespan and locations required.
LTE is the 4th generation (4G) mobile network system. Introduced in 2012, 4G is primarily designed for better scalability and wireless broadband. Although its range is shorter than GSM's, it provides much higher data rates – comparable with Wi-Fi.
The latest standard is 5G.
5G has bandwidths of up to 1 Gbps, and enables high-speed communication with high capacities and very low latency. It can be used in mission-critical applications, such as autonomous vehicles, as well as applications such as VR, AR, gaming, and any use cases requiring real-time response.
The parallel operation of 4G and 5G promises greater capacity and faster network speeds in the future.
Low-power wide-area networks (LPWANs), also written as LPWA networks, are types of wireless wide-area networks. Their purpose is to facilitate the transmission of data between connected devices over long distances at low bit rates. LPWAN technology standards include LTE-M and NB-IoT – both of which enable battery-powered devices to operate reliably for their entire lifecycle in the field, often 10+ years.
Our partners at Thales say that ‘LPWA Network (LPWAN) technologies strengthen the business case for IoT solutions, offering a cost and power-efficient wireless option that leverages existing networks, global reach, and strong built-in security’.
LPWAN is used for a wide variety of applications like asset and goods tracking, industrial process monitoring and control, smart lighting, meters and solar panels, crop and livestock management and predictive analytics solutions.
Designed specifically for IoT devices, NB-IoT is characterised by very low power consumption, low device cost, low data rates, and deep coverage that reaches indoor and underground locations.
NB-IoT doesn’t support seamless handover when switching to another cell network tower. It can switch cells but must re-establish the connection, which takes more power. NB-IoT is better suited for large scale deployments where the requirements do not change with time, sensors are static, and indoor coverage is of top priority.
LTE-M, also known as LTE Cat-M, is an extension to the LTE networks. Like NB-IoT, it offers a low speed, low power, long-range protocol for small bandwidth applications. Although it doesn’t provide the same length of battery life as NB-IoT, it offers higher bandwidth, so is a good candidate for use cases with higher volumes of data.
LTE-M is run on top of LTE base stations, making implementation more attractive for network operators as no dedicated hardware is needed. LTE-M can operate over a range of approximately 10-15km.
The power-saving capabilities, eDRX and PSM, can also be used with devices that connect to LTE-M networks.
LTE-M is suitable for a wide variety of applications like smart meters, alarm systems, smartwatches, to more complex and remote environments like drain sensors installed deep underground. LTE-M is a good choice for moving IoT devices, for example, assets that need to be tracked and monitored for many years without intervention.
There are advantages and disadvantages of cellular connectivity which will help you decide if it’s the right connectivity solution for your device:
Advantages:
- Coverage even in remote locations; effectively limitless range; consistent and reliable. Standard low power wide area (LPWA) cellular IoT (LTE-M and NB-IoT) gives deeper coverage, especially in remote areas and underground. Network switching/non-steered roaming connects to the network with the strongest signal coverage, for minimal device downtime.
- Good bandwidth and speed (on par with Wi-Fi).
- High security standard: data is encrypted by default, with automatic security updates.
- Low installation, support, and hardware costs. LoRaWAN/LPWAN is optimised for low data rates and low power operation, and devices can be managed remotely.

Disadvantages:
- Cellular NB-IoT offers a lower data rate (though this suits stationary, low-powered devices), and roaming is not supported on NB-IoT.
- High data bills can be incurred if the device uses roaming or has not been localised.

Those are the main advantages and disadvantages of cellular IoT connectivity (2G, 3G, 4G, 5G, LTE Cat.0, LTE Cat M, NB-IoT, 4G LTE, LTE Advanced). But one size does not fit all when it comes to IoT connectivity: each technology stacks up differently in terms of cost, data rate, power consumption and range.
There are over 800 mobile network operators providing cellular connectivity worldwide. Cellular connectivity was primarily designed for consumers, not IoT, which is why MNOs historically quote coverage based on population, not geo-coordinates.
Transforma Insights reports that "cellular networks today cover around 98% of populated areas. However, they're a long way short of 98% territory coverage – often closer to 60%."
IoT devices are expected to operate for many years. But a large majority of devices don’t stand still – they’re crossing borders and continents, going places where humans don’t inhabit. IoT devices require a continuous, reliable connection to function and gather accurate data.
IoT roaming is one of the solutions to enable connectivity in many countries around the world, but roaming tends to be more expensive for the end-customer in the long term.
The connectivity model is having to evolve because roaming is not a viable long term connectivity solution for IoT. With global IoT deployments becoming more common, devices can be at risk from permanent roaming restrictions which prohibit a device from connecting in a country that is not its nominal ‘home’ territory beyond a specific period, for example, more than 3 months. In some countries, the regulators or networks have imposed roaming restrictions and only allow a SIM to roam for a limited period in one country.
With modern eSIMs, switching from one connectivity service provider to another no longer requires the UICC to be physically changed, as is the case for traditional SIM deployments.
eSIM is conceived as a flexible, over-the-air solution enabling the SIM to use a local MNO network profile or to choose from a greater range of roaming partners.
When combined with localisation, eSIM eliminates permanent roaming challenges and improves application performance.
Learn more about how eSIM localisation can solve roaming challenges for enterprise IoT >
A connectivity management platform (CMP) enables organisations to effectively manage connectivity across a global IoT deployment. It offers flexibility and choice – devices can switch connectivity to a network that suits them best, and the CMP allows continuous monitoring and management of devices at an individual and aggregated level.
Learn more about the key functions of a connectivity management platform >
Ever heard the saying don’t put all your eggs in one basket? The same applies to IoT connectivity. You can’t rely only on one network’s connectivity. What if it fails, loses availability, or rates skyrocket?
For IoT devices to work at optimum levels they must have access to a consistent, secure, and reliable connection always, regardless of location. To achieve this result, additional networks should be available for connecting devices, ensuring redundancy within the network.
A Multi-IMSI SIM can store multiple IMSIs, enabling devices to switch to different networks without physically changing the SIM. This offers peace of mind and network redundancy – if something goes wrong with one network due to an outage, for instance, the SIM can switch to another network as a backup to avoid connectivity loss and device downtime.
Connectivity providers offer multi-IMSI solutions with different levels of sophistication, functionality, and security. Some solutions can use over-the-air updates to download additional IMSIs and remotely manage the IMSIs on a SIM.
An Access Point Name (APN) is the name of a gateway between a cellular network and another computer network, such as the public internet. Like a home broadband router, it acts as a gatekeeper between individual end devices and the internet.
The role of an APN in IoT is to allocate IP addresses to devices and route data between devices and endpoints, such as backend systems and websites.
Equipment can fail, cables break (or get stolen), and disasters happen (fires, floods, earthquakes, etc.) – infrastructure redundancy is essential to prevent interruptions to service.
Our AnyNet solution uses multiple data centres worldwide, connected by a global MPLS core, to host POP (Point of Presence) interfaces for carrier interconnects.
Each network operator has a connection to a primary data centre, which is the most suitable one for their geographic operation. They are also configured to have a secondary data centre for use if the connection to their primary data centre fails.
Within each data centre, equipment is configured to have a spare that can instantly take over if there’s a failure to the main equipment. For example, firewalls in data centres operate in pairs, with the backup firewall running in hot standby so that it can automatically take over if the primary firewall fails.
IoT devices are prone to cybersecurity attacks which can be devastating for an organisation and its customers. The majority of IoT devices are unmanaged and were not designed with security or management in mind, making them easy targets for malicious attacks.
With this in mind, it’s crucial that organisations treat IoT security seriously. Organisations with IoT devices deployed around the world and across various mobile network operators (MNOs) need to be able to mitigate risk to ensure operational resilience and business continuity.
We have strategically partnered with Armis, a leading agentless device security platform provider, on a joint IoT security solution that enables organizations to deploy connected devices anywhere in the world with enterprise-class security and consistent, reliable cellular (4G/LTE/5G) connectivity.
The Armis and Eseye technologies create an industry-first synergy, delivering a secure and connected ecosystem for mobile devices across industries.
Our intelligent patented network switching AnyNet technology helps organisations achieve near 100% secure universally available cellular connectivity for their IoT devices.
We have established the AnyNet Federation, which has 14 direct 'interconnect' partnerships with MNOs worldwide; these combine to deliver access to 700 operators' networks.
These special partnerships enable our customers to seamlessly switch from one operator profile to another, meaning that coverage blackspots are eliminated. It also means devices can be localised onto local networks. No other IoT connectivity provider has access to this unique network of interconnects or can offer a similar device localisation capability.
Our AnyNet offering means organisations only need to design, deploy, and manage a single device SKU which works globally; reducing the time, cost, and complexity of an IoT development and deployment when compared with the standard, fixed-network approach.
Paul is one of Eseye’s co-founders. With a background in senior design engineering, Paul’s focus is on ensuring his development, operations and support teams deliver solutions that work faultlessly in the field.
Paul was co-founder of CompXs, with Ian Marsden, and developed the world’s first IEEE 802.15.4 radio. Before CompXs, Paul was in senior radio design at Philips.
|
<urn:uuid:daad5a9f-90a3-44fd-baa5-90db27f6871a>
|
CC-MAIN-2022-40
|
https://www.eseye.com/resources/iot-explained/cellular-iot-connectivity-what-business-leaders-need-to-know/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00236.warc.gz
|
en
| 0.920101 | 3,066 | 2.59375 | 3 |
Can you get a virus or malware just by visiting a website? This is a question that many people have in mind, especially since virus and malware infections are on the rise and the majority of internet users go online to access all kinds of websites. But along with this question come many others, all of which are interconnected.
What kind of websites contain viruses?
Do you have to do anything, in particular, to get infected?
How about if you have some kind of antivirus protection?
How do we know people ask these questions? From our extensive research and dialogue with our readers, which has made us aware of the serious misconceptions about this topic. The wrong answers can give you a lot of trouble, so, to avoid that, we have to insist on each detail that makes a difference.
People wonder if you can get viruses or malware just by visiting a website. And they risk walking away with inaccurate information. This type of information will give them a false sense of security, something we are fighting against.
Which are those common misconceptions, you wonder?
Here they are:
- You can only get viruses from malicious websites.
- If you access a malicious website, as long as you don’t download anything, you’re OK.
- Even if you download something, on purpose or unknowingly, if you don’t open it, you’re OK.
- Should the malicious website try to automatically download something on your device, your antivirus software should flag and block the action.
A website doesn’t have to be malicious to pose dangers to a user. Sure, there are many malicious websites out there, specifically built to cause harm. Yet those are not the only threat.
There are also legit websites that may contain pieces of malicious content. Websites that have been hacked and that contain hidden malicious code. The code was inserted by hackers, without the knowledge of the persons who operate the website in good faith.
Sometimes it’s a script, other times it’s an ad, and the list can go on and on.
For some reason, people believe that even when visiting such a website, you don’t have to get infected. It’s like if you don’t touch it you can’t get it.
The problem is, however, that you can get it without touching it… Think of the human viruses that spread from one host to another. In a similar way, computer viruses can get viral without specific action from your side.
Allow us to detail the misconceptions from above and you’ll see what we mean.
The reason why you can’t rely on your antivirus for any kind of threat
If you think about it, most standard antiviruses first scan the files you're accessing, then match them against a database of virus signatures. When you access a website, you download the information contained in it, and the AV can only judge that information against the signatures it already knows.
So, unless it is a piece of code that has already been flagged as malicious and is now included in the AV's signature database, chances are the antivirus software will not be able to protect you from an exploit attack.
But that’s not all!
Hackers have ways to keep spreading even the viruses that are already in the common AV databases. They use a so-called packer that encrypts the malware. And by changing its appearance, they make it undetectable to an antivirus that relies solely on signatures.
AV developers found a couple of solutions for these packers. They have implemented behavioral analysis, sandboxes or rollbacks. In other words, the antivirus will no longer look at the file’s signature only. It will try to emulate and interpret its behavior to see what it might do if released on your system.
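To make the limitation concrete, here is a minimal, purely illustrative Python sketch of hash-based signature matching; the "database" below contains a single invented hash, and real AV engines are far more sophisticated than this:

import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad files.
KNOWN_BAD_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def looks_malicious(path):
    # Hash the file and compare it against the known-bad set.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_HASHES

A packer that changes even a single byte of a file produces a completely different hash, so a repacked sample sails straight past a check like this; that is exactly why behavioral analysis was added on top.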
How can something download on your device without you noticing it?
At this point, you have realized that no antivirus offers 100% protection. But now you may begin to question how something could download without you noticing it.
People say: “Wait, shouldn’t I see if something is being downloaded on my device?”.
We’ve already suggested that you can get viruses or malware just by visiting a website. Even if you don’t download something yourself. That’s because certain malicious files can automatically download on your device.
When that happens, it’s not the classical download of an attachment from the email app or a file from the internet browser.
The attacker won’t even use your internet browser’s downloader to drop the malicious file. They are trying to keep it all under the radar. So the download path is different than the norm and it usually relies on exploits.
Even when no exploit is involved, most malware files are very small. When something only a few kilobytes in size is downloaded, it's hard to tell: even at a download speed of around 0.5 Mb per second, it lands on your device almost instantly, leaving no time for a download progress bar to show up on the screen.
But again, usually, getting viruses from compromised websites happens via exploit kits. It will be a malware exploit file, a very small one, which doesn’t take the normal download path.
Most importantly, that file is programmed to download itself AND to run itself automatically, right after it finishes the download. So the myth that if you don’t manually launch a malicious file you’re safe… It’s busted. Some malicious files launch automatically…
Exploit kits – the tools that can turn your world upside down
Exploit kits make it possible for your device to get a virus when you simply access a website. Many of the dangerous malware strains that have scared the world are delivered via exploit kits (EKs) to devices all over the world.
From cryptoware to banking trojans, a lot can travel with such a kit. To make it worse, standard antivirus protection is largely useless against this threat. Why? For the reasons that we detailed above.
Now, these exploits, also referred to as drive-bys, can look into the vulnerabilities of:
- the internet browser itself
- application software or web service such as Adobe Reader
- a plugin such as Flash, Java, or Silverlight
- a media player software etc.
Exploit kits are malicious toolkits. They are hosted on rogue servers. And users are redirected to them when accessing a compromised website.
You will be clueless about what is going on. You just can’t tell that you’re not on the server where the website you were trying to reach is hosted. Once your device starts communicating with the rogue server, the exploit kit will begin gathering information on you as a user.
Depending on what it finds, it decides what type of exploit will work better on your device. Then, it starts delivering it. If you don’t have the right kind of protection, the exploit will succeed. This means that the malicious software will be downloaded and executed on your device without you even knowing it.
Dedicated anti-exploit applications have been developed, specifically designed to stop the malware unleashed by such exploit kits. The problem, however, is that running an anti-exploit app alongside an antivirus or antimalware product can create all kinds of conflicts between the two, so you'd have to settle for one at a time.
How can you protect yourself from getting viruses or malware by visiting a website?
Judging by the information reviewed so far, not all antivirus software is an effective solution, because much of it relies on a signature database. Not that this kind of evaluation isn't useful; it simply cannot cover all the threats, only the ones that are already known and included in the AV software's database.
The best way to keep exploits at a distance? Rely on security options that work with user permissions. Such options only allow actions that the user has approved in the first place; any action detected on your device that hasn't been allowed is automatically blocked. This is the way to prevent malicious code from automatically downloading and executing on your device.
The daunting task of choosing the best software to protect your device left aside, you’re still looking at a couple of other protection measures:
- Regardless of what software you use, keep it up to date. Not just the antivirus software, the antimalware tool or the anti-exploit app you rely on. But also your internet browser, your operating system, the plugins you work with. Exploit kits take advantage of vulnerabilities and updates patch vulnerabilities. The fewer vulnerabilities you have, the better protected you are, so, don’t miss out on any updates.
- Because plugins are a known vulnerability, ideally, you should stop using them. If you can't, you should at least set them as Click-to-Play or Ask-to-Activate. That way, if an exploit tries to tamper with one of your plugins, the action won't go unnoticed, because the plugin cannot run without your explicit approval.
- Since you know that ads can be an entrance point for malware, try to run an adblocker. Just keep in mind that this is limited protection. If you end up on a website with exploits built into the actual web page, not just into the advertising code, an adblocker won’t help.
- Since you know that scripts can run automatically and load malicious content, also try to run a script blocker. It will protect you from on-page exploits and it can also protect you from advertising-included exploits. The latter, however, isn’t a strong enough reason for you not to run an ad blocker separately.
- Consider playing around with whitelisting software. This type of software will prevent executable files from running, as long as you haven't previously approved them. That way, if malicious code is downloaded automatically, it will still be prevented from running because you haven't whitelisted it before. (A minimal sketch of the idea follows this list.)
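To illustrate, here is a minimal Python sketch of the whitelisting idea, the inverse of the signature check shown earlier; the approved-hash set is invented, and real application-whitelisting products hook into the operating system rather than wrapping program launches like this:

import hashlib
import subprocess

# Hypothetical allowlist: SHA-256 hashes of executables you have approved.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def run_if_whitelisted(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in APPROVED_HASHES:
        subprocess.run([path])  # previously approved, so allow it
    else:
        print("Blocked:", path, "is not on the whitelist")

The policy is default-deny: anything not explicitly approved, including code dropped by an exploit kit, simply never runs.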
Acting with caution is, as always, of the essence. Keep in mind that mainstream websites are more and more likely to spread malware. Hackers know that it’s easier to lure their victims on legit websites and they focus on exploiting their vulnerabilities rather than waiting for them to land on a shady website.
What we’re trying to say is that the chances for you to encounter malware when surfing the web are growing. Instead of trying to minimize those, try to increase your protection chances. Knowing that you can get viruses and malware just by visiting a website should certainly help you revise your online behavior and stay better protected!
|
<urn:uuid:ce4abfe0-8557-49e2-8234-395eefc20aa4>
|
CC-MAIN-2022-40
|
https://antivirusjar.com/viruses-or-malware-by-visiting-a-website/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00436.warc.gz
|
en
| 0.925448 | 2,272 | 3.109375 | 3 |
To listen to audio we need headphones, whereas to record it we need microphones. But security researchers at Israel's Ben-Gurion University have created a proof-of-concept exploit that lets them turn headphones into microphones to secretly record conversations.
Malware that turns headphones into microphones
In earlier days, headphones were sometimes used as microphones, because speakers and microphones employ similar components and process electrical signals and sound in very similar ways.
The researchers managed to switch an output sound channel into an input one, through which intelligible audio can be acquired via earphones and then transmitted from distances of up to several meters away.
Rather than using a built-in mic, the experimental malware re-purposes the speakers in earbuds or headphones as microphones, converting the vibrations in air into electromagnetic signals to clearly capture audio from across a room.
“People don’t think about this privacy vulnerability,” says Mordechai Guri, the research lead of Ben Gurion’s Cyber Security Research Labs. “Even if you remove your computer’s microphone, if you use headphones you can be recorded.”
The speakers in headphones can turn electromagnetic signals into sound waves through a membrane's vibrations; those membranes can also work in reverse, picking up sound vibrations and converting them back to electromagnetic signals. (Plug a pair of mic-less headphones into an audio input jack on your computer to try it.)
But how is this hack possible?
Ben Gurion researchers took that hack a step further. Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel.
This allows malware to record audio even when the headphones remain connected into an output-only jack and don’t even have a microphone channel on their plug. The researchers say the RealTek chips are so common that the attack works on practically any desktop computer, whether it runs Windows or MacOS, and most laptops, too.
“This is the real vulnerability,” says Guri. “It’s what makes almost every computer today vulnerable to this type of attack.”
To be fair, the eavesdropping attack should only matter to those who have already gone a few steps down the rabbit-hole of obsessive counter-intelligence measures. But in the modern age of cybersecurity, fears of having your computer’s mic surreptitiously activated by stealthy malware are increasingly mainstream.
In their tests, the researchers tried the audio hack with a pair of Sennheiser headphones. They found that they could record from as far as 20 feet away—and even compress the resulting recording and send it over the internet.
In highly secure facilities it is common practice to forbid the use of any speakers, headphones, or earphones in order to create so-called audio gap separation. Less restrictive policies prohibit the use of microphones but allow loudspeakers; however, because speakers can be reversed and used as microphones, only active one-way speakers are allowed.
Software countermeasures may include disabling the audio hardware in the UEFI/BIOS settings. This can prevent malware from accessing the audio codec from the operating system.
However, such a configuration eliminates the use of the audio hardware (e.g., for music playing, Skype chats, etc.), and hence may not be feasible in all scenarios. Another option is to use the HD audio kernel driver to prevent rejacking or to enforce a strict rejacking policy.
|
<urn:uuid:26055145-d142-407f-881f-27e8716ad860>
|
CC-MAIN-2022-40
|
https://gbhackers.com/headphones-can-act-spyware/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00436.warc.gz
|
en
| 0.906687 | 730 | 2.703125 | 3 |
Synchronisation in Telecommunications networks (ISDN, Digital Networks, GSM, 3G, 4G, 5G, etc..) is the process of aligning the time scales of transmission and Switching Equipment (e.g. Stored Program Control nodes) so equipment operations occur at the correct time and in the correct order. Synchronisation requires the receiver clock to acquire and track the periodic timing information in a transmitted signal.
Poor synchronisation can lead to the following issues:-
– Degraded speech quality and audible clicks
– Degraded data traffic throughput
– Call setup failures
– Incomplete Facsimile messages
– Freeze-frames and audio pops on video transmissions
– Call disconnects during mobile call hand-off
– Partial or complete traffic stoppage
– Corrupt data sessions for 3G/4G/5G
– Errors placed in CDRs and Logs
A complete Network Synchronisation plan must be in place that looks at the Clocks (Stratum levels) used, Jitter and Wander parameters, Primary and Secondary Synchronisation Clock sources, etc.
|
<urn:uuid:a939015d-b1bc-4030-985e-29577c064784>
|
CC-MAIN-2022-40
|
https://www.erlang.com/reply/69937/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00436.warc.gz
|
en
| 0.813917 | 226 | 3.328125 | 3 |
Article by Tracy Gregorio as appearing in the Virginia Ship Repair Journal
Phishing attempts and other types of malicious email are some of the most dangerous messages received. According to the Anti-Phishing Working Group (APWG), the total number of phishing attacks in 2016 was 1,220,523, a 65% increase over 2015 (February, 2017). Phishing is one of the leading ways employees open businesses to infiltration by cybercriminals. Scammers hope that unsuspecting victims will respond to their urgent-sounding requests by providing sensitive information, such as account passwords, social security numbers, or other identifiable facts. Other malicious correspondence may appear innocent at first, but include downloadable attachments that will infect computers with malware. To identify dangerous messages and safeguard privacy and finances, take the following precautions with email accounts.
1) Use the spam protection filters offered by your email service, and/or
2) Install an internet security program that blocks unwanted email. They will identify most spam for you.
3) Avoid click-bait. Malicious emails often contain links to phishing websites. Unless you’re certain the link is from a trustworthy source, don’t click on it. If you’re uncertain, but think it might be legitimate, manually type the website address into your browser. This will reduce your risk of ending up at an illegitimate website.
4) Beware of anyone asking for a password, bank or credit card information, or a PIN. Emails claiming that your account will be locked or disabled if you don’t enter a password are almost always phishing scams.
5) If it sounds too good to be true, it probably is. No one is willing to pay large sums of money for small tasks on your part. Foreign royalty does not need your help. There is probably not a “miracle cure” for your ailment and “get rich” schemes are rarely credible.
6) If you receive an email that appears to be from a friend or relative, but is requesting personal information or help, contact the person by other means (such as by phone) for confirmation. Do not wire money or provide account information until you have confirmation.
7) Check the source of the email. Scammers often use email addresses that appear legitimate at first glance, but minor differences can reveal their ploy. They may use an incorrect domain name (irs.net instead of irs.gov) or misspell the company name (Verizion instead of Verizon). Other spelling errors or addressing you by something other than your name (Dear Customer or Hello Friend) are also a tip-off that the email is probably fraudulent. (A minimal sketch of this kind of domain check follows the list.)
8) Do not open attachments unless you can confirm they’re from a legitimate source. Attachments can contain malware, software intended to damage your computer.
9) Grammar errors and odd sentence structure often point to scammers. (“After sent email, please do this,” for example.)
10) Legitimate sources do not ask for your Username and Password by email, nor do they provide both in the same email message to you. If you forget your username or password, it is common practice for a company to send your username and password recovery information (usually instructions for setting up a new password) separately.
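To illustrate the domain check from tip 7, here is a minimal Python sketch that flags sender domains sitting one or two edits away from a trusted domain; the trusted list is invented for the example, and real mail filters rely on far richer signals (SPF, DKIM, sender reputation, and more):

# Flag sender domains that are suspiciously close to trusted ones.
TRUSTED_DOMAINS = {"irs.gov", "verizon.com", "paypal.com"}  # example list

def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_suspicious(sender):
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # One or two characters away from a trusted domain? Likely a lookalike.
    return any(edit_distance(domain, d) <= 2 for d in TRUSTED_DOMAINS)

print(is_suspicious("deals@verizion.com"))  # True: lookalike of verizon.com

Note that edit distance is a crude heuristic: it can also flag legitimate domains that happen to sit near a trusted one, so a real filter would combine it with other evidence before blocking anything.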
If you have been a victim of internet crime, you can report it to the Federal Bureau of Investigation via the Internet Crime Complaint Center. Unsolicited and spam email can be forwarded to the Federal Trade Commission at [email protected] for investigation.
About the Author
Tracy Gregorio is the President G2 Ops, Inc., a firm specializing in model-based systems engineering, cybersecurity and strategic consulting in support of the Federal Government and commercial organizations. She has served in strategic and leadership positions for Regent University, The Family Channel and U.S. Government. She holds an M.S. in Computer Science from Old Dominion and a B.S. in Computer Science from Virginia Tech.
|
<urn:uuid:37ad9837-a54a-4dea-8838-f77b75973122>
|
CC-MAIN-2022-40
|
https://g2-ops.com/blog/ten-ways-to-identify-a-dangerous-email-2/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00436.warc.gz
|
en
| 0.920327 | 817 | 2.75 | 3 |
Everyday uses of artificial intelligence that can talk, listen and see are coming. Is government ready?
When the 2008 Summer Olympics were about to open in Beijing, China, government authorities grew increasingly concerned about the city’s notorious pollution problem. Rather than risk the health of athletes and guests at the games, dozens of nearby factories were ordered closed and driving restrictions reduced traffic by 90 percent, according to state news reports. While the moves were considered radical and impacted the region’s economy, the Beijing government felt it had little choice.
Today, Beijing is saturated with sensors that can measure CO2 content and other pollutants. Data from the sensors is now combined with information from the city’s weather service and run through algorithms developed by IBM’s Almaden Laboratory in Silicon Valley that help to predict whether or not the city is going to be impacted by high levels of pollution. Based on the findings, authorities can select which factories need to shut down if they want to reduce the chances of high pollution by 50 percent, before the problem emerges.
The technology behind all this is artificial intelligence. By collecting an enormous amount of data and combining it with historical data on weather patterns, the city can predict just how bad pollution will be and then modestly dial back the industrial sector and traffic, rather than shut down the entire city, which is what happened in 2008.
“This is a practical way of using AI to mitigate a problem, minimize the impact on the economy and reduce pollution overall,” said Jeff Welser, vice president and director of the Almaden Laboratory.
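The article doesn't disclose IBM's actual models, but as a loose, hypothetical illustration of this kind of predictive-analytics pipeline, here is a minimal scikit-learn sketch that learns a pollution index from weather and activity features; the feature set, the synthetic data, and the relationship between them are all invented for the example:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic history: [wind speed, humidity, temperature, industrial output]
X = rng.random((500, 4))
# Invented relationship: calm, humid, high-output days trend more polluted.
y = 80 - 40 * X[:, 0] + 20 * X[:, 1] + 30 * X[:, 3] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

tomorrow = np.array([[0.1, 0.8, 0.5, 0.9]])  # forecast features for one day
print(f"Predicted pollution index: {model.predict(tomorrow)[0]:.1f}")

In a real deployment the model would be trained on years of sensor and weather-service history, and its forecasts would feed the decision about which factories to dial back.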
Thriving on lots of data
If you’re looking for a single word that sums up the status of artificial intelligence today, it could be “practical.” While the general public might get excited or alarmed by the concept of computers that can see, hear and speak, government has become quite bullish on real-world applications of AI that can find ways to improve the environment, make public spaces safer and, most importantly, strip out the mundane, manual work that clogs up government operations.
This era of practical AI has already taken root in the private sector. In a special report, The Economist showcased how AI technology will reshape traditional business functions, such as supply chain, finance, human resources and customer service. For example, companies will use AI to predict when equipment might fail or when a client is going to pay late. Already, 30 percent of companies now have standalone bots that can answer questions and solve problems. In HR, companies are building systems that can predict which job candidates are worth interviewing and can virtually screen candidates to increase diversity in hiring.
These are just some of the examples of what experts define as narrow AI, in which machine learning, neural networks and predictive analytics produce an output that is well understood. “Narrow AI is about intelligent automation of processes that have too much manual intervention, as well as questions that require decisions that can be off-loaded to the computer,” said Rick Howard, vice president of research at Gartner. […]
|
<urn:uuid:30fd178d-d731-43d5-b6bf-1f458c3f298a>
|
CC-MAIN-2022-40
|
https://swisscognitive.ch/2018/07/04/is-government-ready-for-ai/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00436.warc.gz
|
en
| 0.961849 | 628 | 3.234375 | 3 |
The advancement in technology delivered through 5G, artificial intelligence, and machine learning in the last decade has been accompanied by a tremendous increase in cyber-attacks and data breaches. Recent reports show that hackers attack computers across the US every 39 seconds, compromising data from individuals, companies, and governments. In recent years, there have been high-profile breach incidents such as SolarWinds, Colonial Pipeline, and the Kaseya attack, among dozens of other cyber-attacks that have had a major economic and security-related impact.
According to the US Cybersecurity and Infrastructure Security Agency, cybersecurity aims at protecting networks, devices, and data from unauthorized access or criminal use through practices that ensure integrity, confidentiality, and availability of information. With businesses and governments changing their operation strategies to take advantage of the new hybrid work models, there is a need to implement cybersecurity technologies including SD-WAN, zero-trust network access, and secure access service edge for improved security.
The 5 Types of Cybersecurity
Cybersecurity is a critical part of managing risks in today’s business landscape. Maintaining business continuity rests on the ability of organizations, individuals, and the government to protect networks and data. To do this, it is important to understand the five different types of cyber security and how they defend against cyber-attacks.
1. Application Security
Mobile and web applications are available over various networks and connected to the cloud, increasing vulnerabilities to security breaches and threats including DDoS attacks, misconfigurations, and SQL code injections. Application security protects sensitive information at the app level, ensuring the safety of code and data within the App.
It uses software, hardware, and procedures to tackle external threats that may arise not only during the development stage of an application but throughout its lifecycle. An example of such an attack is the WannaCry ransomware campaign, which exploited weaknesses in the Microsoft Windows operating system.
App security can involve tactics such as requiring strong passwords from users, two-step authentication, security questions, and other protective measures that ensure users are identified. Continuous risk assessments and patching can help companies and organizations detect the sensitive data sets within apps and secure them.
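For instance, to blunt the SQL injection risk mentioned above, application code should bind user input as query parameters instead of splicing it into SQL strings. Here is a minimal Python sketch using the standard sqlite3 module; the table, the stored row, and the payload are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: splicing user input straight into the SQL string.
# rows = conn.execute(f"SELECT email FROM users WHERE name = '{user_input}'")

# Safe: the driver binds the value as a parameter, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] - the payload is treated as a plain string

The parameterized version treats the payload as an ordinary string value, so the classic ' OR '1'='1 trick matches nothing.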
2. Critical Infrastructure Security
Critical infrastructure security protects the systems that deliver essential services society heavily relies on, such as the electricity grid, traffic lights, water purification, hospitals, and shopping centers. Although these systems are not always directly linked to a possible cyber infringement, they can act as target platforms through which threats reach endpoints via crypto-jacking, credential theft, and social engineering.
Trends including hybrid work models, bring-your-own-device, and workplace mobility create additional risks and complexities in securing critical infrastructure. Common security practices such as endpoint protection platforms, encryption, and mobile device management can be adopted to secure critical infrastructure.
Organizations responsible for critical infrastructure should perform due diligence to understand the vulnerabilities and mitigate their businesses against potential risk to enhance the security and resilience of critical infrastructure. Organizations that are not directly responsible for critical infrastructures but rely on them for their business operations should develop contingency plans by evaluating how potential cyber-attacks on critical infrastructure affect their business operations and users.
3. Network Security
Network security refers to the protection of your computer network from opportunistic malware, intruders, and targeted attackers. It is a broad term that includes activities and controls created to protect the physical, technical, and administrative network infrastructure, defending them from data threats, intrusion, breaches, and misuse.
The internet has an assortment of networks associated with various websites that contain third-party cookies which can track users’ activities. Although cookies are important for business growth through personalized advertising and lead generation, customers become prey to malware attacks, mainly through phishing.
To counter cyber-attacks and malware associated with the network, organizations can deploy security programs that monitor the internal network infrastructure. Network security uses many different protocols to block attacks but still allows authorized users to access secure networks. Organizations can leverage machine learning and artificial intelligence technology that can alert authorities in the event of abnormal traffic. Furthermore, organizations must continuously upgrade their network security, for instance through firewalls, encryption, and extra logins as additional security policies.
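As a toy illustration of the alerting idea described above (real network-security products use far richer models and features), here is a minimal Python sketch that flags traffic volumes several standard deviations away from a learned baseline; all numbers are invented:

import statistics

# Requests per minute observed over a normal baseline window (invented).
baseline = [120, 131, 118, 125, 140, 122, 129, 135, 127, 124]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute, threshold=3.0):
    # Alert when traffic deviates more than `threshold` sigmas from normal.
    return abs(requests_per_minute - mean) / stdev > threshold

print(is_anomalous(128))  # False: within normal variation
print(is_anomalous(950))  # True: possible DDoS or scanning burst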
4. Cloud Security
Cloud computing has evolved to create an enabling environment for the organization to improve their businesses, enhance customer experience, and boost efficiencies. Cloud security refers to the technology, processes, and policies used to mitigate security risks in cloud computing, whether on public, private, or hybrid clouds.
The major cybersecurity challenges with cloud computing encompass a multi-cloud environment, limited visibility into the data stored in the cloud by in-house IT teams since the services are managed by a third-party provider, and additional regulatory compliance. There is a challenge arising from the shared responsibility model that cloud providers use for security, regardless of whether they are delivering platform-as-a-service (PaaS), software-as-a-service (SaaS), or infrastructure-as-a-service (IaaS) cloud services.
Security solutions and best practices for cloud security include a cloud access security broker that helps identify misconfigurations and provides additional security through access controls, multifactor authentication, and identity and access management.
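As one concrete slice of multifactor authentication, the sketch below uses the third-party pyotp library to generate and verify time-based one-time passwords (TOTP); the secret here is generated on the fly purely for demonstration, whereas a real service provisions and stores it per user:

import pyotp

# In practice the secret is provisioned once per user and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # what the user's authenticator app would display
print("Current code:", code)
print("Valid now:", totp.verify(code))        # True within the time window
print("Stale code:", totp.verify("000000"))   # almost certainly False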
5. Internet of Things (IoT) Security
The Internet of Things has become a major driver of the ongoing technological revolution. IoT comprises a wide variety of both critical and non-critical cyber systems, like sensors, appliances, Wi-Fi routers, security cameras, and medical devices. IoT data center analytics, networks, consumer devices, and connectors are the core technology enablers of the IoT market.
With the interconnection of IoT devices, security is a major barrier to the adoption of this technology. Since many IoT devices receive little or no security patching, they pose major security threats, and an attack on one device can affect the entire IoT infrastructure.
Businesses must integrate methodologies including application program interface security, public key infrastructure, and authentication security to thwart the growing threats in the IoT landscape.
Let Cyber Sainik help with your cybersecurity needs.
As the global business landscape changes, technology will continue to create opportunities for business continuity. Organizations, individuals, and governments must implement cyber security solutions to keep up with the pace of technological advancement. Cyber Sainik identifies security solutions to help protect business assets against cyber-attacks. With our cybersecurity solutions, we help improve cybersecurity programs by discovering data breaches, helping your business achieve end-to-end security, including network security, penetration testing, and intrusion detection and prevention. Call us to schedule a free consultation with our security experts and take a proactive approach to cyber security.
|
<urn:uuid:d1958eda-42e7-4952-82e9-1de4e9dfce31>
|
CC-MAIN-2022-40
|
https://cybersainik.com/understanding-the-5-types-of-cyber-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00436.warc.gz
|
en
| 0.929274 | 1,360 | 3.46875 | 3 |
Has WannaCry set a Precedent? Enterprises Need to Stay Prepared!
The Impact of WannaCry
The WannaCry ransomware caused quite an uproar in mid-May 2017, affecting over 200,000 computers in more than 150 countries across Asia, Europe, and North America. It was perhaps the most massive ransomware campaign to hit companies.
The catastrophe also affected the National Health Service (NHS) in the United Kingdom, disrupting medical operations: hospitals were unable to take in new patients, and ambulance services had to divert routes to other hospitals. Many college and university students in China found that their study data had been encrypted and was inaccessible.
Railway services were affected in Germany, and so was Telefónica, one of the largest mobile companies in Spain. It is quite evident that the WannaCry attack touched every business and technology sector around the world, resulting in huge losses, and it answers the question of who WannaCry affected: almost every industry, in almost every region.
Where did the WannaCry virus come from?
The Lazarus Group was suspected of being behind WannaCry, although no arrests have been confirmed. This ransomware cryptoworm targeted computers and other systems running the Microsoft Windows operating system. It encrypted data and demanded ransom payments in Bitcoin, roughly $300 to $600 worth per machine, to decrypt that data, and users risked losing their data for good if they did not pay.
WannaCry spread primarily via EternalBlue, a cyber-attack exploit developed by the United States National Security Agency (NSA) for older Windows versions. EternalBlue was stolen and leaked by a hacker group called the Shadow Brokers roughly a month after Microsoft released security patches for the underlying vulnerability, and its public availability caused the widespread effect of WannaCry across multiple systems around the world.
Removal efforts began immediately once the security patches and the kill switch were identified, so that further attacks could be nipped in the bud. Many security experts traced the virus to North Korea, an attribution officially confirmed by the United States, the United Kingdom, and Australia.
How should Companies prepare for WannaCry?
It should be known that the total losses reported for WannaCry ran into the billions of dollars. Before you plan to curb this threat, you should understand WannaCry's characteristics thoroughly. A dedicated removal tool may not be available, but most modern antivirus products are able to detect WannaCry.
More than 77% of companies do not believe they can handle new-age attacks, and studies point to significant internal hurdles standing in the way of increased security. These attacks are widespread: organizations in every country are hit by ransomware in one way or another every day.
It should also be noted that only 3% of companies are fully equipped to fight an attack like WannaCry. To achieve the highest level of security against these attacks, companies must integrate and consolidate their security infrastructures so that they can function more effectively.
With advanced security measures, organizations would be able to track their data in real-time on every security endpoint. This would mean that companies would evolve from a traditional layered security approach to a more sophisticated and holistic security approach, thereby creating a streamlined security architecture around them.
Companies need to prepare more advanced cyber-warfare measures that will harden them against next-generation attacks. They should prioritize protecting their infrastructure, personal assets, and business information that might otherwise become collateral damage.
Practical steps include segmenting networks, quarantining attacks, and stopping them from propagating further. Companies should deploy advanced threat protection systems before a virus gains access to vital information through their software systems, and the security system must be unified across all environments, mainly cloud, mobile, and network, for adequate protection against these attacks.
Computer Solutions East provides security against the next-generation cyberattacks that have plagued many companies for a long time. CSE's Advanced Threat Protection gives clients a range of solutions that protect systems against complicated security threats, including hacking-based threats to sensitive data. It is available as a cloud service, making it flexible and easy to use.
|
<urn:uuid:75ca396e-e225-4ff8-95b9-735900e432e8>
|
CC-MAIN-2022-40
|
https://www.computersolutionseast.com/blog/cybersecurity-trends/has-wannacry-set-a-precedent-enterprises-need-to-stay-prepared/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00436.warc.gz
|
en
| 0.975466 | 1,012 | 2.546875 | 3 |
The scene is all too familiar: a pivotal case against a suspected child predator is being analyzed by digital forensic experts, and it has all come down to a single grainy JPEG image. Cybersecurity experts and law enforcement officers pack a room filled with aerodynamically-crafted desks, custom-made quad-LCD screens and other futuristic gear, scanning the image for any clue that could blow the case wide open.
After a few tense moments, the team notices the tell-tale orange cylinder of a prescription bottle in the background of the image. The team becomes excited, shouts commands to the voice-recognition software. “Zoom,” they say, and the software snap-focuses on the prescription bottle. The software goes through cycles of Gaussian edge detection, and suddenly, the potato-camera image is replaced with an image whose details a $75,000 Hasselblad camera owner could only dream of capturing. A partial name and address is exposed, and agents begin scrambling to piece the new info together.
One agent points to the side of the now-crisp image, at the suspect’s hand. “Enhance,” he commands, and the image recognition software seemingly bends physics, logic, time, and space as it isolates the ridges of the suspect’s skin, revealing their fingerprints. In seconds, a fingerprint is extracted by the software and scanned into the CODIS database, exposing the perp and pulling up his home address. The team high-fives and celebrates yet another job well done.
Apparently, the Department of Homeland Security (DHS) made this scene actually happen. Whether or not futuristic office supplies were involved is still up for debate; however, the mad scientists and research engineers of DHS Science and Technology (S&T) have now developed new algorithms for analyzing low resolution digital images for crucial forensic data.
This software provides the team with new and unique ways to extract identifiable details from blurry, grainy, and otherwise hard to read images. The new technology, dubbed Photo DNA, utilizes edge-enhancing technology to separate noise from images as well as selectively extracting key points from photos. This ultimately reduces the physical and psychological burden put upon investigators and allows them to categorize and analyze over 500,000 images a week.
Cybersecurity experts and privacy advocates have long warned that with the current megapixel count in digital cameras increasing year over year, photographs that show people’s hands may soon give malicious individuals the ability to extract fingerprint data. Many fear that stolen fingerprint data could be used to extract data from locked mobile phones, or to potentially frame innocent individuals for crimes or other nefarious acts. However, the Department of Homeland Security’s case against Stephen Keating proves for the first time that this level of forensic data is now obtainable and reliable enough to be used in court to convict people of actual crimes.
With this level of forensic capability confirmed and the moody landscape of a dystopian 1984 lurking just around the corner, we wonder if new types of privacy and anti-surveillance fashion and makeup will kick in and we’ll finally have the kind of future we were promised by Blade Runner. We can’t help but wonder how soon social media websites will implement protection from fingerprint scanning by offering automatic finger detection and fingerprint scrubbing features.
Until then, the best practice for staying safe online is to avoid uploading close-up pictures where your hands and fingertips are clearly exposed… and try not to think about how the human ear is actually the largest uniquely identifiable source of biometrics on the human body.
Holy Cow, Even WiFi On Your Phone Isn’t Safe
When your OPSEC isn’t being betrayed by your WiFi, it may soon become the latest platform for malicious attackers to gain unfettered access to your mobile device.
The bright minds of Google’s vulnerability discovery team, Project Zero, have uncovered a series of vulnerabilities that could allow bad guys within WiFi range to send malformed wireless signals to Broadcom devices in order to compromise those devices. For those of you who are shrugging Broadcom off as some third-party company who is only in a few mobile devices, you are partially right – Broadcom just happens to be the main third-party WiFi provider for Samsung, Nexus, iPhone, and about 20 other devices.
Gal Beniamini first discovered the vulnerabilities, which are due to a series of logic flaws and overflows in the devices’ 802.11r Fast BSS Transition(FT), CCKM Fast and Secure Roaming, and Tunneled Direct Link Setup (TDLS) sub-protocols. These vulnerabilities can be exploited without user interaction and merely require WiFi to be enabled on the targeted devices in order to exploit. To make matters worse, the exploitation occurs on the WiFi chip itself, which, unlike modern operating systems, does not implement stack cookies, DEP, or any other modern defense-in-depth procedures to protect against exploitation.
Once an attacker has crafted an exploit designed for your device, you merely have to be sitting in a public area with your WiFi on and preferably not looking at your device (since a reboot is likely required to gain full access). It seems that standing in line at that hipster coffee shop which sells those glorious triple-shot, half-sweet, non-fat, soy/almond/hemp-milk-mix caramel Macchiatos would be the ideal attack location. At this point, your mobile device's fate can be summarized in the words of the late, great Bill Paxton: "It's Game Over, Man."
Luckily, Apple has released a timely update covering the fix for their devices, with Samsung, Google, and other manufacturers' devices to be patched as soon as possible. Beniamini has promised to divulge more details of the vulnerability in a follow-up blog series once security patches have been rolled out.
|
<urn:uuid:7c361b1c-ab4a-470f-b02c-ad03e5a4042a>
|
CC-MAIN-2022-40
|
https://blogs.blackberry.com/en/2017/04/this-week-in-security-4-07-2017
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00436.warc.gz
|
en
| 0.929519 | 1,228 | 2.84375 | 3 |
In the digital age, personal data is one of the most valuable commodities — it wields the potential to sway elections and affect the economy. Being a lucrative asset, big tech companies like Facebook make huge profits from gathering and selling the information we share online, from names to physical locations to web searches.
And it’s not only multi-billion dollar tech conglomerates that do this; the US government also collects data from citizens, likely much more than we’re even aware of.
With two of the most powerful organizations engaging in ethically ambiguous data handling practices — Facebook with its fair share of data security scandals and the government with its history of secretly monitoring citizens' communications — the question comes down to this: who can US citizens trust more with their data?
A study by Privacy Tiger with over 1,000 US participants found this: despite Facebook’s spotty record of data privacy malpractices, 32% of Americans trust Facebook with their personal data more than they trust the government.
Trust in the US government has steadily declined over the last four decades, hitting an all-time low in recent years, according to a study by the Pew Research Center, which found that 83% of Americans don’t trust the government to do what’s right.
Mistrust in the government also increasingly applies to cybersecurity. Another Pew Research study found that about half of Americans don’t trust the government or social media platforms to protect their data.
Privacy Tiger’s study revealed further insights into how the level of trust in Facebook and the government varies for different demographics.
The youngest and oldest trust Facebook the least
Privacy Tiger’s data found that 35-44 year-olds had the highest level of trust in Facebook over the government. Older baby boomers trusted Facebook the least, with Zoomers close behind.
This sentiment is also echoed in a research paper on privacy and trust in Facebook, which stated that Facebook leverages “trust among users to make advertisements look like social posts … [leading] its users into a false sense of trust and security when none exists.”
Case in point? Facebook was penalized $5 billion by the Federal Trade Commission (FTC) for deceiving users about their ability to control the privacy of their personal information.
With the younger generation's awareness of Facebook's scandals and the older generation's skepticism of social media, Facebook's data practices need to change if the company hopes to establish trust with users.
Women trust Facebook more than men
Level of trust varied between genders as well, with women 74% more likely than men to trust Facebook over the government.
With women holding a strong presence on social media but an underwhelming one in government, it is perhaps unsurprising that they are more likely to trust Facebook over the government with their personal information.
The future of US data privacy
While tech companies and governments recognize the value of personal data, they seem to neglect the protection of the people whose data is being collected and used. Without federal privacy laws in the US, the onus is put on sector-oriented and state legislation such as the California Consumer Privacy Act (CCPA) to set data privacy standards.
The CCPA, a meeker version of Europe's General Data Protection Regulation (GDPR), aims to protect users' data by requiring businesses to grant users rights over their data and to allow them to opt out of the sale of their data. However, the legislation applies only to businesses, not governments, and is riddled with loopholes.
Most internet users are aware of the copious amounts of data that Facebook and Google store on users, but the reality is, the government knows even more. In addition to having information about citizens’ education, employment, and medical history, the government also has all the personal information that Facebook and other tech companies collect.
Both the government and Facebook hold a lot of power over citizens and their data, but little of their trust. To regain their trust, the US needs to reform its approach to data privacy, starting by creating a national comprehensive data-protection framework that applies to all institutions — public and private.
|
<urn:uuid:0cd04fec-5a02-4b03-8005-456dc0402c27>
|
CC-MAIN-2022-40
|
https://www.cpomagazine.com/data-privacy/americans-dont-trust-the-us-government-especially-with-their-data/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00436.warc.gz
|
en
| 0.93842 | 855 | 3 | 3 |
Phishing attacks are on the rise, and cybersecurity experts are sounding the alarm. These attacks use fraudulent emails and websites to try to steal your personal information, such as your login credentials and credit card numbers, as well as sensitive data that small and medium-sized businesses may host on company computers or networks.
Fortunately, there are steps you can take to protect yourself and your business from these types of cyberattacks. Email filters can help protect you from malicious links and attachments, and cybersecurity awareness can help you and your employees spot suspicious emails.
In this article, we’ll look at what phishing is, how it works, and the best ways to protect yourself from it.
Don't Reveal Sensitive Information
When it comes to cybersecurity, one of the most important rules is to never reveal sensitive information, such as personal and financial details, via email. This is because email is not a secure communications channel, and cybercriminals can easily capture this information in transit.
If you need to send sensitive information, consider using a secure messaging app or encrypted email service. These services will encrypt your messages, making it much harder for cybercriminals to intercept and read them.
Check the Security of Websites
One of the best ways to protect yourself from phishing attacks is to check the security of websites you visit. You can do this by checking to see if the website uses HTTPS.
Hypertext Transfer Protocol Secure, or "HTTPS," is a secure communications protocol that helps protect your data from being intercepted by cybercriminals. When you see the HTTPS prefix in the address bar of your web browser, it means that the website is using this protocol.
If a website doesn’t use HTTPS, that doesn’t mean it’s necessarily unsafe. However, it’s always best to err on the side of caution and only visit websites that do use HTTPS.
Pay Attention to Website URLs
When browsing the internet, it’s important to pay attention to the website URLs and look for variations in spelling or domain names. This is because cybercriminals often use fake websites to phish for personal information. An example of this would be "www.homdepot.com" instead of "www.homedepot.com". This is called "typosquatting" and it's very easy to fall prey to.
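To illustrate the idea, here is a minimal Python sketch that flags lookalike domains by measuring their similarity to a short, hypothetical list of trusted domains. Real mail and web filters use much larger lists and smarter heuristics; the domains and the threshold here are purely illustrative.

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["homedepot.com", "paypal.com", "google.com"]  # illustrative list

def likely_typosquat(domain: str, threshold: float = 0.85):
    """Return the trusted domain this one seems to imitate, if any."""
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return None  # exact match is the real site, not a squat
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted  # close but not identical: suspicious
    return None

print(likely_typosquat("homdepot.com"))   # -> homedepot.com
print(likely_typosquat("homedepot.com"))  # -> None
```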
The best way to protect yourself from these fake websites is to check the website’s URL carefully and the site's security certificate. You can do this by clicking on the padlock icon in your web browser’s address bar. This will bring up information about the website’s security certificate, including who issued it and when it expires.
If you see a warning message or an error message, that means the website is using an invalid or expired security certificate. Do not enter any personal information on these websites, as it could be used by cybercriminals.
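For the technically inclined, this kind of certificate check can also be automated with Python's standard library. The sketch below is illustrative: the hostname is a placeholder, and an invalid or expired certificate causes the TLS handshake to raise an error rather than return quietly.

```python
import socket
import ssl

def check_certificate(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the server's validated certificate."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    with socket.create_connection((hostname, port), timeout=5) as sock:
        # Raises ssl.SSLCertVerificationError on an invalid/expired certificate.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

cert = check_certificate("www.example.com")  # placeholder hostname
print(cert["issuer"], cert["notAfter"])      # who issued it and when it expires
```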
Verify Suspicious Email Requests
In addition to checking the security of websites, it’s also important to verify suspicious email requests. This can be done by looking for certain clues that indicate an email is fake.
Some of the most common clues that an email is fake include poor grammar and spelling mistakes, mismatched sender information, and generic greetings like “Dear Valued Customer.”
If you receive an email that looks suspicious, do not respond to it. Instead, reach out to the company or individual that supposedly sent the email using another method, such as a phone call or a verified email address.
Beware of emails requesting information. Another common type of phishing attack is known as “spoofing.” This is when cybercriminals send an email that looks like it’s from a legitimate company or organization but is actually fake. These emails often request personal information, such as login credentials or credit card numbers. They may also contain attachments or links that, if clicked, will download malware onto your computer.
If you receive an email that looks like it’s from a legitimate company but seems suspicious, do not respond to it. Instead, contact the company directly to inquire about the email. Do not use the contact information provided in the email, as it may be fake.
Keep a Clean Machine
Keeping your computer current with the latest OS, software, antivirus, and malware protection is important for protecting yourself from phishing attacks. These security measures will help protect your computer from being infected with malware, which can be used to capture your personal information.
In addition, it’s important to keep your web browser and email client up to date. These updates often include security patches that can help protect you from phishing attacks.
Phishing attacks are becoming more sophisticated, but there are steps you can take to protect yourself. Email filters can help block malicious emails, and cybersecurity awareness can help you spot suspicious emails. When browsing the internet, pay attention to website URLs and security certificates. And be wary of any email that requests personal information or seems suspicious in any way. By following these simple tips, you can help keep yourself safe from phishing attacks.
|
<urn:uuid:2413db54-da18-4592-8b92-dffee07fb2c4>
|
CC-MAIN-2022-40
|
https://www.digitalboardwalk.com/2022/08/how-to-block-phishing-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00436.warc.gz
|
en
| 0.933424 | 1,052 | 3.1875 | 3 |
Thanks to a continuous onslaught of nation-state cyberattacks, exploited vulnerabilities and ransomware, the term Zero Trust has been thrust into the mainstream, but the term isn’t new.
In fact, it’s about a decade old, but the ideas and concepts behind the term are even older.
However, there isn’t one Zero Trust solution or any one piece of software or hardware that defines it. Rather, it’s an IT security model and a concept that helps harden IT networks and prevents the bad guys from doing damage even if they successfully infiltrate your network.
What is Zero Trust?
Zero Trust is based on the idea that IT networks are inherently insecure and that the network has already been compromised. Users and devices – regardless of identity or legitimacy – are not to be trusted by default.
“Therefore, you need to build trust from the ground up,” says Brian Foster, vice president of product at cybersecurity firm ReliaQuest.
The concept was initially developed as a response to devices and networks being hacked by bad actors, who then moved laterally with ease because of a false sense of security, Foster says.
Essentially, a Zero Trust architecture makes it extremely hard for any user or device to do things they aren’t supposed to do since users – both legitimate and malicious – are treated with the same level of scrutiny.
Now, the concept is being adopted by organizations everywhere, including at the highest levels of the U.S. government.
According to Microsoft, Zero Trust is now the top security priority, and 90% of security decision makers are in the process of implementing the concept across their IT environments.
The benefits of Zero Trust are clear: it provides stronger overall security and leads to better cybersecurity hygiene by focusing on role-based access, risk-based identity assignment and micro-segmentation within a network, says Charles Griffiths, head of IT operations at U.K.-based AAG IT Services.
How do you implement it?
Adopting a Zero Trust architecture takes several key steps, many of which most IT admins should already be taking, including control over identities, devices, applications, data, infrastructure, and networks, according to Griffiths.
“Zero Trust is not a single product or appliance to buy, but an ideology of security. It involves pulling the traditional perimeter back and combining traditional network access controls with user behavior analytics (UBA) and micro-segmentation,” he says.
Identity management is a fundamental part of a Zero Trust architecture since they are the basis of verifying users before they can access systems. Griffiths suggests implementing multi-factor authentication across the entire organization to help ensure any and all activity is legitimate and authentic.
In addition to a strong password policy and multi-factor authentication via a mobile phone, smart card, security key or app, continuous authentication confirms identity in real time and helps prevent attacks that have been successful in the past because it doesn’t rely on static data, Griffiths says.
Instead of relying on passwords, which security experts say are becoming less secure as hacking methods evolve, Griffiths says organizations can use hardware-based authentication keys, which provide a convenient method of authentication and can connect as a USB HID device or via NFC.
By segmenting networks and implementing network controls, administrators can better manage traffic for each department and application. Micro-segmentation allows for finer levels of granular controls within the firewall or perimeter to limit access, protect against DDOS attacks and more.
Secure every device
Today, every employee has at least one personal device they bring to work, and that device may be connected to the organization's network. If those devices aren't scrutinized like company-issued devices, you open yourself up to compromise. Every device should be viewed as a potential threat and should have limited access to sensitive resources.
Be specific with user roles
Roles and access should be as granular as possible, Griffiths says, and each role should have clear definitions on what they are allowed to do.
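To make the idea concrete, the toy Python sketch below shows what a Zero Trust policy decision might look like: every request is checked for identity proof, device posture, and role-based entitlement before anything is granted. All role names and resources here are invented for illustration; real policy engines evaluate far richer signals.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    mfa_verified: bool       # identity proven via a second factor
    device_managed: bool     # company-enrolled or otherwise vetted device

# Illustrative role-to-resource entitlements; real policies are far richer.
POLICY = {
    "finance-analyst": {"erp", "reporting"},
    "helpdesk": {"ticketing"},
}

def authorize(req: AccessRequest, resource: str) -> bool:
    """Verify every request explicitly; grant nothing by default."""
    if not req.mfa_verified:      # no proof of identity, no access
        return False
    if not req.device_managed:    # untrusted devices are treated as hostile
        return False
    return resource in POLICY.get(req.user_role, set())  # least privilege

req = AccessRequest(user_role="helpdesk", mfa_verified=True, device_managed=True)
print(authorize(req, "ticketing"))  # True
print(authorize(req, "erp"))        # False: outside this role's entitlements
```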
Monitor traffic everywhere with Zero Trust
Traditional IT architecture allows for monitoring of user traffic coming in and out of the network, but remote work is now forcing organizations to monitor traffic on user devices wherever they are, says Michael Wilson, chief technical officer at managed security services provider Nuspire.
Wilson equates Zero Trust to moving from castles to high-tech body armor. They can work together, but are oftentimes at odds.
"While we will always have castles, we should no longer implicitly trust anyone inside the castle just because they are in it," Wilson said. "These services/systems have to be rearchitected to no longer assume trust because someone is on the network or at a specific location. This is why having a strong identity program and technology to support it is a prerequisite to a true Zero Trust approach."
|
<urn:uuid:d4bd1bec-9c0c-4fe6-9b5d-0749baf513c5>
|
CC-MAIN-2022-40
|
https://mytechdecisions.com/it-infrastructure/what-is-zero-trust-and-how-do-you-implement-it/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00636.warc.gz
|
en
| 0.951503 | 1,019 | 2.875 | 3 |
What is Data Mining?
According to Berry and Linoff (Mastering Data Mining, John Wiley & Sons Inc. 2000), "Data mining is the process of exploration and analysis of large amounts of data in order to discover meaningful patterns and rules." This is often achieved by automatic and semi-automatic means. Data mining came out of the field of statistics and Artificial Intelligence (AI), with a bit of database management thrown into the mix. The term predictive analytics, while a bit of a misnomer, is a loose term used to describe advanced modeling. People now use the two terms interchangeably, or sometimes refer to simple data profiling as data mining – a historical misnomer, but now seemingly acceptable.
The goal of the data mining analysis is either classification or prediction. In classification, the idea is to predict into which classes (i.e. discrete groups) data might fall. For example, a marketer might be interested in who will respond vs. who won't respond to a promotion. These are two classes. In prediction, the idea is to predict the value of a continuous variable.
Common algorithms used in data mining include:
- Classification Trees. This is a popular data mining technique that is used to classify a dependent categorical variable based on measurements of one or more predictor variables. The result is a tree with nodes and links between the nodes that can be read to form if-then rules.
- Logistic Regression. This is a statistical technique that is a variant of standard regression but extends the concept to deal with classification. It produces a formula that predicts the probability of the occurrence as a function of the independent variables.
- Neural Networks. This is a generic software algorithm that is modeled after the parallel architecture of animal brains. The network consists of input nodes, hidden layers, and output nodes. Each of the units is assigned a weight. Data is fed to the input nodes, and by a system of trial and error the algorithm adjusts the weights until it meets a certain stopping criterion. Some people have likened this to a black-box approach.
- Clustering Techniques. For example, K-nearest neighbors, a technique that identifies groups of similar records. The K-nearest neighbor technique calculates the distances between a record and points in the historical (training) data. It then assigns the record to the class of its nearest neighbors in the dataset.
Let's examine a classification tree example. Consider the case of a telephone company that wants to determine which residential customers are likely to disconnect their service. They have on hand information consisting of the following attributes:
- How long the person has had the service
- How much they spend on the service
- Whether they have had problems with the service
- Whether they have the best calling plan for their needs
- Where they live
- How old they are
- Whether they have other services bundled together with their calling plan
- Competitive information concerning other carriers' plans
- AND whether they still have the service or have disconnected it

Of course, there can be many more attributes than this. The last attribute is the outcome variable; this is what the software will use to classify the customers into one of two groups – perhaps called "stayers" and "flight risks."
The data set is broken into a training set and a test set. The training data consists of observations (called attributes) and an outcome variable (binary in the case of a classification model) – in this case, the stayers or the flight risks. The algorithm is run over the training data and comes up with a tree that can be read like a series of rules. For example,
If the customer has been with the company >10 years and they are over 55 years old, then they are stayers
These rules are then run over the test data in order to determine how good the model is on "new data". Accuracy measures are provided for the model. For example, a popular technique is the confusion matrix. This matrix is a table that provides information about how many cases were correctly vs. incorrectly classified. If the model looks good, it can be deployed on other data as it becomes available. Based on the model, the company might decide, for example, to send out special offers to those customers that they think are flight risks.
This technique is a type of "supervised learning" where the outcome is known and training data is used.
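For readers who want to see these steps in code, here is a compact sketch using Python's scikit-learn library. The churn figures are fabricated for illustration, but the workflow mirrors the walkthrough above: split the data into training and test sets, fit a classification tree, and evaluate it with a confusion matrix.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

# Fabricated churn records: tenure in years, monthly spend, service problems,
# and the outcome variable (1 = flight risk, 0 = stayer).
df = pd.DataFrame({
    "tenure":   [12, 1, 8, 2, 15, 3, 11, 1],
    "spend":    [40, 80, 55, 90, 35, 85, 50, 95],
    "problems": [0, 3, 1, 4, 0, 2, 1, 5],
    "churned":  [0, 1, 0, 1, 0, 1, 0, 1],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["tenure", "spend", "problems"]], df["churned"],
    test_size=0.25, random_state=42)

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Rows are actual classes, columns are predictions: the diagonal counts
# the correctly classified cases.
print(confusion_matrix(y_test, tree.predict(X_test)))
```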
It's Not Just About The Algorithms
Clearly, a number of steps need to be undertaken in order to make the process work. Access to clean data is required, and issues like missing data need to be dealt with. For example, how should a blank field be handled? Outliers? Does the data need to be normalized – transformed to make sense with the rest of the data? Additionally, sound analytical techniques call for data to be explored before starting to run the algorithms. This may include visualization and simple statistical analysis. In this way, the analyst can determine whether all attributes need to be used and also come up with some hypotheses about the data.
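To make the preparation stage concrete, here is a small, hypothetical pandas sketch covering a blank field, an outlier cap, and min-max normalization. The values and thresholds are invented; real projects would choose imputation and normalization strategies only after exploring the actual data.

```python
import pandas as pd

raw = pd.DataFrame({
    "age":   [34, None, 52, 41],          # one blank field to handle
    "spend": [40.0, 85.0, 900.0, 55.0],   # 900 looks like an outlier
})

clean = raw.copy()
clean["age"] = clean["age"].fillna(clean["age"].median())  # impute blanks

# Cap extreme values, then min-max normalize so attributes share a scale.
clean["spend"] = clean["spend"].clip(upper=clean["spend"].quantile(0.95))
clean["spend"] = (clean["spend"] - clean["spend"].min()) / (
    clean["spend"].max() - clean["spend"].min())

print(clean)
```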
After the analyst is comfortable with the model results, it needs to be deployed. This means that there needs to be real data available in company systems (not the training or test data) that the model can actually run against. Additionally, some sort of process needs to be put in place both to schedule when the model runs and to deal with the results. Vendors such as SPSS and SAS have the right philosophy about all of this. For more on SPSS, take a look at the article entitled "Empowering Business Analysts with Predictive Analytics" in this newsletter.
A Final Word
|
<urn:uuid:430a1039-8762-4f95-a589-e9dc5660da86>
|
CC-MAIN-2022-40
|
https://hurwitz.com/diving-into-data-mining/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00636.warc.gz
|
en
| 0.94526 | 1,152 | 3.375 | 3 |
In new legislation, California decided to ban easy to guess, default passwords.
The bill entitled SB-327, or Information Privacy: Connected Devices demands that electronics manufacturers in California equip their products with “reasonable” security features.
What does this mean practically for users?
All those generic passwords such as "Admin" and "Password" will be prohibited. Starting in 2020, when the law comes into effect, computers, smartphones, tablets, and the rest will either require users to create a unique password during configuration or will come preloaded with a complex, random password.
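Mechanically, generating a unique factory credential per device is straightforward. The Python sketch below, using the standard library's secrets module, is one illustrative way a manufacturer might do it; the alphabet and length are arbitrary choices.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def factory_password(length: int = 16) -> str:
    """Generate a unique random credential for each device on the line."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# No two units ship with the same "admin/admin" default.
print(factory_password())  # e.g. 'q3VZ8kD1tYw0LmNs'
```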
Solid Steps for a Real Problem
The law is the State of California's effort to deal with a serious problem facing the cybersphere: negligence in security configurations for password-based systems. Sloppy security on the millions of new consumer devices sold every year creates a vast Internet of Things (IoT) connecting both corporate and home wireless networks. Because so many of these devices are wide-open targets for hackers thanks to embarrassingly simple passwords, they endanger the entire community of users. SB-327 asserts that at least part of the responsibility falls on manufacturers, as leaving users the easy option of keeping default passwords encourages this trend.
A Milestone for Digital Security
It's important to highlight how much of a giant leap SB-327 is for cyber regulation. Other laws, such as GDPR and New York's DFS Cyber Regulations, have almost uniformly consisted of guidelines – some of them admittedly quite strict – for organizations handling sensitive data. Rarely if ever have specific protocols been instituted on how a given security platform must be used. The new California bill, on the other hand, recognizes the threat to the digital sphere posed by specific current practices and takes measures to correct them. Furthermore, the law places responsibility for those measures at the feet of manufacturers – and enforces them with harsh penalties.
The fact that SB-327 chose to single out passwords was a wise choice.
In the words of Bruce Schneier:
“Hooray for doing something, but it’s a small piece of a very large problem”
After years of a slow and steady decline, passwords have become one of the single biggest contributing factors to the rise in data breaches over the past several years. California’s new law is another nail in the coffin for the obsolete password.
Well Intentioned, Yes. But Effective…?
To quote the Register
"It's good news, but overall a wasted opportunity" – Kieren McCarthy
The new bill shows the State of California has its head in the right place addressing the problem of weak authentications.
But there is one important question users need to ask about the efficacy of SB-327: what problems will it actually solve?
Granted, the bill does fight some of the more common threats facing password authentication. Low-level attacks such as password "spraying" (a form of brute-force hack in which the malicious actor attempts a single password against many accounts before moving on to a second password) and other automated attacks can often be prevented with complex passwords. But SB-327 fails to go to the root of the problem.
The issue is not just password strength. The issue is the password itself.
Sooner or later, the industry will have to wake up to the reality of password authentication. Passwords are inherently weak, hinging the security of entire networks on secrets that are vulnerable to exposure and that users must remember and manage. Passwords promote a slew of bad security practices, from creating passwords that are easily cracked to storing them insecurely. Furthermore, more sophisticated hacks like social engineering are geared toward the vulnerabilities of password authentication. California's legislators did not even begin to address these threats.
Password-less as the New Paradigm
When a lock is faulty, replacing the key is not the solution. Replacing the lock is.
Taking on the vulnerabilities presented by passwords will require a fundamental shift in security standards. Out of band, password-less authentication is the technology that will once and for all do away with the security challenges facing digital identities today, leaving users with solutions that are both safer and more user friendly.
|
<urn:uuid:051b7248-41b7-4ba4-9c4f-e2eed87178e9>
|
CC-MAIN-2022-40
|
https://doubleoctopus.com/blog/general/california-weak-password-ban/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00636.warc.gz
|
en
| 0.93001 | 866 | 2.640625 | 3 |
The Associated Press (via Yahoo) writes that IP addresses are to be considered as Personal Information according to the man in charge of the EU's group of data privacy regulators.
IP addresses allow companies such as Google or Yahoo to identify the geographical location of a computer and this can be used to provide with more targeted and pertinent advertising and content.
Peter Scharr, intervening at a European Parliament hearing, said that when someone can be identified by an IP address, then it has to be regarded as personal data.
But Google says that the IP address only identifies the location of the computer, not who is using it.
An analogy could be that of your landline phone number, which can reveal the exact location of the phone but cannot tell you exactly who is using it.
Personal data like IP addresses are crucial for marketers and search engines who can narrow down the type of consumer they're targeting - the more data they have, the more valuable it becomes as it gives a sharper, more detailed profile of the individual.
An interesting development is the emergence of tools which not only tie the user to the phone but also provide the service providers with an approximate location of the user.
Apple's iPhone, for example, uses a triangulation method to generate a pseudo-GPS feature which gives you a rough idea of where you are.
|
<urn:uuid:2c5508cc-e315-4ded-9c26-c349b0a8f05b>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/2008/01/24/your-ip-address-private-says-eu/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00636.warc.gz
|
en
| 0.950815 | 269 | 2.859375 | 3 |
We look at how the world is moving towards electric vehicles and how the UK plans to use this industry to reach its net zero goals by 2050.
We take a closer look at the important role technology is expected to play in reaching the world’s net zero goals in the next few years.
We look back at the last two weeks of the COP26 conference and what has come out of the negotiations.
The UK announced at COP26 that it plans to ban the sale of gasoline and diesel vehicles by 2040.
Science and innovation are an essential part of finding solutions to limit the global temperature rise to 1.5°C. Day 9 of COP26 showcased the science and innovation that can deliver urgent climate action, and several big announcements were made throughout the day that could positively impact the world's net zero goals.
After a day full of pledges from large companies and countries across the world, leaders turned their attention to finance on the 3rd day of the UN climate conference, COP26.
Prime Minister Boris Johnson launches an international plan to deliver affordable and clean technology everywhere by 2030 at COP26.
Billionaire Founder of Amazon, Jeff Bezos, took to the stage on the 2nd of November at COP26 to pledge $2 billion toward nature conservation and transforming food systems.
|
<urn:uuid:2ee05d78-bbbc-4125-baa5-4749bf157561>
|
CC-MAIN-2022-40
|
https://www.tbtech.co/tag/cop26/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00636.warc.gz
|
en
| 0.944843 | 263 | 2.6875 | 3 |
A Black Box for Everything
In the aftermath of a plane crash, one of the first things accident investigators look for is the flight recorder. Inaptly named the "black box", it is neither a box, nor is it black – a very bright shade of orange is much easier to locate. Recording on a virtually continuous loop, it presumably holds a record of events for the last critical moments of the flight.
Beginning in the 1990s, the magnetic tape units originally used in flight recorders were replaced by solid state memory boards. Today, the Boeing 787 Dreamliner can log 146,000 separate flight parameters, resulting in several terabytes of data per flight. There is a separate recorder which preserves the last two hours of cockpit audio (crew member expletives included). All this data requires sophisticated analysis software.
Automobile dash cameras, building surveillance cameras and law enforcement body-cams are further examples of how pre-event data logging has infiltrated our lives. Thieves have been apprehended, legal issues have been resolved, and car vandals have been brought to justice using this recorded information. Imagine what a future civilization might think of us if they unearthed all this data.
While it might be satisfying to show the police a video of the person who keyed your expensive car, there are folks who are taking a more expansive view of so-called “Black Box” recording. Climate change is yet another of those divisive issues of our times, and there are those on one side who want to preserve a clear picture of what we humans were doing leading up to what they believe is the inevitable crash of the earth as we know it.
A consortium of like-minded data researchers, architects and artists are putting together a repository that will sit in environmentally and geopolitically safe Tasmania beginning later this year. The original design is intended to last for 50 years, which is longer than the climate clock implies we have left. Work is already underway to extend that life just in case earth outlasts the clock. Three-inch-thick steel walls are meant to protect the archives from whatever might destroy earth, although allowing for visitors to access the information stored in the box remains unsolved.
John Wooden once wrote “The true test of a man’s character is what he does when no one is watching.” Earth’s flight recorder will hold humans accountable for climate change by documenting news articles, scientific journals, tweets, and other assorted records. Land and ocean temperatures will be charted, along with atmospheric greenhouse gas concentrations and biodiversity losses. As a measure of our changing priorities, military spending will be included. It is hoped that an archive of leaders’ climate-conserving efforts will inspire more. Expletives will likely be omitted to conserve storage space.
Some scientists see very little evidence that global warming will result in human extinction, but others remain concerned. How this story ends is anyone’s guess, but all of the events, actions and in-actions will be faithfully recorded in Tasmania.
As pilots say, brace for impact.
Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetworkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.
|
<urn:uuid:79ee9685-98c9-43e3-bc71-a123507e8806>
|
CC-MAIN-2022-40
|
https://www.networkdatapedia.com/post/a-black-box-for-everything
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00636.warc.gz
|
en
| 0.947832 | 781 | 2.703125 | 3 |
With talk of a growing skills gap that is causing some employers to say there aren’t enough workers qualified to fill available positions, educators have begun talking about how they can better prepare students for future careers. While the conversation is ongoing, companies, schools and government officials are beginning to introduce career-readiness programs and new technology to help students gain the expertise they will need to be successful in their post-graduate jobs.
Identifying the skills gap
A McKinsey survey conducted at the end of 2012 found that nearly 40 percent of employers cited a skills shortage as the top reason for entry-level vacancies. Thirty-six percent of employers also said that graduates’ lack of skills caused “significant problems in terms of cost, quality and time.” But this same survey found that 72 percent of schools believed that when their students graduated, they were prepared for work. This shows a major gap between educational organizations and employers – with students caught in the middle.
A recent Twitter discussion, moderated by Gigaom’s Ki Mae Heussner, became a forum for talking about how emerging technologies can help students better prepare for careers. The primary contributors to the discussion were McGraw Hill’s SVP of College and Career Readiness, Jeff Livingston, and the principal of New York’s Pathways in Technology Early College High School, Rashid Davis. The chat revealed several major points:
- Science, technology, engineering and mathematics (STEM) skills are in increasing demand in the workforce, showing that there’s a disconnect between what students learn and what employers need students to learn.
- Some majors don’t translate well into the workforce, which means that those students aren’t graduating with skills that will make them qualified for work.
- While STEM skills are valuable, students need to be well-rounded. Learning math and science is important, but so is enhancing writing, research and critical thinking abilities that come from the liberal arts.
- Students should seek out – and schools should support – experience in the workplace, and stronger connections with professionals, through internships, mentoring, co-ops, etc.
Partnerships, technology offer solutions
The skills gap is most strongly felt in jobs that require a high level of skill in math, science and technology. In response, many states have begun to focus on these areas.
The Governor of Wisconsin recently announced that he plans to invest $100 million into programs to develop the state’s workforce, including investments in Wisconsin’s technical school system. A survey conducted by GE found that business executives believe one of their top priorities is connecting with educators so that students come out prepared to contribute in the fast-paced business world.
In Texas, reported Ben Philpott on KUT Austin, lawmakers are trying to pass legislation that would support offering more classes in math and science and would encourage students to pursue technical or vocational training programs. And in other states, like New York, Illinois, Maine and Massachusetts, companies and schools are teaming up to ensure that skilled workers are graduating with relevant skill sets.
The Twitter chat suggested that new adaptive learning technology and analytics platforms could also personalize the educational experience and help teachers prepare students for specific career paths. Some classroom software, like Faronics Insight, helps teachers take a more personal approach to learning. Students can ask teachers questions directly through Insight on classroom computers, and educators can also initiate individual or group discussions so they can manage their classroom while allowing students to pursue different topics.
With more groups helping students prepare for careers, and emerging technology enabling innovative learning methodologies, many are already finding ways to close the skills gap.
“Choices should be made early and often. Change your career path frequently. The point is to have a destination in mind,” concluded Jeff Livingston at the end of the Twitter discussion.
How do you think schools could better prepare young adults for the workforce? Please share your thoughts below!
|
<urn:uuid:5b14b6fd-eb4b-411a-af9a-2fa6c9eb8fa2>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/skills-gap-prompts-schools-companies-to-better-prepare-students-for-work
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00636.warc.gz
|
en
| 0.963766 | 808 | 3.390625 | 3 |
Constraint satisfaction problems (CSPs) are mathematical problems defined as a set of objects whose state must satisfy one or more constraints.
- Map coloring problems — Assign colors to regions (US states, for instance) on a map such that no adjacent regions have the same color.
- Logical puzzles such as Sudoku — Complete a grid by entering digits 1 through 9 in each square such that each digit appears once and only once in each column, row, or subgrid.
- Traveling salesman problem — Plan the most efficient route through a series of cities so that each is visited once and only once.
In industry, problems of scheduling and resource allocation can be formulated as constraint satisfaction problems.
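To make the formulation concrete, here is a small Python sketch that solves a map-coloring CSP by backtracking search over a toy adjacency graph of a few US states. The constraint check simply rejects any color already used by a neighbor; the state list is an illustrative subset.

```python
# Illustrative adjacency for a handful of western US states.
NEIGHBORS = {
    "CA": ["OR", "NV", "AZ"],
    "OR": ["CA", "NV", "WA"],
    "NV": ["CA", "OR", "AZ"],
    "AZ": ["CA", "NV"],
    "WA": ["OR"],
}
COLORS = ["red", "green", "blue"]

def color_map(assignment=None):
    """Backtracking search: extend a partial coloring one state at a time."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(NEIGHBORS):
        return assignment  # every state satisfies every constraint
    state = next(s for s in NEIGHBORS if s not in assignment)
    for color in COLORS:
        # The constraint: no adjacent states may share a color.
        if all(assignment.get(n) != color for n in NEIGHBORS[state]):
            assignment[state] = color
            if color_map(assignment):
                return assignment
            del assignment[state]  # dead end: undo and try the next color
    return None  # no color works; trigger backtracking in the caller

print(color_map())  # e.g. {'CA': 'red', 'OR': 'green', 'NV': 'blue', ...}
```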
|
<urn:uuid:a8552a03-a621-47e5-ab2c-94d9abffcdef>
|
CC-MAIN-2022-40
|
https://support.dwavesys.com/hc/en-us/articles/360002750294-What-Is-a-Constraint-Satisfaction-Problem-
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00036.warc.gz
|
en
| 0.912887 | 149 | 3.625 | 4 |
New logical technologies are helping create operational efficiencies at all layers of the data center model. New terms and technological concepts are born because of gaps in the cloud deployment model.
This has led to the introduction of software-defined technologies (SDx), which abstract a number of different services to improve cloud and data center performance. But this also causes a bit of confusion. Where does this technology fit in? Is it really complicated? What does it really all mean to me?
To help simplify the many facets of the software-defined revolution, here is your SDx dictionary, which provides explanations and examples of the many ways in which software is redefining the ways data center and cloud infrastructure is managed.
Software-Defined Networking (SDN)
Because we have so many new connection points, it became necessary to create a better system to help control the flow of traffic. Traditional networking equipment focused too much on the physical layer, where connections were required to accomplish the job. When cloud became a more widely used platform, it became necessary to abstract that physical layer. Now, we're capable of controlling traffic that traverses the WAN entirely at the software layer. This means network automation, optimization, and efficiency are no longer dependent on the physical infrastructure. VMware's NSX, for example, creates a new model for how network traffic is controlled at the virtual layer. This introduces the capability to program, provision, and better manage both virtual and physical resources within the environment.
It's important to note that SDN is happening on the physical layer as well. Cisco's NX-OS creates a modular, building-block approach to the networking layer. Deployed on the entire switching stack, this networking operating system controls resiliency, virtualization, efficiency, and even extensibility, all at the logical layer. This type of intelligence can help dynamically route traffic during peak times or even during outages. Not only is the physical layer utilized to the fullest efficiency, but administrators are also able to create network flow automation policies to ensure continuous availability for both critical and standard workloads.
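Conceptually, an SDN controller expresses this traffic control as match-action rules pushed down to switches. The simplified Python sketch below illustrates the idea with exact-match rules only; real protocols such as OpenFlow support far richer matching, and all field names and actions here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict     # exact-match header fields, e.g. {"dst_port": 22}
    action: str     # "drop", "mirror:ids", "forward:uplink", ...
    priority: int   # higher-priority rules are consulted first

# Rules a controller might push down to a switch; names are illustrative.
FLOW_TABLE = [
    FlowRule({"dst_port": 22}, "drop", priority=100),   # block inbound SSH
    FlowRule({"vlan": 20}, "mirror:ids", priority=50),  # copy VLAN 20 to an IDS
    FlowRule({}, "forward:uplink", priority=0),         # default route
]

def process(packet: dict) -> str:
    """The data plane just matches and acts; policy lives in the rules."""
    for rule in sorted(FLOW_TABLE, key=lambda r: -r.priority):
        if all(packet.get(k) == v for k, v in rule.match.items()):
            return rule.action
    return "drop"  # unreachable here thanks to the catch-all rule

print(process({"dst_port": 22, "vlan": 20}))   # -> 'drop' (highest priority wins)
print(process({"dst_port": 443, "vlan": 20}))  # -> 'mirror:ids'
```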
Software-Defined Storage
This has become a very interesting approach to controlling the storage layer. Much like servers and desktops, storage has experienced a bit of a physical infrastructure boom. There had to come a point where storage management became even more efficient. With that came the concept of software-defined storage. This is a virtual layer that sits in front of all storage components to control and distribute incoming requests to the appropriate storage pool. Atlantis ILIO USX, for example, creates a virtual layer where any storage controller can be inserted into the pool. With that, you can point DAS, flash, SSD, spinning disk, and even RAM to the USX appliance as storage pool repositories.
From there, the software-defined storage system will intelligently push appropriate traffic to the appropriate pool. For example, archive data might be sent to less expensive storage while VDI requests are sent to a flash array. Similarly, VMware’s Virtual SAN, aims to aggregate both compute and storage resources directly from VMware vSphere hosts to create a simpler and better managed infrastructure. VMware introduces Storage Policy Based Management (SPBM) where administrators can now create intelligent storage policies aimed at availability and the enhancement of other virtual services. In creating that virtual layer, storage provisioning, scaling, and performance become direct benefits for the entire virtual infrastructure.
Software-Defined Security
The concept of software-defined security falls directly in line with next-generation security technologies. Traditional security is simply not enough for today's diverse infrastructure. The logical layer in the security realm was created to address new challenges around data in the cloud and more data within the actual data center. Checkpoint's Virtual Appliance for Amazon Web Services helps create a direct software-defined security extension from a primary infrastructure directly into a cloud environment. This means utilizing advanced features spanning an entire WAN infrastructure, including IPS, access controls, DLP, and unified security management.
Similarly, Palo Alto completely abstracted the security layer with their next-generation security operating system, PAN-OS. These virtual appliances can sit anywhere within the data center to process a variety of security requests. With an intelligent security operating system, administrators are able to utilize next-generation firewall capabilities, such as dynamic address groups, complete virtual machine monitoring, the creation of security policies that instantly sync with virtual workload creation, and a unified security management platform.
Software-Defined Data Center
This is a very important concept to understand since many data center and infrastructure shops are adopting the technology. The data center has truly become the home of all modern technologies. There is more data being pushed through a data center platform than ever before. To help control critical resources at all layers, data center controls needed to be abstracted. This happened at the virtual as well as at the infrastructure layer. VMware’s push around the software-defined data center describes an environment capable of robust performance while maintaining very high resiliency.
Effectively, they strive to completely unify the entire data center stack into the virtual layer to control network, storage, compute, and even management. Along the same idea of the software-defined data center comes the very powerful technology of a data center operating system. The IO.OS from IO Data Centers creates a completely logical layer to manage an entire global data center platform. This means complete visibility into a very distributed data center model where controls include integration with big data, various cloud environments, critical APIs, mobile resources, and much more. By creating a software layer to manage the entire data center model, you’re able to create a proactive, intelligent, infrastructure which is capable of real-time visibility and dynamic extensibility.
Software-Defined Infrastructure
Automation, and workflow automation in particular, has become critical to proactively managing the very dynamic nature of the cloud. Software-defined infrastructure takes into account the concept of hardware and software profiles within a converged system. Cisco UCS, for example, allows an organization to create a "follow-the-sun" data center model where resources can be dynamically re-provisioned based on workload, user location, time of day, and much more. This entire process can span a couple of racks, an entire data center, or many globally distributed data center environments. The great part here is that it's all designed around intelligent automation policies. Both physical and virtual resources can be allocated based on a variety of critical needs.
Software-Defined Cloud
Controlling the cloud layer has become very important. Many organizations are actively looking into solutions which allow them to better manage a very heterogeneous cloud infrastructure. Technologies like Citrix's CloudPlatform create a truly unified cloud management infrastructure capable of direct cloud elasticity, control, and optimized efficiency. By creating an application-centric platform, administrators are able to reliably orchestrate cloud workloads which span multiple data centers. This helps create a great turn-key solution which allows your software-defined cloud to span multiple cloud environments – thereby creating a powerful hybrid cloud platform which can still leverage existing resources as well as new ones.
Similarly, OpenStack allows for a powerful private and public cloud control mechanism built on an open-source platform. The technology from OpenStack allows organizations of all sizes to create dynamic private and public cloud connections spanning the globe. In creating their open-source platform, OpenStack aims to simplify the implementation of cloud environments, as well as provide massively scalable solutions which are feature rich at that software-defined cloud layer.
Controlling resources, data flow, and the overall cloud infrastructure required the abstraction of the physical layer. This had to revolve around the entire process – Network, Storage, Compute, and even Data Center. Software-defined technologies directly interact with each other as well as their physical counterparts to create an intelligent environment capable of automation and much greater resiliency.
As cloud and infrastructure multi-tenancy continue to increase, SDx will help proactively control the allocation of critical resources. In many cases, this will dynamically improve both data center performance and the overall user experience. Although there are marketing terms tied to these solutions, remember: there are very real and tangible technologies behind all the buzz. As you build out your next-generation data center model, make sure to look at software-defined technologies and how they can positively impact your overall infrastructure.
|
<urn:uuid:03e82920-4105-4469-aae1-75a46b7f2dc5>
|
CC-MAIN-2022-40
|
https://www.datacenterknowledge.com/archives/2014/02/21/dck-guide-software-defined-technologies/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00036.warc.gz
|
en
| 0.917513 | 1,745 | 2.75 | 3 |
It seems you can barely turn on the news or read headlines from your favorite news site without seeing yet another major corporation falling prey to cyber attacks. Whether they are caused by phishing, data infiltration or even brute force, the barrage of assaults seems never-ending. However, for every organization that is victimized, hundreds if not thousands of others are able to protect themselves and their sensitive data from penetration. Here are tips to prevent some of the most common types of cyber attacks by proactively managing your risk profile.
Spear Phishing and Whaling Attacks
As organizations become more global and attacks become more sophisticated, what used to be relatively simple to spot has become a nightmare even for savvy internet users. Over the last five years, phishing attacks have morphed from poorly spelled email pleas to send money to an overseas prince into highly detailed, realistic-looking requests that appear to come from executives within your organization. Email address masking and other tools that are often utilized by marketers to create a more pleasing customer experience are being leveraged in nefarious ways by individuals who are attempting to defraud your organization. Information gathered from social media and public profiles is leveraged to build a picture of a specific executive or group, and that information is then used in "whaling" attacks – so named because they truly go after the big fish in the sea.
How to prevent spear phishing and whaling attacks:
- Encourage staff members to make their social media profiles private, and be wary of accepting friend requests from individuals they do not know
- Create an educational series to show how these attacks differ from valid communication
- Use up-to-date email filters, anti-phishing tools and utilize active protection at the system network level
- Teach caution as employees click on links embedded in emails
Cross-site scripting (XSS) attacks are some of the wiliest because a user is unlikely to realize that they have even been hacked. Instead of going after the host website itself, these attacks plant snippets of malicious code that run in the visitor's browser when the page loads, often via a comment field or another auto-loading section of the site. The dangerous snippet then harvests the user's login and password information and other personal details, exfiltrating them for later use.
How to prevent cross-site scripting:
- Limit the amount of user-provided data on your websites and web apps to only what is absolutely necessary, and escape whatever user content you do render (see the sketch below)
- Regularly scan your website using a vulnerability scanning tool to look for XSS
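Output encoding complements these steps: when user content must be displayed, escaping it turns would-be script into inert text. A minimal Python sketch, with an invented render helper:

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in a page."""
    return "<p class='comment'>{}</p>".format(html.escape(user_input))

# The injected script is neutralized into harmless text instead of executing.
print(render_comment("<script>steal(document.cookie)</script>"))
# <p class='comment'>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```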
Poor Compliance Behavior
Perhaps one of the easiest ways to maintain cybersecurity within your organization is through continual compliance monitoring and behavioral review. Users tend to reuse the same password on multiple platforms, increasing the chance that there could be a major security breach within your organization. Passwords are often simplistic or easily guessed, especially when cyber attackers leverage social engineering to enhance their knowledge of their prey. According to the Harvard Business Review, vulnerabilities were caused by insiders in more than 60% of the attacks in 2016. This is especially true for industries such as healthcare, financial services, and manufacturing where there are large quantities of valuable intellectual property, personal information and financial assets available for the taking.
How to prevent poor compliance behavior:
- Regularly audit access to key systems, ensuring that access is restricted to individuals who actively need it
- Review compliance guidelines with supervisors and staff on a regular basis
- Require strict password guidelines on a rigorous reset schedule
- Implement log management and active system monitoring to detect intrusions as they’re happening
While no systems are infallible, there are ways to protect your organization from the dangers that are associated with doing business today. Protect your business and your staff with the dedicated support structure of CoreArmor from Coretelligent. Our behavioral monitoring, asset discovery and reporting provide 360 degrees of protection with our Defense-in-Depth (DiD) strategy. Contact us today at 855-841-5888 for the office nearest you, or fill out our online contact form for assistance.
|
<urn:uuid:e92ce7ba-a84c-47fc-98b8-cd85b8a4112c>
|
CC-MAIN-2022-40
|
https://coretelligent.com/insights/most-common-types-of-cyber-attacks-how-to-prevent-them/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00036.warc.gz
|
en
| 0.94412 | 822 | 2.75 | 3 |
What is a NoSQL database?
NoSQL, also referred to as "not only SQL" or "non-SQL," is an approach to database design that enables the storage and querying of data outside the traditional structures found in relational databases. While it can still store the kinds of data found within relational database management systems (RDBMS), it simply stores that data differently than an RDBMS does. The decision to use a relational database versus a non-relational database is largely contextual, and it varies depending on the use case.
Instead of the typical tabular structure of a relational database, NoSQL databases house data within one data structure, such as a JSON document. Since this non-relational database design does not require a schema, it offers rapid scalability to manage large and typically unstructured data sets.
NoSQL is also type of distributed database, which means that information is copied and stored on various servers, which can be remote or local. This ensures availability and reliability of data. If some of the data goes offline, the rest of the database can continue to run.
Today, companies need to manage large data volumes at high speeds and with the ability to scale up quickly to run modern web applications in nearly every industry. In this era of growth within cloud, big data, and mobile and web applications, NoSQL databases provide that speed and scalability, making them a popular choice for their performance and ease of use.
NoSQL vs. SQL
Structured query language (SQL) is commonly referenced in relation to NoSQL. To better understand the difference between NoSQL and SQL, it may help to understand the history of SQL, a programming language used for retrieving specific information from a database.
Before relational databases, companies used a hierarchical database system with a tree-like structure for the data tables. These early database management systems (DBMS) enabled users to organize large quantities of data. However, they were complex, often proprietary to a particular application, and limited in the ways in which they could uncover relationships within the data. These limitations eventually led to the development of relational database management systems, which arranged data in tables. SQL provided an interface to interact with relational data, allowing analysts to connect tables by merging on common fields.
As time passed, the demands for faster and more disparate use of large data sets became increasingly more important for emerging technology, such as e-commerce applications. Programmers needed something more flexible than SQL databases (i.e. relational databases). NoSQL became that alternative.
While NoSQL provided an alternative to SQL, this advancement by no means replaced SQL databases. For example, let's say that you are managing retail orders at a company. In a relational model, individual tables would manage customer data, order data and product data separately, and they would be joined together through a unique, common key, such as a Customer ID or an Order ID. While this is great for storing and retrieving data quickly, it requires significant memory. When you want to add more memory, SQL databases can only scale vertically, not horizontally, which means your ability to add more memory is limited to the hardware you have. The result is that vertical scaling ultimately limits your company’s data storage and retrieval.
In comparison, NoSQL databases are non-relational, which eliminates the need for connecting tables. Their built-in sharding and high availability capabilities ease horizontal scaling. If a single database server is not enough to store all your data or handle all the queries, the workload can be divided across two or more servers, allowing companies to scale their data horizontally.
While each type of database has its own advantages, companies commonly utilize both NoSQL and relational databases in a single application. Today’s cloud providers can support SQL or NoSQL databases. Which database you choose depends on your goals.
For a deeper dive into the differences between the two options, see "SQL vs. NoSQL Databases: What's the Difference?"
Types of NoSQL databases
NoSQL provides other options for organizing data in many ways. By offering diverse data structures, NoSQL can be applied to data analytics, managing big data, social networks, and mobile app development.
A NoSQL database manages information using any of these primary data models:
Key-value store
This is typically considered the simplest form of NoSQL database. This schema-less data model is organized into a dictionary of key-value pairs, where each item has a key and a value. The key could be something similar to a primary key in a SQL database, such as a shopping cart ID, while the value is an array of data, like each individual item in that user’s shopping cart. It’s commonly used for caching and storing user session information, such as shopping carts. However, it’s not ideal when you need to pull multiple records at a time. Redis and Memcached are examples of open-source key-value databases.
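As a minimal sketch of the key-value pattern, the snippet below uses the open-source redis-py client; the host, key name, and cart contents are illustrative assumptions rather than any product's actual schema.

```python
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store a user's shopping cart under a single key; the value is opaque to Redis.
r.set("cart:user:1001", '["sku-123", "sku-456"]')

# Retrieval is a single key lookup: very fast, but you cannot query by value.
print(r.get("cart:user:1001"))
```

Because every read and write is addressed by a key, this pattern scales horizontally with very little coordination, which is exactly why it suits caches and session data.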
Document store
As suggested by the name, document databases store data as documents. They can be helpful in managing semi-structured data, and data are typically stored in JSON, XML, or BSON formats. This keeps the data together when it is used in applications, reducing the amount of translation needed to use the data. Developers also gain more flexibility since data schemas do not need to match across documents (e.g. name vs. first_name). However, this can be problematic for complex transactions, leading to data corruption. Popular use cases of document databases include content management systems and user profiles. An example of a document-oriented database is MongoDB, the database component of the MEAN stack.
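Here is a hedged sketch of the document pattern using the open-source pymongo client; the database, collection, and field names are invented for illustration.

```python
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")
profiles = client["appdb"]["user_profiles"]

# Documents in the same collection do not need to share a schema.
profiles.insert_one({"name": "Ada", "interests": ["databases", "chess"]})
profiles.insert_one({"first_name": "Grace", "city": "Arlington"})

print(profiles.find_one({"name": "Ada"}))
```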
Want to know more about MongoDB? Check out the IBM tutorial on getting started with using IBM Cloud Databases for MongoDB.
Wide-column store
These databases store information in columns, enabling users to access only the specific columns they need without allocating additional memory on irrelevant data. This database tries to solve for the shortcomings of key-value and document stores, but since it can be a more complex system to manage, it is not recommended for newer teams and projects. Apache HBase and Apache Cassandra are examples of open-source, wide-column databases. Apache HBase is built on top of Hadoop Distributed File System and provides a way of storing sparse data sets, which is commonly used in many big data applications. Apache Cassandra, on the other hand, has been designed to manage large amounts of data across multiple servers and clustering that spans multiple data centers. It’s been used for a variety of use cases, such as social networking websites and real-time data analytics.
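The sketch below shows the column-oriented idea with the open-source cassandra-driver; the keyspace, table, and column names are illustrative assumptions.

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("analytics")

# Reads touch only the named columns, not entire rows.
rows = session.execute(
    "SELECT user_id, last_seen FROM user_activity WHERE user_id = %s",
    ("u-1001",),
)
for row in rows:
    print(row.user_id, row.last_seen)
```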
Graph store
This type of database typically houses data from a knowledge graph. Data elements are stored as nodes, edges and properties. Any object, place, or person can be a node. An edge defines the relationship between the nodes. For example, a node could be a client, like IBM, and an agency, like Ogilvy. An edge would categorize the relationship between IBM and Ogilvy as a customer relationship.
Graph databases are used for storing and managing a network of connections between elements within the graph. Neo4j is a Java-based graph database service with an open-source community edition; users can purchase licenses for online backup and high-availability extensions, or a pre-packaged licensed version with backup and extensions included.
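As an illustration only, the IBM and Ogilvy example above could be expressed through Neo4j's Python driver roughly as follows; the connection details and relationship name are assumptions.

```python
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

with driver.session() as session:
    # Nodes carry labels and properties; the edge carries the relationship type.
    session.run(
        "MERGE (c:Client {name: $client}) "
        "MERGE (a:Agency {name: $agency}) "
        "MERGE (a)-[:HAS_CUSTOMER]->(c)",
        client="IBM", agency="Ogilvy",
    )
```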
In-memory store
With this type of database, like IBM solidDB, data resides in the main memory rather than on disk, making data access faster than with conventional, disk-based databases.
Examples of NoSQL databases
Many companies have entered the NoSQL landscape. In addition to those mentioned above, here are some popular NoSQL databases:
- Elasticsearch, a document-based database that includes a full-text search engine.
- Couchbase, a key-value and document database that empowers developers to build responsive and flexible applications for cloud, mobile, and edge computing.
To learn more about the state of databases, see “A Brief Overview of the Database Landscape.”
Advantages of NoSQL
Each type of NoSQL database has strengths that make it better for specific use cases. However, they all share the following advantages for developers, which in turn create the framework to provide better service to customers:
- Cost-effectiveness: It is expensive to maintain high-end, commercial RDBMS. They require the purchase of licenses, trained database managers, and powerful hardware to scale vertically. NoSQL databases allow you to quickly scale horizontally, better allocating resources to minimize costs.
- Flexibility: Horizontal scaling and a flexible data model also mean NoSQL databases can address large volumes of rapidly changing data, making them great for agile development, quick iterations, and frequent code pushes.
- Replication: NoSQL replication functionality copies and stores data across multiple servers. This replication provides data reliability, ensuring access during down time and protecting against data loss if servers go offline.
- Speed: NoSQL enables faster, more agile storage and processing for all users, from developers to sales teams to customers. Speed also makes NoSQL databases generally a better fit for modern, complex web applications, e-commerce sites, or mobile applications.
In a nutshell, NoSQL databases provide high performance, availability, and scalability.
NoSQL use cases
The structure and type of NoSQL database you choose will depend on how your organization plans to use it. Here are some specific uses for various types of NoSQL databases.
- Managing data relationships: Managing the complex aggregation of data and the relationships between these points is typically handled with a graph-based NoSQL database. This includes recommendation engines, knowledge graphs, fraud detection applications, and social networks, where connections are made between people using various data types.
- Low-latency performance: Gaming, home fitness applications, and ad technology all require high throughput for real-time data management. This infrastructure provides the greatest value to the consumer, whether that’s market bidding updates or returning the most relevant ads. Web applications require in-memory NoSQL databases to provide rapid response time and manage spikes in usage without the lag that can come with disk storage.
- Scaling and large data volumes: E-commerce requires the ability to manage huge spikes in usage, whether it’s for a one-day sale or the holiday shopping season. Key-value databases are frequently used in e-commerce applications because their simple structure is easily scaled up during times of heavy traffic. This agility is valuable to gaming, adtech, and Internet of Things (IoT) applications.
Microservices and NoSQL databases
The need for large companies to provide services without latency and to scale more quickly has spurred growth for microservices, which has led companies to examine what type of database to use for different applications.
Companies have found that using a single, relational database for every component of an application has its limitations, especially when better alternatives exist for specific components. Microservices are an attractive option, in part, because they eliminate the need for a single, shared data store for an entire application. Instead, the application has many, loosely coupled and independently deployable services, each with their own data model and database, and integrated via API gateways or an iPaaS.
The pattern of using multiple databases within a single application, also known as polyglot persistence, has helped to create space in the market for NoSQL databases to thrive. Today, developers can leverage the right database for the right microservice without trying to make everything work in the context of a single, relational database.
NoSQL and IBM Cloud
Today, many applications are delivered as services, and those services must be available 24/7, accessible from a wide range of devices, and scaled to what can potentially be millions of users. IBM services and partners address a wide range of data needs so companies can adapt quickly.
NoSQL was created to manage the scale and agility challenges that face modern applications, but the suitability of a database depends on the problem it must solve. SQL and NoSQL are each suited to different use cases, so which tool to use depends more on what you are trying to accomplish. Further, over the past few years, SQL technologies like PostgreSQL have been bridging the gap between NoSQL and SQL by offering JSON support or scale-out capabilities. With IBM Cloud Databases for PostgreSQL, IBM offers enterprise-ready, fully managed PostgreSQL built with native integration into the IBM Cloud.
MongoDB Enterprise Advanced is available as an add-on for IBM Cloud Pak for Data, a fully integrated, multicloud data and AI platform.
To integrate into your existing data management solution for your x86, IBM Power and IBM Z environments, IBM Data Management Platform for MongoDB Enterprise Advanced, offers a modern database platform designed for mission-critical, highly secure, highly available deployments.
IBM Cloudant is a scalable JSON document database optimized for web, mobile, IoT, and serverless applications. The service is compatible with an open source ecosystem that includes Apache CouchDB, PouchDB, and libraries for the most popular web and mobile development stacks.
Sign up for an IBMid and create your IBM Cloud account.
Researchers with the U.S. Geological Survey have been working to create geographic information system software to assist pedestrian evacuation from sudden disasters.
The spate of recent natural disasters, from the 2011 earthquake and tsunami in Tohoku, Japan, to this year’s mudslides in Oso, Wash., have raised concerns – and the odds – that more people will need to evacuate a disaster zone at some point in their future.
Beyond the direct physical threats posed by such catastrophic events, their shifting, unstable geographies present serious challenges for public safety officials directing evacuees to safety. A path to safety can turn into a barricade – or worse – in an instant.
To confront these scenarios, researchers and developers working with the U.S. Geological Survey have been developing geographic information system software to assist pedestrian evacuation from disaster zones.
Due to the limited notice of the arrival of a hazard, evacuations are typically on foot and across the landscape, according to a research report on the development of the Pedestrian Evacuation Analyst (PEA), a tool for helping people find various routes to safety.
Researchers’ recent efforts have focused on a “path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions and cost surfaces that reflect variations in slope and land cover.”
The tool, which has been implemented for the Advanced version of Esri’s ArcGIS Desktop, is intended to assess evacuees’ exposure to risk and to help communities visualize evacuation scenarios from sudden-onset hazards, such as tsunamis.
“By automating and managing the modeling process, the software allows researchers to concentrate efforts on providing crucial and timely information on community vulnerability to sudden-onset hazards,” according to the report’s authors, Jeanne Jones, Peter Ng and Nathan Wood.
The software can calculate the travel time to safety from any location in a study area, determine the population in the evacuation zone, automate the processing of evacuation procedures and map various populations, including residents, employees and dependent-care facilities.
The model gauges possible evacuation scenarios based on elevation, direction of movement, land cover and travel speed and creates a map showing travel times to safety (a time map) throughout a hazard zone.
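The path distance idea can be sketched in miniature: treat the landscape as a grid whose per-cell crossing cost reflects slope and land cover, then run a shortest-path search outward from all safe cells at once. This toy implementation is an assumption-laden simplification of the published approach, not the USGS code.

```python
import heapq

def travel_time_map(cost, safe_cells):
    """Multi-source Dijkstra over a grid of per-cell traversal costs
    (seconds per cell); returns the time-to-safety for every cell."""
    rows, cols = len(cost), len(cost[0])
    time = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in safe_cells:            # tsunami-safe zones start at zero
        time[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > time[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + cost[nr][nc]  # cost encodes slope and land cover
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return time
```

Re-running the search with cost surfaces scaled for different walking speeds yields the view of the evacuation landscape at different pedestrian travel speeds that the report describes.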
The app also shows a view of the evacuation landscape at different pedestrian travel speeds. Data on the size and location of different populations within the hazard zone can also be integrated with time maps to create tables and graphs of at-risk populations, according to the developers.
Among potential data graphics considered for the app are a chart showing the “percentage of population to reach safety,” which shows the percentage of a population to reach safety before a tsunami, and a “population as a function of travel speed chart,” showing a group’s ability to reach safety at different travel speeds before the arrival of a tsunami.
The initial release of the software features a simplified design geared toward a researcher analyzing a single study area, according to the USGS report. That means future upgrades to the tool might include adding the ability to compare or aggregate multiple scenarios, including expanding the time map.
“It is hoped that this tool will assist hazard research both within the U.S. Geological Survey and in the wider research community, providing valuable information on strategies for minimizing loss of life in catastrophic events,” according to the paper.
A wave of ransomware attacks on public schools in the southern U.S. state of Louisiana has prompted an unusual response, as the governor declared an official emergency to kick-start a coordinated response by several state-level law-enforcement and technology agencies.
This latest ransomware attack, which locks up the data on computer systems and only provides the key to free up the data if the victim pays an online ransom, continues a global wave of similar attacks that lately has targeted municipal governments as well as the healthcare and manufacturing industries.
A new trend joins the existing ones
The incident reflects two important cybersecurity trends as well as one notable new one, namely:
- Cybercriminals (in some cases helped by state-actor sponsors) are largely focusing their ransomware efforts on public institutions and private-sector corporations. Their rationale is obvious: these larger targets not only are typically in a better position to pay the extortion to restore access to critical data, but in many cases have other incentives to cave to the criminals’ demands. In healthcare, patient lives may be at stake. In manufacturing, downtime can result in daily losses running to tens or hundreds of thousands of dollars. In the public sector, officials face embarrassment and voter outrage if they do not respond swiftly and effectively to restore citizen-facing services and the education of children, both of which are increasingly reliant on online applications. Municipal governments across the US especially have garnered humiliating headlines in recent months for being caught unprepared for expensive, destructive ransomware attacks.
- Despite having presumably better tech skills and resources than consumers, many businesses and public institutions remain largely unprepared in both defenses and responses to ransomware attacks. Cybersecurity best practices here are clear (a minimal backup check is sketched after this list):
  - Keep critical systems and applications up-to-date, especially with critical security updates. Many ransomware attacks take advantage of security vulnerabilities that are already known but for which the relevant patches have not yet been installed.
  - Make sure that critical data is backed up regularly to diverse locations and media (including the cloud) and that the backup copies themselves are hardened against malware attacks.
  - Train users to be wary of the most common infection tactics, namely opening malicious links in or attachments to phishing emails.
  - Complement traditional anti-malware defenses with newer technologies like artificial intelligence and machine learning to identify and stop ransomware in response to its behavior rather than an easily camouflaged virus signature.
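To make the backup point concrete, here is a minimal, illustrative check that a backup is both recent and untampered; the policy window and out-of-band hash source are assumptions for the sketch.

```python
import hashlib
import os
import time

MAX_AGE_HOURS = 24  # illustrative policy: at least one backup per day

def verify_backup(path, expected_sha256=None):
    """Check that a backup file exists, is recent, and (optionally)
    matches a known-good hash stored out of band."""
    age_hours = (time.time() - os.path.getmtime(path)) / 3600
    if age_hours > MAX_AGE_HOURS:
        return False, f"backup is {age_hours:.1f} hours old"
    if expected_sha256:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected_sha256:
            return False, "hash mismatch: possible tampering"
    return True, "ok"
```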
A broad lack of preparedness has left many ransomware victims facing the difficult choice of lengthy operations to restore systems from outdated backups, or ignoring the advice of law enforcement and cybersecurity professionals by paying the ransom and hoping that the promised remedy from the criminals actually works – only for many to discover that it doesn’t at least half of the time.
Having a plan in place like Louisiana’s to coordinate tech and law-enforcement resources is a good idea, at a minimum from a public relations perspective. It remains to be seen how quickly the state will be able to recover its crippled public-school systems’ data and at what cost. In the meantime, the ability to get ahead of the news by firing up a seemingly forceful, well-thought-out plan at least enables the troops to focus more on solving the technical problem and less on dealing with the negative press and political fallout.
The fact remains, though, that without complete, intact and recent backups, the state may not actually be in any better position to resume tech operations in the afflicted schools than the municipal governments of Atlanta, Baltimore, or several cities in Florida, to name a few high-profile ransomware victims.
The attack in Louisiana provides another reminder that ransomware remains one of the cybercriminal underworld’s most lucrative and effective weapons, capable of not only harming public-sector services and private-sector bottom lines, but potentially crimping the careers of political, executive and tech leaders who are caught unawares by an attack.
Having an effective, coordinated response like the state of Louisiana is a good public-facing start, but other important questions remain. How quickly can your tech teams restore your operations in the wake of a ransomware attack? More to the point, what measures are you implementing to detect and prevent ransomware attacks from occurring in the first place?
Judging from the steady drip-drip-drip of news stories from the past year – each outlining yet another devastating ransomware attack – these questions are not going away anytime soon. That is unless organizations start adopting the next-gen strategy of cyber protection, which combines traditional data protection with innovative cybersecurity to ensure the safety, accessibility, privacy, authenticity, and security of data.
This integrated approach, found in business solutions like Acronis Backup, eliminates the defensive gaps created when using a patchwork of several solutions. As a result, organizations become #CyberFit, developing the resiliency to withstand any data loss event.
If you’re ready to start looking for some answers, Acronis can help. You can experience what modern cyber protection looks like with a free 30-day trial of Acronis Backup.
Are your IT needs met using a service provider? You can get the same cyber protection if they have Acronis Cyber Cloud, which delivers reliable backup, fast disaster recovery, and effective anti-malware services through a single platform.
Drug-involved overdose rates and deaths have been increasing at alarming rates across the country, and a majority of those deaths have been attributed to opioids in recent years. Amid the growing crisis, the National Institutes of Health started the Helping to End Addiction Long-Term Initiative (HEAL) to study and address the opioid epidemic. HEAL Director Dr. Rebecca Baker discusses the strides NIH has taken to help advance science for pain management, barriers for substance use disorder treatment and other critical pieces to addressing opioid misuse and addiction.
Episode 15
NIH's HEAL Initiative is taking a multi-pronged approach to addressing pain management and opioid addiction and misuse.
Dr. Rebecca Baker, Director, HEAL Initiative, NIH
Modern industrial and critical infrastructure organizations rely on the operational technology (OT) environment to produce their goods and services. Beyond traditional information technology (IT) operations that use servers, routers, PCs and switches, these organizations also rely on OT, such as programmable logic controllers (PLCs), distributed control systems (DCSs) and human-machine interfaces (HMIs) to run physical plants and factories. While OT devices have been in commercial use for more than 50 years, a complete transformation has occurred, changing the way people operate, interact with and secure the OT environment.
Many organizations have opted to converge their IT and OT environments, which can yield many benefits, but these decisions are not without risk. Convergence can produce new attack vectors and attack surfaces, resulting in breaches that start on one side of the converged infrastructure and laterally creep from IT to OT and vice versa.
Threats impacting OT operations are not the same as those that impact IT environments, thus the required security tools and operating policies are different. Deploying the right ones can harness all the benefits of a converged operation without increasing the security exposure profile of the organization. It is important for organizations to establish a carefully planned strategy prior to a convergence initiative, rather than bolting on security as an afterthought.
The air gap argument
What happens when a business makes the strategic decision to not converge their IT and OT operations? Many organizations follow this path for a variety of reasons including strategic, technical and business factors. By keeping IT and OT systems separate, these organizations are implementing an “air gap” security strategy.
Traditionally, air gapping OT operations has been viewed as the gold standard when it comes to industrial and critical infrastructure environments. Operating as a “closed loop” without any interfaces to the outside world, the OT infrastructure is physically sequestered from the external environment. With no data traveling outside the environment, and nothing from outside coming in, this buffer is viewed as the ultimate methodology in securing an organization from security threats.
While the notion of air gapping seems simple enough, it is difficult to maintain. Cutting connections is only part of maintaining a sterile environment. There are many other paths into what is supposedly an isolated infrastructure. For example, true isolation requires eliminating electromagnetic radiation from the devices in an OT infrastructure. This requires the implementation of a massive Faraday cage to eliminate potential leakage vectors.
Over the years, additional attack vectors have been discovered, including FM frequency signals from a computer to a mobile phone, thermal communication channels between air gapped computers, the exploitation of cellular frequencies and near-field communication (NFC) channels. Even LED light pulses among OT equipment have exposed critical systems to malicious activity.
There are many examples of highly-enforced air-gapped facilities that have suffered a breach due to something as simple and innocuous as an external laptop being used as an HMI or a USB thumb drive used for OT purposes. In an average OT environment, upwards of 50% of the infrastructure comprises IT equipment. Organizations with no specific initiatives for IT and OT convergence are among the most at risk because no additional security is implemented beyond air gapping. Securing operations requires more than building a digital moat around the OT infrastructure. Even under the most favorable of circumstances, this isolation is almost impossible to maintain. The introduction of one seemingly harmless variable into a sterile environment can permanently destroy the most stringently enforced air gap. This is known as “accidental convergence.”
While air gapping OT from the “rest of the world” is considered the gold standard in terms of securing OT environments, it is not foolproof. Many organizations are lulled into a false sense of security even though their isolated OT infrastructure is anything but isolated. As a result, it is anything but secure.
Over the last 10 years, increased incidents of attacks have targeted manufacturing and critical infrastructure. Among them are examples of sophisticated attacks that took advantage of “accidental convergence” to infiltrate and gain a foothold within organizations. The accidental convergence of IT and OT environments can occur at any time. Even more concerning is it happens in many organizations without their knowledge and without consequence because of the erroneous belief the air gap is doing the job. After gaining a foothold, these attacks can continue for weeks and even months until a catastrophic failure occurs. The security thought to be in place was an illusion far from the actual reality.
Vigilant planning for cybersecurity ahead
For most industrial organizations, the need for vigilant security is nothing new. Threat vectors and the security forecast is constantly evolving given emerging threats. The convergence of IT and OT operations, whether planned or unplanned, is in almost all cases a reality. Setting the appropriate safeguards will help ensure secured operations for the organization. What should be considered?
Visibility that extends beyond traditional borders
Up until this point, IT security and OT infrastructures inhabited different worlds and the ability to see into either environment was bifurcated along these lines. Modern-day attacks are amorphous and travel across the traditional IT and OT security borders without regard. The ability to track these types of propagation routes requires the de-siloing of traditional visibility parameters. Being able to gain a single view of IT and OT gear, along with the conversations happening between the two worlds, is essential. This “single pane of glass” view can help illuminate potential attack vectors and asset blind spots that may have eluded traditional security strategies.
Deep situational analysis
Whether or not a planned convergence initiative is in the works, it is important to recognize the significant difference in IT and OT lifecycles. While IT infrastructures update regularly, OT infrastructures often persist for years, even decades.
It is not uncommon for an OT infrastructure to be as old as the plant itself. The result is a full inventory of assets, along with maintenance and change management records, may not be current. Therefore, crucial data may be missing, including important details such as model number, location, firmware version, patch level, backplane detail and more. Since it is impossible to secure assets users may not even know exist, having a detailed inventory of OT infrastructure that can be automatically updated as conditions change is essential to protecting industrial operations.
Reduction of cyber risk
When it comes to modern OT environments, cyberthreats can originate from anywhere and travel everywhere. It is important to use as many capabilities and methodologies as possible to find and mitigate exposure risk including:
- Network-based detection that leverages policies for “allow/disallow” capabilities.
- Anomaly-based detection that can find zero-day and targeted attacks and is predicated on baseline behaviors unique to an organization (a minimal sketch follows this list).
- Open-source attack databases that centralize threat intelligence from the greater security community. The notion is more eyes on a potential threat yields a significantly better security response.
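To illustrate the anomaly-based idea, here is a deliberately simple statistical baseline; real products model far richer behavior, and the message counts below are invented.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Baseline from historical per-interval message counts for one device."""
    return mean(samples), stdev(samples)

def is_anomalous(observed, baseline, sigmas=3.0):
    """Flag traffic more than `sigmas` standard deviations from the mean."""
    mu, sd = baseline
    return abs(observed - mu) > sigmas * sd

history = [42, 40, 45, 41, 43, 44, 39, 42]  # illustrative PLC message counts
print(is_anomalous(120, build_baseline(history)))  # True: worth an alert
```

Because the baseline is derived from each organization's own traffic, the same mechanism can flag zero-day activity that no signature database has seen.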
Since most attacks target devices rather than networks, it is essential to use a solution that queries and provides security at the device level. Because OT device protocols can vary, security and health checks must be unique to the make and model of the device, including the device language. These deep checks should not rely on broad scans, but rather on queries that are precise in nature and frequency.
In 2020, more than 18,500 new vulnerabilities were disclosed, affecting OT devices as well as traditional IT assets. However, less than half of these vulnerabilities had an available exploit. Gaining a full awareness of the vulnerabilities relevant to the environment, along with a triaged list of exploitable vulnerabilities and critical assets, will enable users to prioritize the threats with the highest risk score, reducing the overall cyber exposure profile.
Security that contributes to the ecosystem of trust
While it is important to identify and leverage the best IT and OT security products for the environment, it is even more important the products work together. The age-old notion of a layered and cooperative security approach, where point products can work together, creates an impermeable layer — the totality of the solution becomes greater than the sum of its parts.
One such example is an OT security solution that feeds valuable details to a security information and event management (SIEM) system or next-generation firewall (NGFW), providing a new and important view of industrial operations to the security ecosystem. This not only enhances security monitoring and response, but it also unlocks greater value and practical utility from existing security investments.
Cybersecurity solutions that scale
The lineage and approach of the legacy IT and OT teams could not be greater opposites. This polarity goes far beyond product lifecycle timelines.
IT teams are often driven by key performance indicators (KPIs) involving availability, integrity and confidentiality, which result in an “always secured” mentality. OT teams monitor metrics revolving around environment, safety and regulation, resulting in an “always on — set it and forget it” approach.
Today’s need to address security across the entire organization — not just IT or OT — requires these different “upbringings” to come together and find common ground to work together. Failure to do so leaves the organization with a gaping cyber exposure hole that could result in severe consequences if left unaddressed.
Accidental convergence, intentional security
IT and OT teams must find common ground to eliminate the substantial risk factors of both planned and accidental IT/OT convergence. But the mission does not end there. OT security solutions that work in conjunction with IT security solutions can be the catalyst that provides the visibility, security and control needed to thwart new cyberthreats. It also brings these once separate teams together to provide security that every manufacturing, critical infrastructure and industrial organization needs to securely fulfill its core mission.
Getting behind the buzzwords: The true meanings of AI, machine learning, and deep learning, and understanding how they relate to each other.
Algorithmic IT Operations (AIOps) is a new category created by Gartner, primarily to deal with the challenges associated with operating the next-generation of infrastructure. AIOps is quickly making its way into enterprise initiatives — Gartner even estimates that half of all global enterprises will actively be using AIOps by 2020.
The core appeal of AIOps is the “algorithmics.” This implies the use of machine learning to automate tasks and processes that have traditionally required human intervention. Real machine learning for IT Incident Management is readily available today; however, it does not exist in every vendor solution that claims AIOps.
In this upcoming series of blog posts, I will demystify machine learning in the context of IT Incident Management.
Part 1: Behind the Buzzwords
Two of the biggest buzzwords that have crossed from the world of computer science and technology startups to the mainstream media over recent years are "Machine Learning," and "Artificial Intelligence" (AI). Throw in "Deep Learning," and we've got the start of a great game of buzzword bingo. These terms are closely linked and are often used interchangeably, but they aren't the same thing. So what's the difference?
In many fields, definitions are not always as clear as we'd like them to be — they have fuzzy boundaries and the definition can change over time as our understanding of the field and the capabilities within that field develop. AI falls into that category. The relationship between AI, machine learning and deep learning is such that each is a specialisation within the other. AI covers the broadest range of technologies, machine learning is a set of technologies within AI, and deep learning is a specialisation within machine learning.
AI: More Artificial or Intelligent?
One of the most general definitions of AI, taken from the Merriam-Webster dictionary, is "The capability of a machine to imitate intelligent human behaviour." The term "machine" is important, because AI does not have to be restricted to computers.
A truly AI-enabled machine requires multiple technologies from a wide range of subjects including areas such as speech recognition and Natural Language Processing, computer vision, robotics, sensor technologies, and of course one of our other buzzwords, "Machine Learning." In many cases, machine learning is a tool used by these other technologies.
In its very earliest days, AI relied upon prescriptive expert systems to work out what actions to take, an "if this happens, then do that" approach. And while prescriptive expert systems still have a place in some sectors, their influence is much diminished, and that function has largely been replaced by machine learning. Most observers would agree that machine learning is the biggest single enabler for high-performance AI systems today.
A prime example of modern AI is autonomous vehicles. They rely heavily on many different technologies working in harmony, some of which rely heavily upon machine learning, and particularly those that allow the car to detect and understand its surroundings. The now common-place voice assistants such as Siri, Cortana, Alexa, etc. all employ a variety of technologies that allow them to "hear" a human voice, to understand which sounds correspond to which words and phrases, to infer meaning from the series of words it has heard, and to formulate an answer and respond accordingly — all systems that require multiple technologies including machine learning.
What is Machine Learning, Then?
So, machine learning is a field within computer science that has applications under the wider umbrella of AI. One of my preferred definitions is one quoted in Stanford University's excellent machine learning course: "Machine learning is the science of getting computers to act without being explicitly programmed." So rather than programming a system using an "if this, then that" approach, in the world of machine learning, the decisions that the system makes are derived from the data that has been presented to it. Some describe it as a "learn by example" approach, but there is more to it than that.
Machine learning is now so common in the world around us that there are countless applications where we may not even realise it plays a part. Automatic mail sorting and speed limit enforcement systems rely upon incredibly accurate implementations of what is known as "Optical Character Recognition" (OCR), i.e. identifying text in images — a technology that allows us to identify addresses on envelopes and parcels, or the license plates on a vehicle as it passes through a red light or travels too fast outside a school. OCR would not exist without machine learning (unfortunately speeding tickets still would).
The "did you mean" and "similar searches" functionality in search engines, as well as spam filters, facial recognition systems, and recommendation systems on e-commerce, video and music streaming services — the list is endless, and not all of the applications are of the headline-grabbing variety.
Supervised and Unsupervised
As we will cover in more detail in posts later in this series, machine learning contains many, many different fields, which brings us to two further additions to our collection of buzzwords — "Supervised Machine Learning" and "Unsupervised Machine Learning." Although the names are similar, the underlying algorithms and their applications are very different. Unsupervised techniques are generally simpler and try to find patterns within a set of given observations, patterns that you didn't know existed prior. Recommender systems rely heavily on these techniques.
In contrast, supervised learning is the "learn by example" approach. Supervised learning systems need to be given examples of what is "good" and what is "bad" — this email is spam, this email isn't. In the field of OCR, the system would be provided with multiple images of different letters and told which letter that image represents. As a system is provided with more and more examples, it "learns" how to distinguish between a spam email and one that isn't, it learns the different arrangements of pixels that can represent the same letters and numbers. The consequence being that when a new example is presented to the system, specifically an example it hasn't seen before, it can then correctly identify whether or not the email is spam, or the address that the letter needs to go to, or the licence plate of the speeding car.
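As a toy illustration of the "learn by example" approach, the sketch below trains a naive Bayes spam classifier with scikit-learn; the four emails are invented, and a real filter would train on many thousands.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled examples: the "supervision" in supervised learning.
emails = ["win a free prize now", "meeting moved to 3pm",
          "claim your free money", "quarterly report attached"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# An example the system has never seen before.
print(model.predict(["free prize money now"]))  # -> ['spam']
```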
Within the field of supervised learning there are numerous techniques, one of which is a technique called "neural networks." Neural networks are software systems that try to mimic, albeit very crudely, the way a human brain works. The concept of the neural network has been around for decades but it is only relatively recently that their true power has been realised. A neural network is made up of artificial neurons, with each neuron connected to other neurons. As different training examples are presented to the network (e.g. an image or an email) along with the expected output of the system (e.g. the letter in the image, or whether or not the email is spam), the network works out which neurons it needs to activate in order to achieve the desired output under different circumstances.
The network knows how to configure itself so the neurons that get activated when a spam email is presented to it will be different to a non-spam email, and the rest of the system can then make a decision on how to handle that email.
We now get to our final buzzword (for the time being at least) — "Deep Learning." Deep learning is a very specific and phenomenally exciting area within neural networks.
The easiest way to think of a deep network is as a larger and more complex network with more complex and sophisticated interactions between the individual nodes. The term "layers" is often referenced in the area of neural networks, and astonishing results can be achieved with networks that have only a single layer. Deep learning employs multiple "layers" with complex interactions within each layer and between layers. Consequently the patterns it can identify, and the problems it can be applied to, are more complex as well.
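A deep network of the kind described here can be sketched in a few lines with Keras; the layer sizes are arbitrary choices for illustration, assuming a digit-recognition task on flattened 28x28 images.

```python
from tensorflow import keras  # pip install tensorflow

# Several stacked layers: each learns progressively more abstract features.
model = keras.Sequential([
    keras.Input(shape=(784,)),                     # a flattened 28x28 image
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```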
Deep learning is at the leading edge of machine learning research, and some of the advances in it have resulted in technologies such as automatic translation, automatic caption generation for images, and automatic text generation (e.g. automatically generating text in the style of Shakespeare). And in the same way that machine learning is the main enabler of AI, deep learning, right now, is the main enabler of advances in machine learning.
Coming Up Next
In the next post, I will give an introduction to the different machine learning techniques, APIs and frameworks that are available today for IT Incident Management.
About the author
Rob Harper is Chief Scientist at Moogsoft. Previously, Rob was founder and CTO of Jabbit and has held development and architect positions at RiverSoft and Njini.
In 2018, billions of people were affected by data breaches and cyberattacks and not only did people lose money, but they also lost their security. We hear about these cyberattacks every day and it's easy to think, "Why should I protect myself if the stats are against me?"
Or are we against the stats? 52% of us use the same passwords for different online services. This means that half of us have decided we're happy to risk our financial security and personal identity, and that can be a costly mistake. It turns out, one of the easiest ways you can protect yourself from becoming just another statistic: use a password manager. At Five Nines, when a company comes to us for managed IT solutions, we consider how they can tighten up their data security. Let's talk about ways you can quickly improve your cyber-security habits so you avoid the risk and become less of a predictable target.
1. Choose a password that's not obvious.
According to the National Institute of Standards and Technology's updated guidelines as of 2019, your passwords should be user-friendly and memorable, but not easy enough for a stranger to guess. You can use longer phrases that are easier to remember than complicated passwords, such as “I support the NE Huskers.” You should also avoid overly simple passwords. Hackers take bad, commonly used passwords like "huskers1" and try them against lots of accounts to see which they can breach online. Since many people in Nebraska probably have a password like this, it's a good rule to avoid this style of password. A rough sense of the guidance is sketched below.
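The check below is a hedged, minimal sketch in the spirit of that guidance: favour length, block known-bad passwords, and skip arbitrary complexity rules. The banned-password list and length threshold are illustrative assumptions.

```python
import math

COMMON = {"password", "huskers1", "123456", "qwerty"}  # tiny illustrative list

def passphrase_ok(pw, min_length=15):
    """Length plus a known-bad-password check, per the spirit of NIST 800-63B."""
    return pw.lower() not in COMMON and len(pw) >= min_length

def entropy_bits(pw, alphabet_size=95):  # 95 printable ASCII characters
    """Upper bound on brute-force entropy; longer is stronger."""
    return len(pw) * math.log2(alphabet_size)

print(passphrase_ok("I support the NE Huskers"))  # True
print(f"{entropy_bits('huskers1'):.0f} bits")     # short means weak
```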
2. Use a password manager to track all of your passwords in one place.
At Five Nines, we recommend two password managers: LastPass and 1Password. LastPass has a free option and allows you to automatically save and fill passwords on Windows, macOS, Android, and iOS devices. It can automatically change passwords for you and even shows you how strong your passwords are on its platform. The Emergency Access feature also lets you pick one or more contacts who can access your passwords if anything were to happen to you.
1Password is a great paid option for families or small businesses who want to store their passwords, and its Watchtower feature lets you know if any of your passwords are known to be compromised. Bottom line: you can vary up your passwords more often when you have a place to put them.
3. Be aware of data breaches.
Stay aware of when breaches are reported and when they do, double-check that your information wasn't compromised. Right now, according to a new report by Risk Based Security, 2019 is on track to being the “worst year on record” for data breach activity. Besides checking places like the Identity Theft Resource Center (a California based non-profit that puts out information on the latest data breaches), you can also use free tools, like Credit Karma. Their identity monitoring service will alert you about data breaches and exposed passwords so you're in the loop about a potential threat. Your managed IT solutions provider should discuss potential data breaches with you.
As IT professionals, we know how cumbersome changing passwords can be, but these are the tools we use at Five Nines, and we hope by passing them along, your information stays safe and out of sight. We’re here to help as your managed IT solutions provider.
Food could become independent of sunlight through artificial photosynthesis technology, increasing the conversion efficiency of sunlight into food by up to 18 times
Scientists are working to create food that can be produced independent of sunlight by using artificial photosynthesis technology. In natural photosynthesis, only about 1% of the energy found in sunlight ends up in the plant.
Photosynthesis has evolved in plants for millions of years to turn water, carbon dioxide, and the energy from sunlight into plant biomass and the foods we eat.
Combined with solar panels to generate the electricity that powers the electrocatalysis, this hybrid organic-inorganic system could make the conversion of sunlight into food up to 18 times more efficient for some foods.
Growing food in difficult conditions imposed by climate change
The technology uses a two-step electrocatalytic process to convert carbon dioxide, electricity, and water into acetate, the form of the main component of vinegar. Food-producing organisms then consume acetate in the dark to grow.
“By increasing the efficiency of food production, less land is needed, lessening the impact agriculture has on the environment”
Electrolysers are devices that use electricity to convert raw materials like carbon dioxide into useful molecules and products. Integrating all the components of the system, the output of the electrolyser was used to support the growth of food-producing organisms.
The amount of acetate produced was increased while the amount of salt used was decreased, resulting in the highest levels of acetate ever produced in an electrolyser to date.
Experiments showed that a wide range of food-producing organisms can be grown in the dark directly on the acetate-rich electrolyser output, including green algae, yeast, and fungal mycelium that produce mushrooms.
Producing algae with this technology is approximately fourfold more energy-efficient than growing it photosynthetically, while yeast production was about 18-fold more energy-efficient than the typical method of cultivating it using sugar extracted from corn.
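The arithmetic behind such comparisons can be sketched as a chain of stage efficiencies. The numbers below are assumptions for illustration, not figures from the study; the reported gains of up to 18-fold imply higher stage efficiencies for some organisms.

```python
# Illustrative end-to-end comparison; all percentages are assumed.
solar_panel_eff  = 0.20   # sunlight -> electricity
electrolyser_eff = 0.50   # electricity -> acetate (chemical energy)
organism_eff     = 0.50   # acetate -> edible biomass

artificial = solar_panel_eff * electrolyser_eff * organism_eff
biological = 0.01         # ~1% of sunlight energy ends up in the plant

print(f"artificial route: {artificial:.1%} of sunlight ends up as food")
print(f"advantage: ~{artificial / biological:.0f}x over photosynthesis")
```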
A more efficient method of turning solar energy into food
Corresponding author Feng Jiao at the University of Delaware said: “Using a state-of-the-art two-step tandem CO2 electrolysis setup developed in our laboratory, we were able to achieve a high selectivity towards acetate that cannot be accessed through conventional CO2 electrolysis routes.”
Elizabeth Hann, a doctoral candidate in the Jinkerson Lab and co-lead author of the study, added: “We were able to grow food-producing organisms without any contributions from biological photosynthesis.
“Typically, these organisms are cultivated on sugars derived from plants or inputs derived from petroleum – which is a product of biological photosynthesis that took place millions of years ago.
“This technology is a more efficient method of turning solar energy into food, as compared to food production that relies on biological photosynthesis.”
The potential for employing this technology to grow crop plants
Cowpea, tomato, tobacco, rice, canola, and green pea were all able to use carbon from acetate when cultivated in the dark – creating the potential for more foods to be generated artificially, saving more land and time.
By liberating agriculture from complete dependence on the sun, artificial photosynthesis opens the door to countless possibilities for growing food under the increasingly difficult conditions imposed by anthropogenic climate change.
Drought, floods, and reduced land availability would be less of a threat to global food security if crops for humans and animals grew in less resource-intensive, controlled environments. Crops could also be grown in cities and other areas currently unsuitable for agriculture, and even provide food for future space explorers.
Jinkerson said: “Using artificial photosynthesis approaches to produce food could be a paradigm shift for how we feed people. By increasing the efficiency of food production, less land is needed, lessening the impact agriculture has on the environment.
“And for agriculture in non-traditional environments, like outer space, the increased energy efficiency could help feed more crew members with less inputs.”
The word “football” means one thing in the US, another thing in Australia and something very different in most of the rest of the world. The same is true with the word “privacy.”
In the U.S., our private personal data is the property of the people who hold it. In most other parts of the world, personal data is the property of the individual. While it is true that the data “owner” has responsibility for its safe keeping, it is also true that individuals have very little control—which means that viewing personal data as the property of the individual isn’t enough to keep it safe.
Laws are evolving to protect the sanctity of an individual’s information, but an ongoing debate questions their effectiveness. One thing, however, is beyond debate: doing business across borders and across laws is a daunting problem.
When dealing with the complex world of compliance, the challenge is to develop a clear understanding of the differences and the similarities in the evolving myriad of privacy policies. Dealing with one law and one locale at a time creates an impossible task. To overcome that, you need to find the commonalities, the underlying principles that knit seemingly disparate laws together.
Perform a Google search on “global privacy” and you’ll get over a billion returns. This is an apropos illustration of the complexity of the global compliance landscape. In the U.S. alone, there are approximately 13 federal bills in the legislative process and dozens of state regulations already in effect to safeguard nonpublic, private information (NPI).
In many cases, these laws and proposals present requirements that are mutually exclusive. Many of the pending federal laws are designed to supersede the labyrinth of state laws. This motivation comes from a desire to normalize data protection in order to simplify doing business across state borders. But while the goal is laudable, the issues are extremely complicated.
Take the European Privacy Directive (EPD). It provides guidelines for safeguarding the personal information of any “identified or identifiable natural person” by any entity with permission to collect or process that information. This includes any information by which someone could be identified, directly or indirectly, by reference to an identification number or to one or more factors specific to his/her physical, physiological, mental, economic, cultural, or social identity.
Compliance specifically requires insight into and control over the use and disclosure of personal information. But each country that falls under the EPD is responsible for creating their own specific laws implementing the EPD, all of which have different requirements.
Similar to state laws in the U.S., many of these regulations clash with each other.
An important point to understand is that legal requirements, independent of their country of origin or industry of application, are technology agnostic. It is our responsibility as fiduciaries of sensitive data to show that we have control. This paints IT personnel into a tricky corner when it comes to acting on compliance mandates handed down from management.
This means that organizations dealing with diverse requirements encompassing broad issues, ranging from keeping doors locked to information technology management, must go back to the roots of compliance.
EPD, PCI, SOX and HIPAA were all designed to ensure that companies could be trusted. At the root of all of these requirements is a simple request: be accountable for what happens to the information you have in your trust.
Marv Goldschmitt is VP of Business Development for Tizor Systems, a provider of enterprise-class, real-time data auditing and protection solutions for regulatory compliance, data security and business assurance.
Digital Rights Management (DRM) refers to measures taken to protect digital media copyrights. DRM tries to prevent unauthorized redistribution of digital media and places restrictions on the ways consumers can copy content they’ve purchased. DRM products were developed in response to the rapid increase in online piracy of commercial material due to the widespread use of peer-to-peer file sharing programs (e.g., The Pirate Bay). DRM embeds codes that prevent duplicating digital content and specifies a time period in which content can be accessed. It also limits the number of devices the media can be installed on.
Although digital content is protected by copyright laws, policing the Internet and catching law-breakers is very difficult. DRM technology focuses on making it impossible to steal content in the first place, a more efficient approach to the problem than the strategies aimed at arresting online poachers after the fact.
News broke last April, at the height of the pandemic fears, that Google, Apple, and MIT were working together to build a smartphone app that would conduct contact tracing much faster and more efficiently than the old way of hiring humans to do the same thing.
Contact tracing is the laborious process by which human investigators backtrack the recent movements of a person who has tested positive for COVID-19, in order to locate all the people who might have come in contact with them.
Obviously, when performed by humans, this is a time-consuming and imperfect way to try and contain the virus, but it’s all we’ve had - until now. There are now a few questions before us. Will enough people use the new Google/Apple/MIT (GAM) app to make it worthwhile? After all, it’s voluntary (so far). There’s no doubt that, in a perfect world, the process would be faster and more effective than human fumbling but if, say, only 10% of diagnosed positive cases use it, what’s the point?
Some people are also interested in how much of our data the new app will collect, store, and use, and in what manner. Is this going to be a trampling of privacy or will the brain trust be able to gather useful data while maintaining boundaries? An even more basic question is, which do we as a society value more -- health or privacy?
These are tricky questions without easy answers.
How the Contact Tracing App Works
The GAM app is a cross-platform effort that will be available to both iPhone and Android users. The plan is for the developers to build the API (application programming interface) and then let individual health agencies incorporate the contact tracing technology into their own apps.
The contact tracing technology actually resides in the phone’s operating system, so a user first needs to download and install the app, then update the OS to get started. Once you have opted in, the app goes to work by sending out a ping powered by a random Bluetooth identifier that is allegedly not able to be linked to the phone owner’s actual identity.
No matter what happens to the data in the future, whether stolen by a hacker or included as part of a standard backup process at some point down the road, there should be no way to connect an individual’s identity to any other part of the data. More on that later.
Other phones with the app loaded do the same thing. Any time one phone gets within a set distance of another phone, the encounter is logged as a contact. If you receive a positive coronavirus test result, you enter that into the app and automated messages go out to anyone you have been in contact with.
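To make the rolling-identifier idea concrete, here is a minimal Python sketch. This is not the actual Google/Apple Exposure Notification code; the names, the 16-byte identifier size, and the 15-minute rotation interval are illustrative assumptions.

import secrets
import time

ROTATION_SECONDS = 15 * 60  # assumed rotation interval for this sketch

def new_rolling_id() -> bytes:
    # A random 16-byte value; nothing in it links back to the owner.
    return secrets.token_bytes(16)

class ContactLog:
    def __init__(self):
        self.current_id = new_rolling_id()
        self.last_rotation = time.time()
        self.heard = []  # (timestamp, identifier) pairs seen nearby

    def maybe_rotate(self):
        # Periodically replace the broadcast identifier.
        if time.time() - self.last_rotation >= ROTATION_SECONDS:
            self.current_id = new_rolling_id()
            self.last_rotation = time.time()

    def record_contact(self, other_id: bytes):
        # Log an identifier heard over Bluetooth, with a timestamp.
        self.heard.append((time.time(), other_id))

On a positive test, the phone publishes its own recent identifiers, and every other phone compares that list against its local log of identifiers it has heard.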
It’s a simple, effective process that theoretically should work like a charm. The problem, as the government of Singapore found out in the midst of the rising pandemic, is that only about 12% of people actually used the app. Such a low compliance rate makes it almost worthless. Why such little interest in the concept of automatic contact tracing? For a great many, it probably just seemed like a hassle but, for more than a few, there are privacy concerns.
Senators Query Apple and Google About Privacy Concerns
Not long after the GAM project was announced, Google and Apple received a letter from a group of U.S. senators expressing reservations about how the data collected through the contact tracing app would be identified, collected, stored (there is always the risk of losing it to phishing attacks), and perhaps ultimately used for marketing purposes once COVID-19 is far in our rearview mirror.
The major questions:
- Will the data collected fall under the jurisdiction of HIPAA, the country’s major health data privacy law?
- Will the data collected be regulated by other major data privacy regulations like the GDPR and CCPA?
- Will any data collected be personally identifiable to specific individuals?
- Will the data collected ever be monetized?
Understandable questions to be sure, though Apple feels they have all been addressed through statements on the COVID-19 tool landing page, which asserts that users’ answers to screening questions are not collected and no personal data is retained that would allow any future versions of the company to connect data to a person. Also, no sign-in or password creation of any kind is required to use the tool.
At least on the surface, it appears the Google/Apple collaboration has no designs on collecting a massive dataset and then turning around and using it for marketing purposes.
MIT contributes to the app’s privacy: the MIT designers created the app to collect values that are only stored on a list as random numbers and distances between them - no identifiable information related to a particular phone, user, email, or name. However, that protection only applies to the product before it is installed in a particular app, such as one from the CDC.
Remember that any healthcare organization that wants to use the GAM technology can do so free of charge, but they have to modify it to work with the particular operating system code before distribution. At this point, any guarantee of privacy is out of the hands of the GAM trio of companies. Account creation and password management could be introduced into the process at any time, and with them the original privacy guarantees would no longer hold.
The Problem with Privacy
As security experts have been pointing out for years, it is difficult to truly anonymize or even protect data from seemingly everyday threats, regardless of a company’s good intentions. The question, as already alluded to, is whether mass public health threats like the current pandemic should take precedence over privacy concerns.
To date, most controlling regulations in this area arise in the context of a commercial environment, where companies collect data in order to target their marketing efforts more precisely. Should standards be relaxed when people are dying from a still poorly controlled disease? That is the crux of the matter, and a slippery slope if ever there was one.
The issue of relaxing privacy regulations during the pandemic would seem to be a natural task for governments but, other than the GDPR and CCPA, politicians and bureaucrats around the world have shown reluctance to get involved, perhaps out of fear that the genie is already out of the bottle and will refuse to go back in.
Regardless, security concerns such as those raised by the introduction of contact tracing apps into society aren’t going to disappear magically in a fairy-tale ending. These are questions that will require a resolution eventually. Not to decide is still to make a choice. For now, as the Romans used to say, “Caveat emptor”.
|
<urn:uuid:06688ecb-9665-4aa2-875f-5f0cc3c2ed94>
|
CC-MAIN-2022-40
|
https://www.msp360.com/resources/blog/contact-tracing-apps-covid-19/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00237.warc.gz
|
en
| 0.945796 | 1,385 | 2.875 | 3 |
How can we safely agree on a shared secret when we are in an insecure environment?
If you don't have a secure channel, then how can you securely agree on a shared secret key for symmetric cryptography? Diffie-Hellman key exchange saves the day! It's really key agreement or key negotiation rather than key exchange. This is a case where a failed attempt to do one thing led to an enormously useful development in another area. Diffie and Hellman were trying to develop a practical system for asymmetric cryptography. Rivest, Shamir, and Adleman developed their RSA asymmetric cipher around that same time. Diffie and Hellman's accomplishment has been extremely useful for symmetric cryptography. The ElGamal asymmetric cipher designed the following decade finally solved the original problem that Diffie and Hellman had been working on.
We have two players, Alice and Bob. They agree on a base, b, and a modulus, m. They can agree on the base and modulus in public. In fact, since there are better and worse forms for those values, they probably should do this in public, or simply choose a base and modulus in common use. For example, the modulus must be a prime number, and it is taken into account when choosing the base; see a detailed description for more on these choices. The modulus m should be a large prime, at least 300 digits, but b can be (and usually is) a small number, 2 and 5 being the usual choices.

Alice and Bob each pick a secret number, Sa and Sb. Either of them can decide it's time to switch to a new key for their communication channel, making this a session key system. For security, these secret values should also be large — at least 100 digits.
Each then calculates a public value from their secret value, by calculating:

Public = b^Secret modulo m

Alice says: Pa = (b^Sa) modulo m
Bob says: Pb = (b^Sb) modulo m
Now, because modular exponentiation is a one-way function — easy to calculate in one direction, but impractical or impossible to calculate in reverse, the reverse being the discrete logarithm problem — they can safely publish the base, modulus, and their public values and be confident that even highly motivated mathematicians could not discover their secret values. This includes Alice and Bob themselves — they cannot discover each other's secret values.
However, they can calculate a shared secret using the base, the modulus, the other party's public value, and their secret value. X.509v3 digital certificates let them make certain that they really have the public key of the other player, and not some "man in the middle" pretending to be the other end.
Alice calculates the shared secret this way:

K = (Pb)^Sa modulo m
K = b^(Sb*Sa) modulo m

Bob calculates the shared secret this way:

K = (Pa)^Sb modulo m
K = b^(Sa*Sb) modulo m

Since b^(Sa*Sb) = b^(Sb*Sa), the two keys are equal. Since both of the two players must use their secret key in the calculation, it is a shared secret.
Note that each player must use the first form of the equation from each pair, as that form uses their secret and the other player's public value. The remaining lines are there for us to work through the equivalence. Note that any calculation of the shared secret requires some secret information, and thus an outsider cannot calculate it. Also, while an outsider might happen to guess the shared secret (although good choices of base and modulus make that extremely unlikely), the involvement of modulo arithmetic means that they would have no way of verifying that they had guessed correctly.
Security relies on:
Appropriate choice of base and modulus. The modulus should be a prime number with at least 300 digits. Surprisingly, the base can be small — values of 2 and 5 are commonly used.
Absolute secrecy of the secret values. They should also be appropriately large, at least 100 digits.
Identity of the other player. Alice needs to be absolutely certain that it's really Bob at the other end. Don't share secrets with strangers!
For a trivial example of a Diffie-Hellman exchange, not secure because of the small modulus and secrets:
b = 5
m = 31
Sbob = 14
Salice = 28
Pbob = 5^14 modulo 31
Pbob = 6103515625 modulo 31
Pbob = 25
Palice = 5^28 modulo 31
Palice = 37252902984619140625 modulo 31
Palice = 5
According to Bob,
Kbob,alice = [ (Palice)^Sbob ] modulo m
Kbob,alice = [ 5^14 ] modulo 31
Kbob,alice = 6103515625 modulo 31
Kbob,alice = 25
According to Alice,
Kalice,bob = [ (Pbob)^Salice ] modulo m
Kalice,bob = [ 25^28 ] modulo 31
Kalice,bob = 1387778780781445675529539585113525390625 modulo 31
Kalice,bob = 25
Their shared secret key is 25.
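For readers who want to check the arithmetic, the whole toy exchange fits in a few lines of Python. The built-in three-argument pow() computes modular exponentiation efficiently, even for the 300-digit moduli used in practice.

b, m = 5, 31             # public base and modulus
s_bob, s_alice = 14, 28  # secret values

p_bob = pow(b, s_bob, m)      # Bob's public value: 25
p_alice = pow(b, s_alice, m)  # Alice's public value: 5

k_bob = pow(p_alice, s_bob, m)    # Bob combines Alice's public value
k_alice = pow(p_bob, s_alice, m)  # Alice combines Bob's public value

assert k_bob == k_alice == 25     # the shared secret key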
Ephemeral Session Keys and Perfect Forward Secrecy
Once you have the ability to safely agree on a shared secret, you should take advantage of that and make your session keys ephemeral. That is, only used once.
Used correctly, ephemeral session keys lead to Forward Secrecy, sometimes called Perfect Forward Secrecy or PFS.
Forward Secrecy means that compromise of long-term keys does not compromise past session keys. That sounds contradictory! The important thing is that the long-term keys are used for authentication, and then a one-use-only session key is generated for each session, message, or file, using a non-deterministic algorithm.
Government-imposed backdoors and any form of key escrow for the session keys will destroy the Forward Secrecy.
Google introduced HTTPS by default and began moving toward Forward Secrecy in 2011.
Twitter enabled Forward Secrecy in October 2013, as reported in the New York Times and The Guardian.
EFF described Perfect Forward Secrecy and other crucial security mechanisms.
The Heartbleed vulnerability of 2014 emphasized the importance of Forward Secrecy.
Microsoft enabled PFS for email in July 2014.
In 2014 Yahoo (yes, they were still around in 2014) enabled HTTPS encryption by default for email but left out PFS.
How Can Several People Securely Share a Secret Key?
The secure sharing of secrets or access is an old problem! Here is a classic example:
Perugia is today the capital city of the Italian region of Umbria. But the modern country of Italy is a fairly new thing, with the peninsula organized under one government only in 1870. From the time the Roman Empire fell apart starting in the late 300s until then, the peninsula of Italy was a collection of many Papal regions, colonies of other empires, and powerful city-states, including that of Perugia.
In the 1400s, the city of Chiusi had many visitors who came to see that city's prized relic — what purported to be the Virgin Mary's engagement ring. This was a jade disc about 6-7 cm in diameter and a little over 1 cm thick. Before complaining about the discomfort of wearing such a thing as a ring, or the lack of jade trading routes between China and Palestine in the first century BC, remember that this was in Renaissance Italy and dubious relics were common.
The leaders of Perugia decided that they should go to Chiusi in force and steal Mary's engagement ring so that Perugia could get all the benefits. Not just the income from pilgrimage tourism, but also the sacred bonus points for owning such a relic (and how that favor might be offset by their having stolen the ring through military force didn't seem to enter into the equation). So, they attacked Chiusi in 1473 and stole the ring.
Their immediate worry was that a bigger city, maybe Florence, would then attack Perugia and steal the ring. So they should get a local blacksmith to make a very sturdy box that would be attached into the stonework of their Cathedral of San Larenzo and locked with a strong lock.
But then who would hold the key to the lock? This was the era of the Borgias and conspiracy was common in Italy. No single person could be trusted to hold the key to the box.
So the blacksmith was directed to make a fairly large iron box with straps to be fastened around structural elements of the cathedral. The box had a very large hasp with fifteen holes, and fifteen locks were then used to lock the lid closed. Fifteen reasonably trustworthy citizens were selected, and each received the key to one of those locks.
So, a few times a year, at the pilgrimage seasons, these fifteen reasonably trustworthy citizens brought their keys to the cathedral, collectively unlocked all those locks, and the ring was displayed for the visitors.
If you are interested in more details about Perugia, Umbria, and travel in Italy, see my travel pages.
The leaders of Perugia had devised an M-of-N secret-sharing mechanism. No one person had access on their own. The access mechanism, a collection of keys, was distributed across a group of N (15) people, and some subset M (actually all 15 of them in this case) could combine their partial access to access the protected secret.
What if one of those supposedly upstanding Perugians had died of the plague? If the family couldn't find the key, they would have to get the blacksmith to cut into the box (and here we see that we have to trust the providers of the security technology). A better solution would have been to make it a true subset, M out of N where M < N. You can do that with combinations of chains and locks, but physical M-of-N control quickly becomes complicated.
Mathematics to the rescue! There are several ways to accomplish M-of-N secret key sharing, this is just one of the more commonly used methods:
Adi Shamir's method starts by choosing some prime p which is larger than the largest possible secret key. Remember, if a key is a string of k bits, it can be thought of as a number in the range 0 through 2^k - 1.
Then generate an arbitrary polynomial of degree M - 1. Let's be a little less paranoid (smaller N) and more careful (M < N in case of plague loss) and use a 5-of-7 scheme. So, N = 7 and M = 5. Here is our polynomial, where S is the secret number:
F(x) = (ax^4 + bx^3 + cx^2 + dx + S) mod p
The coefficients a, b, c, d are chosen randomly, and they must be kept secret.
Now, we said we wanted to share the secret among seven people, so we just need to generate seven "shadow" values:

k_i = F(x_i)

We could simply evaluate our polynomial for x = 1, 2, ..., 7 and give each result to one of our seven people. Our polynomial has five unknowns: a, b, c, d, S, and so you could solve for those with any five output values.
Here is an example:
Secret: S = 42
Prime: p = 101
Random coefficients: 4, 13, 9, 25
Polynomial: F(x) = (4x^4 + 13x^3 + 9x^2 + 25x + 42) mod 101
Generate seven shadow values:
F(1) = (4*1^4 + 13*1^3 + 9*1^2 + 25*1 + 42) mod 101 = 93
F(2) = (4*2^4 + 13*2^3 + 9*2^2 + 25*2 + 42) mod 101 = 94
F(3) = (4*3^4 + 13*3^3 + 9*3^2 + 25*3 + 42) mod 101 = 65
F(4) = (4*4^4 + 13*4^3 + 9*4^2 + 25*4 + 42) mod 101 = 21
F(5) = (4*5^4 + 13*5^3 + 9*5^2 + 25*5 + 42) mod 101 = 73
F(6) = (4*6^4 + 13*6^3 + 9*6^2 + 25*6 + 42) mod 101 = 24
F(7) = (4*7^4 + 13*7^3 + 9*7^2 + 25*7 + 42) mod 101 = 76
Distribute the shadow values to the seven individuals. In the future, any five of their shadow values can be used to create and solve a set of five linear equations in five unknowns.
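Here is a minimal Python sketch of the same 5-of-7 scheme. The helper names are ours, Python 3.8+ is assumed for the modular inverse via pow(den, -1, P), and a real implementation would draw coefficients from a cryptographically secure source and use a far larger prime.

import random

P = 101      # public prime modulus (use a much larger prime in practice)
M, N = 5, 7  # threshold and total number of shares

def make_shares(secret):
    # Constant term is the secret; remaining coefficients are random.
    coeffs = [secret] + [random.randrange(P) for _ in range(M - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, N + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0, with all arithmetic mod P.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(42)
assert recover(shares[:M]) == 42  # any 5 of the 7 shares suffice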
Next ❯ Cryptographic Hash Functions
|
<urn:uuid:178f0578-cc8c-4a0e-8dad-5ab5aa1c138a>
|
CC-MAIN-2022-40
|
https://cromwell-intl.com/cybersecurity/crypto/diffie-hellman.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00237.warc.gz
|
en
| 0.930067 | 2,814 | 3.453125 | 3 |
Natural Language Processing, 4 Key Techniques, New Tech
Natural Language Processing, or NLP for short, refers to the various technological processes that enable software programs and machines to decipher spoken and written language or text. This can be accomplished using a number of different methods and techniques. For example, sentiment analysis, the dissection of data into positive, negative, and neutral feedback, can help businesses and organizations gain a better understanding of how their customers view their respective products and services. To this point, some other common methods that can be used to implement NLP include topic modeling, named entity recognition, text summarization, and lemmatization and stemming.
Topic modeling
Topic modeling uses unsupervised machine learning algorithms to create statistical models that can be used to effectively tag and group together different clusters of data or information. As NLP software models will need thousands, if not millions, of words and phrases in order to function, topic modeling can be used to discover more abstract topics within a data set that may have been previously difficult to recognize. For example, all written documents will have overarching topics that are used to control the flow and direction of the narrative that is being conveyed. Through the use of topic modeling, software developers can gain a better understanding of these topics, and the manner in which these topics should be implemented into an NLP model.
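As a sketch of what this looks like in code, scikit-learn's LatentDirichletAllocation can fit a small topic model; the tiny corpus below is made up purely for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the team shipped the new banking app release",
    "regulators reviewed the bank's lending policies",
    "the app release fixed several login bugs",
]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words for each discovered topic.
words = counts.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    print(f"topic {t}:", [words[i] for i in weights.argsort()[-4:]])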
Named entity recognition
Named entity recognition or NER for short is an NLP technique that software engineers can use to classify datasets into named entities. To illustrate this point further, consider the sentence “Red Cross founder John Doe purchased a community center in New York City for $10 million”. When using NER, a software developer would break this sentence down into more specific categories or named entities. As such, the Red Cross would be categorized as an organization, John Doe would be categorized as a person, New York City would be categorized as a location, and $10 million would denote monetary value. Through these named entities, a software engineer looking to create a customer service chatbot could gain more information about the ways in which customers view the products or services offered by a particular company.
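The example sentence above can be run through an off-the-shelf NER model. The sketch below uses spaCy and assumes its small English model has been installed (python -m spacy download en_core_web_sm).

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Red Cross founder John Doe purchased a community center "
          "in New York City for $10 million")

for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Expected labels along the lines of ORG, PERSON, GPE, and MONEY.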
Text summarization
Text summarization refers to the process of breaking down scientific, medical, or technical jargon into more basic terms, with the end goal of making the sentences, words, and phrases easier for an NLP model to understand. For example, consider the common terms due diligence and AWOL. The term due diligence refers to the work and research that should be done before making a serious decision, whether the decision in question is in relation to business or some other related pursuit. Alternatively, the acronym AWOL, which stands for absence without leave, is military jargon used to describe an enlisted individual whose whereabouts are currently unknown. While many people would easily recognize these forms of jargon when using them in casual conversation, computers do not have this knowledge, and text summarization can be used to convey ideas and expressions in a format that is easier to grasp.
Lemmatization and stemming
A final technique that software engineers can use to create NLP algorithms and models is lemmatization and stemming. Stemming breaks words down to their stems, while lemmatization also accounts for the context in which the word is being used. The Porter Stemming Algorithm, created by English computer scientist Martin Porter in 1980, is one of the most commonly used algorithms for stemming words within the English language. To put the algorithm into more layman’s terms, Porter’s algorithm consists of five phases of word reductions that are applied sequentially. Using these five phases, software engineers can provide their models with words and phrases that will be more easily understood by the machine learning models that will be used to create a particular NLP software program.
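A short sketch with NLTK shows the difference in practice; it assumes the WordNet data has been downloaded once via nltk.download("wordnet").

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["running", "studies", "better"]:
    print(word, stemmer.stem(word), lemmatizer.lemmatize(word, pos="v"))

# The stemmer simply chops suffixes ("studies" -> "studi"), while the
# lemmatizer maps words to dictionary forms ("studies" -> "study")
# when told the part of speech.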
From topic modeling to lemmatization and stemming, software developers have a number of tools and methods that they can use to break words and phrases into their most simple forms. As human language has a level of abstraction, complexity, and nuance that computers and machines will inherently struggle to comprehend, ensuring that the words and phrases that are used to create a language model are as concise and straightforward as possible is pivotal in creating cutting-edge and innovative software programs. Without these methods and techniques, many popular NLP software programs such as Siri, Cortana, and Alexa would struggle to engage with and respond to human language in a meaningful way.
|
<urn:uuid:15c5852d-14f1-4c7e-a1ee-b07b1969dc5b>
|
CC-MAIN-2022-40
|
https://caseguard.com/articles/natural-language-processing-4-key-techniques-new-tech/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00237.warc.gz
|
en
| 0.939227 | 892 | 3.5625 | 4 |
What do CIOs need to know about the changing profile of data center energy usage? And what role should IT leadership play in holding the line on expenses without reducing operating efficiencies?
Data center energy usage is dominated by servers and other infrastructure. But growing energy consumption by storage is also a concern. Estimates put total U.S. data center power consumption in 2020 at 73 billion kilowatt-hours (kWh).
According to Brad Johns, President of Brad Johns Consulting, data centers consume 1.8% of all electricity in the United States. Servers consume the most power, followed by infrastructure, then data storage – Hard Disk Drives (HDD) and Solid State Drives (SSD) – followed by networking.
And while infrastructure’s power usage has leveled off over the past five years, storage energy consumption has edged upward.
Big Tech responds.
Those figures have put data center energy usage under the microscope from Greenpeace and other environmental bodies. Mega-enterprises with deep pockets are responding aggressively with various plans to reduce carbon emissions generated by their products, production, and supply chains.
Microsoft announced a “Transform to Net Zero” initiative. Amazon invested $2 billion for sustainable technology development to achieve net-zero carbon by 2040. Verizon added a billion to the pot as part of its plans to be carbon neutral by 2035. Walmart, Delta Airlines, and others have followed suit with similar announcements and investments.
Data center energy efficiency is on the rise.
Meanwhile, data center efficiency has soared over the past two decades. Data center power consumption grew by 90% between 2000 and 2005. But the rate of growth slowed to only 24% in the following five years, and down to 5% for the 2010-2020 decade. That’s quite an accomplishment over the last 15 years when you consider the rapid expansion of compute power, multi-core processors, cloud computing, analytics, the Internet of Things (IoT), streaming video, virtualization, social media, big data, and telecom bandwidth during this period.
Advances in Power Usage Effectiveness (PUE) have certainly helped. PUE is a ratio of total data center power consumption divided by the power usage by servers, storage, and other equipment. In the past, PUE was typically 2 or higher due to heavy requirements for cooling (PUE of 2 means that for every watt of energy used on equipment, another watt is consumed by cooling and other components).
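The calculation itself is simple; the numbers below are illustrative only.

it_load_kw = 1000          # servers, storage, networking
total_facility_kw = 1500   # IT load plus cooling, lighting, losses

pue = total_facility_kw / it_load_kw
print(pue)  # 1.5: every watt of IT work costs another half-watt of overhead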
The good news is that more and more data centers are pushing their PUE scores below 2.
What can CIOs do?
Data center managers, CIOs, IT managers can bring energy consumption under better control by:
Implementing higher efficiency systems – Generally speaking, the more modern the equipment, the higher its efficiency. Just like refrigerators of today are far more energy efficient than those of a decade ago, IT equipment tends to follow a similar trajectory. But check the specs before you buy.
Improving power management – Sophisticated power management systems are now available to isolate areas of wastage and offer ways to consolidate power and cooling loads to improve efficiency.
Streamlining data center design – I visited a data center in Denmark a few years ago that had large concrete beams running the length of the facility. The cooling systems were pointed perpendicular to those beams, thus blocking the flow of colder air from cooling the hot spots above the equipment racks and raising the overall power usage in the facility. The lesson is clear: Those with older facilities should review existing designs and equipment placement and adjust for better efficiency. New data centers should closely follow current best practices, particularly those employed by the likes of Google, Amazon, Switch and other major data centers.
Moving inactive data to tape – This will lower both data center energy usage and costs. Tape media can be stored off-line, further reducing energy usage. With storage consuming 19% of overall data center power, and upwards of 60% of that data rarely accessed, according to the Tape Storage Council, keeping such data on spinning disk is a serious area of overall energy wastage in IT.
With the growth of edge computing and data storage, individual businesses may ultimately see their data center energy expenses drop. But CIOs have a key role to play in reducing energy demand by cultivating continuous innovation and best practices from their data centers, whether they’re across town or on the other side of the world.
|
<urn:uuid:967d5c2e-4930-45d6-b435-bc36263290e8>
|
CC-MAIN-2022-40
|
https://www.cioinsight.com/news-trends/gain-control-of-enterprises-great-energy-hog-the-data-center/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00237.warc.gz
|
en
| 0.933064 | 904 | 2.5625 | 3 |
The COVID-19 pandemic has forced companies to rapidly implement remote access solutions for their staff to continue operations. Work tasks are now conducted from employees’ homes via personal computers. This has increased IT security risks for businesses, both large and small.
In October 2019, CNBC reported that 43% of cyberattacks targeted small businesses. The monetary cost of these attacks averaged $200,000. That cost doesn’t include other adverse effects. Ransomware costs are increasing year over year and are showing no signs of stopping.
A Cybercrime Magazine article, published in January 2019, addressed cyberattacks on businesses. They reported that 60% of small businesses closed their doors within six months of an attack.
If your pulse is increasing, it’s time to learn how to bolster your computer security systems. Continue reading to learn about the top warning signs and actions you can take.
IT Security Risks
Computer security risks arise from many factors. Anytime a remote device connects to your business’ network, you’re at risk. Malware and spam campaigns have dramatically increased since the COVID-19 pandemic began.
Cybercriminals are preying on the fears and unstable circumstances worldwide. Let’s explore specific roads used by hackers to access your systems.
There are two similar avenues for accessing a business’ network from a remote location. Employees can use Microsoft’s Remote Desktop Protocol (RDP) or a virtual private network (VPN). Each one allows a different level of access to the network.
RDP is a proprietary protocol that gives users a remote link to a computer system. The user is able to take over the computer remotely. Users can access all licensed software installed on that machine. It’s as if they are physically working in front of the device. This approach is one of the most straightforward solutions and yet the least secure. In fact, use of RDP represents one of the most common security risks.
Most experienced companies don’t allow direct access to their network. They use firewalls and other restrictions to enhance security. Yet, even these organizations can fall victim to shadow IT operations. These subversive groups strive to find a foothold on unmanaged cloud platforms or via third-party services.
A VPN connects to a network via an encrypted connection through the internet. This encryption increases the security of sensitive information. As a result, VPNs are more effective at preventing unauthorized eavesdropping on your network traffic.
Individuals connect to the VPN via a local area network (LAN). LANs join devices such as printers and computers in one physical location. LANs can range in size from a small home network to thousands of users. The LAN gives users the ability to print, download, or access files on a virtual desktop.
In the past, VPN traffic has traveled through full or split tunnel solutions. Today, there are more remote employees requiring access. Thus, many companies are moving toward split tunnel solutions to decrease bandwidth demands on the corporate network.
VPNs let users access network shares, applications, and internal resources via virtual environments. This remote access solution provides a safer, more secure method than allowing employees to simply “take over” onsite machines via RDP.
Most people have heard of encryption, but many do not understand what it is or its importance. Have you heard of IoT attacks? IoT stands for “internet-of-things”. This translates to all devices that connect to the internet. That includes laptops, desktops, and all other types of mobile devices.
In fact, 98% of IoT device traffic is unencrypted. Researchers estimate about 57% of IoT devices have a medium to high-severity risk of an attack. They also found that attackers often gain access via password-related security holes. Unencrypted devices increase the risk of personal and confidential business data breaches. One unpatched laptop can allow access to other IoT devices and their data. With corporations relying on remote workers, IoT attacks pose increased threats to networks. Likewise, vulnerable IoT devices can infect personal devices.
The important takeaway is that all systems and interfaces should have reliable encryption protocols. Encrypt all emails so an unintended person can’t eavesdrop. Also, encrypt stored data (“at rest”) in case the device is stolen.
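As a sketch of at-rest encryption, the Python "cryptography" package's Fernet recipe provides symmetric, authenticated encryption in a few lines; key storage and rotation are left out here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store safely, never alongside the data
f = Fernet(key)

token = f.encrypt(b"customer ledger, Q3 backup")  # safe to write to disk
assert f.decrypt(token) == b"customer ledger, Q3 backup"  # needs the key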
Using PCs with No Antivirus Software Installed
All businesses should make sure remote users install antivirus software. This helps protect devices from viruses, spyware, malware, rootkits, Trojans, phishing, and ransomware. This security utility detects and removes viruses from the computer system. This provides a preventative approach to cybersecurity.
Using Unpatched Software
All computer software programs need updates from time to time. The common term for these updates is a “patch”. These patches also strengthen the software’s security against viruses and other breaches.
When employees run software without the most current patches, their system and your business network are at risk. Threats can occur due to vulnerabilities in the operating system’s programming. “Zero-day” vulnerabilities describe a security flaw that is discovered and exploited before the software vendor can fix it. When software companies find these problems, they issue a patch. Yet, criminals can invade before the patch is in place.
The SANS Institute, a leading security organization, published “The Top Cyber Security Risks” report. They defined the primary initial vector for infection as any computer with internet access. Using unpatched software represented a “priority one” risk.
Criminals often target emails via phishing and other internet connections on the worker’s side. They exploit weaknesses in programs such as Adobe PDF Reader, QuickTime, Adobe Flash, and Microsoft Office.
Users often open or download documents, music, or videos from trusted sites without a second thought. Some attacks don’t even need the user to open a document. Just going to an infected website can compromise the worker’s software. The infected computer can then spread the infection to other internal computers or servers. The attacker’s goal is to steal data and install “back doors,” so they can steal more information later.
So now we must worry about COVID-19 infections and computer system infections.
Ransomware attacks the company’s weakest link: the user workstation. This type of virus infects a personal computer or an entire business network. It causes systems to shut down and “locks” access to critical business data. The criminal then demands a monetary ransom fee to unlock the system and restore the affected data.
One example of an extremely destructive ransomware virus is “Wannacry”. It has attacked hundreds of thousands of computer systems. Ransomware can attack both large and small businesses indiscriminately.
When employees work onsite, the IT department controls the Wi-Fi network security. With remote workers, companies’ risks are higher due to home systems’ weaker protocols. Users often use insecure, outdated security protocols like WEP instead of WPA-2, giving hackers easy access to the home networks and remote systems.
Hackers love cracking passwords. Remote workers who use simple, insecure passwords across several platforms create significant risks.
In a short time, hackers can gain access to multiple accounts throughout your system.
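Some back-of-the-envelope arithmetic shows why simple passwords fall quickly: the brute-force search space is the character set size raised to the password length, so added length helps far more than added symbol types.

def combinations(charset_size, length):
    return charset_size ** length

print(f"{combinations(26, 8):.2e}")   # 8 lowercase letters:  ~2.09e11
print(f"{combinations(95, 8):.2e}")   # 8 printable ASCII:    ~6.63e15
print(f"{combinations(26, 16):.2e}")  # 16 lowercase letters: ~4.36e22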
Remote Working Strategies for Protecting Your Information
Every company today must develop IT security protocols. This protects both internal networks and remote workers’ devices. In April 2020, four cybersecurity groups discussed the dangers of remote work. This meeting included Global Cyber Center of NY, Cyber Ladies, the Israeli Economic Mission to North America, and Perimeter 81.
They summarized recommendations for developing more robust organizational network security plans. An essential part of every security protocol is a strong plan for backing up all data. Further actions discussed are as follows.
Move to a “Cloud-Agnostic” Platform
This group recommends moving from the VPN-based platform to a cloud-agnostic platform. In the strictest sense, this refers to tools, services, and applications that can move to and from onsite infrastructures. It can also interface with all public cloud platforms without specific operating systems or other dependency requirements.
Cloud-agnostic platforms are used on two or more cloud platforms. Examples of multi-cloud shared environments include AWS, Azure, and Google. Another approach is a hybrid-cloud environment, such as Azure Stack. This shares one operating system between an onsite private cloud and the provider’s public cloud.
True cloud-agnostic tools, services, or applications provide reliable and standard performance regardless of the platform used. Companies receive the most cost-efficient service without sacrificing performance.
Companies must have consistent levels of connectivity and automation, no matter where the data resides. You need interoperability between virtual machines, bare-metal servers, and containers. This also applies to IoT edge devices and private or public cloud-based services.
Lastly, you need a scalable network security solution. The network system can add nodes as your system needs to increase. This prevents your company from having to buy larger systems in the future.
Employee education is key to ensuring secure remote work practices. Teach them to identify possible signs their device may have a virus.
- The computer is running very slowly
- Files become damaged or deleted
- The hard disk reformats
- The device frequently crashes
- The user can’t find data
- The user is unable to complete tasks on the computer or the internet
- New programs appear the user didn’t install
- Unusual pop-up ads start appearing on the screen
- The user loses control of the mouse or keyboard
Employees should know about cybersecurity risks. They must watch for suspicious emails, malware, etc. Training should include the steps to take if they suspect a cyber-attack.
The Zero Trust Model
A new strategy is now being deployed to increase remote access security. The “Zero Trust Model” uses an identity provider to gain access to applications. It also makes a decision about the authorization rights to access the application.
This authorization uses determinants from both the user and the device. For example, a certificate is stored in the Trusted Platform Module (TPM), which manages identity checks. It evaluates the origination of the login and what the user’s role is.
Only Work on Work Computers
Establish a policy that all work must take place on a designated “work computer”. This ensures that safety measures are in place each time the employee accesses company networks. Remote workers can forget to charge their work laptop. They may be on their personal device when they receive a work call, and it’s tempting to simply address the issue from that personal device. This can result in security exposure via unpatched, unencrypted devices without antivirus protection and cloud backup. Workers often forget about all the security measures put in place by the IT department. It’s invisible to them.
Block Lines of Sight
Remote employees need to keep information on their screen blocked from wandering eyes. This is less of an issue with the lockdowns due to COVID-19. Most employees are now working at home. This concern is more significant if the employee works in public places like coffee shops. Yet, depending on the security level of your work, specific policies may need to be enforced.
Lock It Up
Keep all work-related devices locked when not in use to protect confidential information. Some government contracts, for example, stipulate that equipment always remains behind locked doors. Also, never leave your work device unattended in a car. Thieves love this.
No Random Thumb Drives
A favorite hacker tactic is to drop several large capacity thumb drives in a place they wish to attack. When a user picks one up and opens the files, JACKPOT, the hacker is granted access. Likewise, don’t use a thumb drive that’s been plugged into an unsecured system. If you must charge a device via a USB port at an unprotected public place, use a USB data blocker. This prevents data transmission and protects against malware.
Benefits of IT Consultants
This is a LOT of information. As a business owner, you may not have the time, expertise, or resources to meet all these standards. This is where our IT Consultants can benefit your company.
IT consulting services provide tiered packages to meet your company’s needs. Using our IT services in New Jersey prevents the need to hire IT employees with a wide range of expertise.
These IT companies have experts ready to work on problems at any level. This includes knowledge of compliance standards, firewalls, networks, servers, security, and backup/recovery.
Does Your Company Need to Up Its Cybersecurity Game?
Do you have concerns about IT security risks in general or due to increased remote workers with the COVID pandemic? If so, Ascendant Technologies, Inc. is ready to solve your security issues. We’re a Managed Service Provider who offers B2B services that function as an outsourced IT department.
Ascendant can provide IT solutions for businesses of all sizes. You have access to a tiered remote support desk for your employees. Our experts can manage your servers, workstations, and network infrastructure.
Our typical projects include mail migration to Office 365, new server/workstations, new firewalls, and system virtualization to Azure. Contact us today and request a quote for IT services to protect your business.
|
<urn:uuid:36caa04a-f972-4881-b37c-22f75e113ce2>
|
CC-MAIN-2022-40
|
https://ascendantusa.com/2020/06/15/diving-into-the-it-security-risks-of-working-remotely-during-quarantine/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00237.warc.gz
|
en
| 0.928803 | 2,821 | 2.59375 | 3 |
Fiber optic cables are seemingly everywhere, and responsible for the Internet connection we rely on every day. But how much do we know about these cables? Unless you consistently work with them up close, you may not know what makes up these cables and how important they are to everyday life. Even if you consider yourself a fiber optic cable expert, you may not know the fascinating history of the technology. Whether you are familiar with fiber optic cables or have only heard the name, read on to learn a few fascinating facts and figures about these increasingly important cables.
Fiber Optic Technology is Much Older Than You Think
Although fiber optic technology seems brand new, the original concepts used to create the first fiber optic cables were developed nearly 200 years ago! In the 1840s, physicists Jean-Daniel Colladon and Jacques Babinet came up with the concept of “Light Guiding” and helped bring it to life. They used tubes of flowing water to carry sunlight to a dark area. It took nearly another century until physicist Narinder Singh Kapany invented the initial fiber optic cables.
Fiber Optic Cables Contain Glass
Yes, these small and powerful cables have glass in them! It is this glass that helps move data through the cables at lightning speed and transmit data without signal loss.
Fiber Optic Cables Are Durable
You may think that because fiber optic cables have glass in them, they must be fragile. In fact, each individual glass fiber is protected by several layers of coating, and the coated fibers are combined into a common multiple-fiber cable with a central strength member and an outer jacket that give further protection. These cables are actually very durable and can be used in a variety of environments without becoming damaged. Cable installers need to be careful they do not stretch the cables, but even if damaged, cables can often be easily repaired.
Fiber Optic Cables Are “Greener” Than Copper Cables
If you are interested in “going green,” fiber optic cables are the superior choice. It takes little energy to send light through the cables, especially when compared to the energy required to send electrical signals through copper cables.
Fiber Optic Cables Provide High-Speed Connections for Numerous Industries
Fiber Optic Cables Stretch Across Most of the U.S.
There are always new fiber optic cables being put into the ground across the United States. But so far, researchers have found that at least 100,000 miles of fiber optic cables are in the U.S., and the number will only continue to rise.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for 28 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at [email protected], or visit our contact page. Our offices are located in the Washington, DC metro area and Richmond, VA. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999.
|
<urn:uuid:5ef4b6ba-5997-48fc-925e-722b4e1819ad>
|
CC-MAIN-2022-40
|
https://www.fiberplusinc.com/helpful-information/fiber-optic-cables-facts/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00237.warc.gz
|
en
| 0.933869 | 803 | 3.03125 | 3 |
To understand what this article is about it’s important that we have an agreement on what we mean when we use the term “adaptive authentication”. It isn’t a difficult concept, but it’s best if we’re all on the same page, so to speak.
First, the basics: authentication is the ceremony which allows someone to present credentials which allow access to something. Typically and traditionally this is a username/password combination. But username/password is only one facet of one factor of authentication and we usually speak of three possible factors, identified as:
- Something you know (e.g., a password)
- Something you have (e.g., a token such as a SecureID fob)
- Something you are (e.g., a biometric such as a fingerprint)
There are multiple facets to each of these, of course, such as the so-called “security questions” (mother’s maiden name, first pet’s name, city you were born in, etc.) which are part of the Something you know factor.
Beginning around 30 years ago, it was suggested that multi-factor authentication – using two of the three factors, or even all three – made for stronger security. Within the last ten years, on-line organizations (such as financial businesses) and even social networks (Google+, Facebook, etc.) have suggested users move to two-factor authentication.
While this is good practice, this multi-factor authentication is static. Every time you access the service you need to present the same two credentials in order to log in. It’s always the same. Once a hacker (usually through what’s called “phishing”) knows the two factors, your account is as open to them as if there was no security.
Within the past 5 years we at KuppingerCole have advocated moving to what we called “dynamic” authentication – authentication that could change “on the fly”. But because we advocated much more than a change in how the authentication credentials were established, we now call the technology “adaptive” authentication.
It’s called “adaptive” because it adapts to the circumstances at the time of the authentication ceremony and dynamically adjusts both the authentication factors as well as the facet(s) of the factors chosen. This is all done as part of the risk analysis of what we call the Adaptive Policy-based Access Management (APAM) system. It’s best to show an example of how this works.
Let’s say that the CFO of a company wishes to access the company’s financial data from his desktop PC in his office on a Monday afternoon. The default authentication is a username, password and hardware token. The CFO presents these, and is granted full access. Now let’s say another CFO of another company wishes to access that company’s financial data. But she’s not in the office, so she’s using a computer at an internet café on a Caribbean island where she’s vacationing. The access control system notes the “new” hardware’s footprint, it’s previously unknown IP address and the general location. Based on these (and other) context data from the transaction the access control system asks for additional factors and facets for authentication, perhaps password, token, security questions and more. Even so, once the CFO presents these facets and factors she is only given limited read access to the data.
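A toy sketch of that risk analysis is below; the signals, weights, and threshold are invented for illustration, and real APAM products weigh far more context than this.

def required_factors(ctx):
    score = 0
    if ctx.get("unknown_device"):   score += 2
    if ctx.get("new_ip_address"):   score += 1
    if ctx.get("unusual_location"): score += 2

    factors = ["password", "hardware_token"]  # the default ceremony
    if score >= 3:
        factors.append("security_questions")  # step up on high risk
    return factors

# CFO at the office desktop: default factors
print(required_factors({}))
# CFO at an internet cafe abroad: stepped-up ceremony
print(required_factors({"unknown_device": True,
                        "new_ip_address": True,
                        "unusual_location": True}))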
The authentication is dynamically changed and adapted to the circumstances. That’s what we’re discussing here.
|
<urn:uuid:d42735aa-b9e4-42c0-9890-a65a2b8bf7d2>
|
CC-MAIN-2022-40
|
https://www.kuppingercole.com/blog/kearns/adaptive-authentication-explained
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00237.warc.gz
|
en
| 0.942828 | 799 | 2.78125 | 3 |
Quantum Computer Error Correction Is Getting Practical
(SpectrumIEEE) Michael J. Biercuk, a professor of quantum physics and quantum technology at the University of Sydney and the founder and CEO of Q-CTRL, contributed this SpectrumIEEE guest post.
Quantum error correction—or QEC for short—is an algorithm designed to identify and fix errors in quantum computers. It’s able to draw from validated mathematical approaches used to engineer special “radiation hardened” classical microprocessors deployed in space or other extreme environments where errors are much more likely to occur.
QEC is real and has seen many partial demonstrations in laboratories around the world—initial steps that make it clear it’s a viable approach. 2021 may just be the year when it is convincingly demonstrated to give a net benefit in real quantum-computing hardware.
The challenge comes when we look at the implementation of QEC in practice. The algorithm by which QEC is performed itself consumes resources—more qubits and many operations.
Returning to the promise of 1,000-qubit machines in industry, so many resources might be required that those 1,000 qubits yield only, say, 5 useful logical qubits.
This is why a major public-sector research program run by the U.S. intelligence community has spent the last four years seeking to finally cross the break-even point in experimental hardware, for just one logical qubit. We may well unambiguously achieve this goal in 2021—but that’s the beginning of the journey, not the end.
Crossing the break-even point and achieving useful, functioning QEC doesn’t mean we suddenly enter an era with no hardware errors—it just means we’ll have fewer.
None of this discussion means that QEC is somehow unimportant for quantum computing. And there will always remain a central role for exploratory research into the mathematics of QEC, because you never know what a clever colleague might discover. Still, a drive to practical outcomes might even lead us to totally abandon the abstract notion of fault-tolerant quantum computing and replace it with something more like fault-tolerant-enough quantum computing. That might be just what the doctor ordered.
|
<urn:uuid:fd2dfb9b-be78-451e-9888-da1720e4b578>
|
CC-MAIN-2022-40
|
https://www.insidequantumtechnology.com/news-archive/quantum-computer-error-correction-is-getting-practical/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00237.warc.gz
|
en
| 0.900225 | 468 | 3.125 | 3 |
Authentication Types: SSO and MFA
In the past, you protected your assets with a lock and key; in the digital world, the equivalent is a username and password. This however is not enough anymore, as cyber criminals are becoming much more creative in their tactics to break in and steal your digital assets. It’s now a standard practice to layer two or more of the authentication types to sufficiently defend against cyber threats.
What you know: This authentication type is something that you memorize, like a password, a pin or a passphrase.
What you have: This authentication type is authenticated with a tangible object, such as your smartphone or keychain token.
Who you are: This authentication type uses biometrics, such as fingerprints, retina scans or full face scans available on some modern smartphone devices.
The simplest authentication standard is called Two-Factor Authentication (2FA), the most common form of Multi-Factor Authentication (MFA). Using what you know together with what you have exponentially strengthens your security posture, and reduces the attack surface that cyber criminals can exploit. A common motto emerging from the cybersecurity realm is “MFA everything.”
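The "what you have" factor is most often realized as a time-based one-time password (TOTP, RFC 6238), the mechanism behind authenticator apps. A minimal Python sketch, using only the standard library and a made-up demo secret:

import base64, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a new 6-digit code every 30 seconds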
According to the SANS Institute, the three most common vulnerabilities MFA can reduce are:
- Business email compromise
- Legacy protocols
- Password re-use
You are probably thinking that the number of keys on your keychain is getting too big and that you have too many passwords to remember right? This is very true, and a common mistake is to re-use passwords to make it easier to access all your digital services. Cyber criminals know this too and they are taking advantage of people’s poor password hygiene to break into your services and assets.
It has also been increasingly difficult to create and remember unique passwords for each and every service due to the sheer number of online services we have. To solve this problem, credential managers were developed to help us manage our digital credentials. Lastpass is one of these tools and has been adopted as a standard practice now to help ensure that you are able to generate and use all your credentials when requested.
We sometimes hear from our clients, “Oh we don’t need that level of security – we’re too small for anyone to care to hack in to our systems.” Well that’s simply not true.– Wayne Chow, Director of Cybersecurity
Another tool that has been developed to decrease the number of keys on your keychain is Single Sign-On (SSO). SSO is a way to use one of your main services as your identity provider, such as your email. Pointing your other cloud services to authenticate with one source ensures that you are not inundated with a multitude of credentials to manage and allows your organization to have better control and administration.
If you are interested in a Cybersecurity Assessment for your organization or would like to just talk to us about how you can strengthen your security framework, reach out to us and we can schedule a session with our Cybersecurity team here at Nucleus.
|
<urn:uuid:dec72a1e-d9d9-448e-aec1-07e82d2ef797>
|
CC-MAIN-2022-40
|
https://blog.yournucleus.ca/sso-and-mfa
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00237.warc.gz
|
en
| 0.956306 | 629 | 2.75 | 3 |
Published 5 Months Ago on Saturday, Apr 23 2022 By Ahmad El Hajj
The internet of things (IoT) is certainly not a new concept in the world of technology and engineering. With the improvement in connectivity, new use cases notably IoT healthcare applications have emerged. The idea of connected physical devices or “things” has existed for a long time with software-enabled sensing devices communicating data among others or even with a central entity that handles the processing and analysis of the incoming data.
Wireless sensor networks and mobile ad-hoc networks are all concepts that have existed for a long time and largely contributed to shaping up IoT as it is widely known today. The hype surrounding the advent of 5G as a game-changer in the telecommunications industry and as an enabler for new business-to-business (B2B) and business-to-consumer (B2C) opportunities has notably contributed to the increased investments and adoption of IoT solutions.
The prospective Wi-Fi 7 standard will complement the IoT landscape with its expected ultra-high throughput and low latency. According to Statista, the number of connected IoT devices will increase from around 11 billion in 2022 to 25 billion in 2030, requiring reliable, holistic connectivity. The IoT market is expected to exceed one trillion USD in the same year.
The healthcare industry is among the verticals most affected by the development of IoT solutions, to the extent that a new area, denoted the Internet of Medical Things (IoMT), has emerged. Compared to regular IoT solutions, IoMT solutions cater to the specific requirements of healthcare provision, such as security and utmost reliability.
Numerous use cases justify the implementation of IoT in healthcare, and the introduction of such solutions is disruptive on several fronts, from improved service provision to optimized operations and asset management. Realizing these benefits, however, means confronting some significant challenges.
Cybersecurity has recently been an active area, mainly due to the constantly increasing number of security attacks and breaches. Hackers have been innovative in discovering loopholes and exploiting vulnerabilities for malicious ends. As IoT systems rely on continuous connection availability and reliability, security is a potentially prohibitive factor. This does not bode well for healthcare applications, as the collected data involves sensitive and confidential information. Data integrity and security is therefore the main challenge to large-scale adoption of "internet of healthcare things" solutions.
Another challenge in developing IoT solutions relates to the difficulty of integrating data from different sources. Data is typically channeled from sensors to the decision-making device using different communication standards. At the sink, processing the received information becomes harder, as data cleaning and transformation need to be done before viable learning is possible. This difficulty in integrating data from different sources significantly affects the scalability of IoT solutions in healthcare.
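The sketch below illustrates the integration problem in miniature: two hypothetical vendor payload formats normalized into a single schema before analysis. The field names are invented for illustration only.

```python
# Normalizing heterogeneous sensor payloads into one schema (toy example).
def normalize(payload: dict) -> dict:
    if "hr_bpm" in payload:                      # hypothetical vendor A format
        return {"metric": "heart_rate", "value": payload["hr_bpm"]}
    if "heartRate" in payload:                   # hypothetical vendor B format
        return {"metric": "heart_rate", "value": payload["heartRate"]}
    raise ValueError(f"unknown payload format: {payload}")

records = [{"hr_bpm": 72}, {"heartRate": 68}]
print([normalize(r) for r in records])
# [{'metric': 'heart_rate', 'value': 72}, {'metric': 'heart_rate', 'value': 68}]
```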
A third challenge relates to the availability and reliability of the connections within the IoT network. As healthcare applications require quasi-real-time collection of data, any disruption in the connection can prove costly in terms of data accuracy and errors in the decision-making process. For instance, smart insulin pumps require accurate periodic glucose measurements; any erroneous interpretation of glucose levels can lead to wrongly administered quantities of insulin.
Addressing the challenges related to the integration of IoT solutions in healthcare holds the key to the future of such projects, especially as the companies and manufacturers investing in the field need some guarantees before continuing their investments.
As security is the top priority in such applications, advanced security-enforcing systems have been investigated. Among them is the use of blockchain technology to improve security in data management and operations, in particular data integrity, access control, and privacy preservation. The distributed ledger can be used to validate data exchanges between IoT devices, reducing potential attacks.
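A toy hash chain captures the tamper-evidence idea behind such ledgers in a few lines. This is a minimal sketch of the concept only; real blockchain deployments add consensus and replication across many nodes.

```python
# Tamper-evident hash chain: each record commits to the previous record's hash.
import hashlib, json

def add_record(chain: list, reading: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {"reading": block["reading"], "prev": block["prev"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain: list = []
add_record(chain, {"device": "glucose-01", "mg_dl": 104})
add_record(chain, {"device": "glucose-01", "mg_dl": 99})
print(verify(chain))                  # True
chain[0]["reading"]["mg_dl"] = 250    # tamper with history
print(verify(chain))                  # False -- every later hash now breaks
```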
The harmonization of communication standards for healthcare IoT applications is also essential in delivering efficient solutions.
As technology evolves and digital transformation proceeds at full throttle, new IoT use cases will certainly be developed in the healthcare sector. Digital twins, for example, build on IoT measurements to construct a virtual model of the patient, opening up customized treatment and monitoring opportunities.
Cost reduction and preventive services will also be at the center of future use cases. The current coronavirus pandemic has taught us how fragile the healthcare system is. Through proper data collection and interpretation, higher resilience to future challenges can be achieved.
When advancements in electronics, communications, computing, and storage come together, the internet of things becomes a paradigm that could well disrupt different industries, notably healthcare. The benefits of developing healthcare IoT solutions are numerous, ranging from improved service provision to optimized operations and asset management. As the medical environment is heavily regulated, solutions need to mature by addressing several challenges, notably security-related issues. The future certainly holds a lot of positive prospects for IoT in healthcare. It is up to companies and manufacturers to make the best of it!
|
<urn:uuid:f99ca541-0dc4-4f32-9579-e67a26966b99>
|
CC-MAIN-2022-40
|
https://insidetelecom.com/iot-healthcare-when-overcoming-challenges-holds-the-key-for-a-flourishing-market/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00437.warc.gz
|
en
| 0.942127 | 1,121 | 2.671875 | 3 |
Skimming attacks occur when malicious actors steal credit or debit card data, create fake accounts, and then spend money that doesn’t belong to them. Skimming costs customers and businesses more than $1 billion each year. The rise of e-commerce presents a new opportunity for attackers to compromise and capture data online during the purchase and checkout process.
Here’s what companies need to know about the evolution of skimming attacks, how they target digital consumers, and what they can do to reduce their total risk.
First-generation skimming methods relied on physical card readers such as those on ATMs or point-of-sale (POS) terminals. By installing additional hardware on top of existing card scanners or readers that mimicked the form and function of legitimate devices, users unwittingly swiped their cards or entered their PINs only to have them stolen by criminals.
Armed with this data, fraudsters created new cards that worked just like their original counterparts, making it possible for them to spend someone else’s money in-person or online. To counter attacks, chip-based cards using the Europay, Mastercard and Visa (EMV) standard were developed, which helped frustrate attacker efforts.
The uptick of online purchasing, however, has created a new opportunity for attackers: web skimming.
Skimming in Cyber Security
While online shopping was already on the rise, pandemic pressures pushed it to the purchasing forefront. Offering speed and safety, digital card transactions have seen massive growth over the past two years.
Given the non-physical nature of these transactions, it’s logical to conclude that attacks would largely dissipate — without a device to compromise, how could attackers capture credit and debit card data?
Enter Magecart, now one of the most popular web skimming tools available. Much like its generation 1.0 counterpart, this digital skimming technique looks to capture card data in use. Attackers first look for e-commerce sites with minimal security measures and inject malicious payloads that bury themselves in legitimate site code. When customers move to the purchasing stage and enter their card details, skimming malware captures, copies and transmits credit data back to malicious actors, who then use this information to create digital replicas and defraud customers.
In some ways, digital attacks are more dangerous than their physical counterparts. Since e-commerce sites don’t store credit card information, attackers have found ways to target the point of digital sale and obtain even more detailed data.
The Impact of Digital Data Loss
Data loss due to attacks can lead to negative impacts in three key areas:
- Consumer confidence: Trust plays a critical role in consumers’ willingness to provide their credit or debit card information online. Successful attacks undermine this trust and make customers far less willing to make purchases or share digital data, in turn, reducing your total sales volume.
- Remediation and reputation costs: Once attacks are identified on your site, it’s critical to address them quickly and completely. In some cases, however, this means temporarily shutting down digital purchasing until all traces of compromise are identified and eliminated. In practice, this is costly — from a revenue and reputational standpoint.
While your site is down, you’re not generating revenue, and substantial spending may be required to improve overall security. Customers, meanwhile, may not wait until your site is fixed; instead, they may choose to take their business elsewhere.
- Operational compliance: Compliance is also a concern. Rules such as PCI DSS, CCPA and other local legislation can lead to fines or penalties that may impact operations if companies can’t deliver on due diligence requirements around data collection and protection.
Credit Where Credit is Due
To reduce the risk of digital attacks, companies must take proactive, protective action that prevents malicious actors from installing malicious code.
This starts with regular updating and patching of your site to help ensure attackers can't exploit known vulnerabilities. It's also worth partnering with security and compliance experts to conduct a risk assessment of your current security posture and penetration testing to pinpoint potential problems.
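One concrete browser-side mitigation against web skimmers is a strict Content-Security-Policy, which limits where checkout-page scripts can load from and where they can send data. The sketch below shows the idea using Python's standard library; the header values and payment-provider domain are illustrative placeholders, not a recommended production policy.

```python
# Serving a checkout page with a restrictive Content-Security-Policy header.
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = (
    "default-src 'self'; "
    "script-src 'self' https://js.example-psp.com; "  # hypothetical payment provider
    "connect-src 'self'"  # blocks exfiltration to attacker-controlled domains
)

class CheckoutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Security-Policy", CSP)
        self.end_headers()
        self.wfile.write(b"<html><body>Checkout page</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CheckoutHandler).serve_forever()
```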
If you’ve been victimized by attacks, incident response and forensic services from trusted providers can identify the attack vectors used by cybercriminals and help your teams craft an effective remediation strategy.
Skimming has gone digital, and now poses a significant threat to customer credit data, company reputation and effective regulatory compliance. By proactively protecting systems with scheduled patching, regular risk analyses, and continuous vulnerability testing, however, businesses can limit the impact of skimming at scale.
Ready to reduce the risk of successful skimming? See how HALOCK can help.
|
<urn:uuid:8642ef65-4520-4b0c-9f2f-5932bd1605be>
|
CC-MAIN-2022-40
|
https://www.halock.com/understanding-skimming-in-cyber-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00437.warc.gz
|
en
| 0.934928 | 964 | 2.59375 | 3 |
Encryption functions are algorithms designed to render data unreadable to anyone that does not have the decryption key. Data encrypted with a strong encryption algorithm can be transmitted over a public channel with no fear of eavesdroppers.
By default, email protocols have no built-in encryption, meaning that someone who intercepts an email in transit could read its contents. Email encryption addresses this issue by encrypting sensitive emails so that only the intended recipients can read them.
The Importance of Email Encryption
Emails can contain sensitive corporate data or personal information protected under data privacy laws. If these emails are intercepted and viewed by an unauthorized party, they could reveal intellectual property or trade secrets or put an organization at risk of legal penalties for regulatory non-compliance.
Email encryption enables an organization to protect the privacy and security of its communications and to maintain regulatory compliance. As a result, it is a core component of a corporate data and email security program and a common requirement of data privacy laws.
How Does Email Encryption Work?
Data encryption can be performed using symmetric or asymmetric encryption algorithms. Symmetric encryption uses the same secret key for encryption and decryption, while asymmetric or public key cryptography uses a public key for encryption and a related private key for decryption.
While it is possible to use symmetric cryptography for email encryption, this requires the ability to securely share a secret key with the intended recipient of the message. If this key is sent by email, the email would have to be unencrypted for the recipient to read it, so an eavesdropper could intercept this email and use the enclosed key to decrypt the encrypted email.
As a result, many email encryption schemes use asymmetric cryptographic algorithms. With asymmetric cryptography, the key used for encryption is public, so it can be sent over insecure email or posted on a website. For example, Check Point’s public key for reporting security issues via secure email is located here.
With a user’s public key, it is possible to generate an encrypted email that cannot be read by an eavesdropper. When the intended recipient receives the email, they decrypt it with the corresponding private key, producing the original message.
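A minimal sketch of this public-key round trip, using the third-party Python cryptography package, looks like the following. Real email encryption schemes such as S/MIME and PGP layer certificates, signatures, and hybrid encryption on top of this basic primitive.

```python
# Asymmetric encrypt/decrypt round trip (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a keypair; the public key can be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's public key.
ciphertext = public_key.encrypt(b"Quarterly results attached -- confidential.", oaep)

# Only the matching private key can recover the message.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext.startswith(b"Quarterly")
```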
Types Of Email Encryption
The main challenge with using public key cryptography for email encryption is distributing and authenticating a user's public key. Email encryption provides no benefit if the public key that is used belongs to an eavesdropper rather than the intended recipient.
Different types of email encryption take different approaches to the distribution of these public keys. Two of the most common forms of email encryption include:
- Secure/Multipurpose Internet Mail Extensions (S/MIME): S/MIME is the most commonly-used email encryption protocol because it is built into many mobile devices and webmail platforms. S/MIME uses a centralized public key infrastructure (PKI) to create, distribute, and validate public keys. For example, an IT administrator may act as a root certificate authority (CA) that distributes digital certificates to employees that link their identity to their public key. These certificates can be distributed via the corporate email system to allow employees to send encrypted messages to one another.
- Pretty Good Privacy (PGP): PGP relies on a more decentralized and informal method of generating and distributing public keys. Users generate their own public/private keypairs and distribute their own public keys. The Check Point public key mentioned above is an example of a PGP key. PGP is not built into as many email systems and may require third-party software to encrypt and decrypt emails.
Benefits Of Email Encryption
Email encryption is a powerful tool for data privacy and security. Some of the main benefits that email brings to an organization include:
- Data Privacy and Security: Email encryption makes it possible to prevent eavesdroppers from reading intercepted emails. This helps to protect the privacy and security of sensitive corporate and customer data that may be contained within or attached to an email.
- Authentication: Email encryption ensures that an email can only be opened and read by someone with the appropriate private key. Combined with digital signatures, this can help to protect against email spoofing attacks where someone pretends to be a coworker or other trusted party.
- Regulatory Compliance: Data protection regulations commonly mandate that personal protected data be encrypted both while at rest and in transit. Email encryption helps an organization to comply with this second requirement.
Secure Your Email with Check Point
When email protocols were first defined, data privacy and security were not a primary concern, so many email and other Internet protocols are unencrypted by default. As a result, an eavesdropper may be able to intercept, read, and potentially modify these communications.
Email encryption helps to mitigate the threat of these man-in-the-middle (MitM) attacks by rendering an eavesdropper unable to read intercepted emails. Check Point and Avanan’s Harmony Email and Collaboration offers built-in email encryption functionality.
|
<urn:uuid:20963dc9-1b77-4978-a737-dec1ac3ad44b>
|
CC-MAIN-2022-40
|
https://www.avanan.com/blog/what-is-email-encryption
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00437.warc.gz
|
en
| 0.915155 | 1,010 | 3.546875 | 4 |
According to estimates from 2020, data centers around the world utilize about one percent of all electricity generated. While this may not sound like much, it’s important to look at the raw numbers as well. The best estimates in 2018 placed global data center energy consumption at about 205 terawatt hours (TWh), which was more than the total power used annually in highly industrialized countries (for example Sweden, which consumed 123 TWh in 2020).
But despite the drastic increase in digital services and applications over the last two decades, data centers have not seen their share of energy usage accelerate along with it. From 2007 to 2020, the average power usage effectiveness (PUE) that organizations reported for their largest data centers plummeted from 2.5 to 1.59.
A History of Data Center Sustainability
This generally positive trend was the result of two successive waves of data center sustainability movements. The first step was the transition away from outdated, poorly designed, and generally inefficient on-premises facilities into more modern colocation and cloud data centers. Simply migrating assets out of these environments produced huge power efficiency gains and cost savings, especially when it was also combined with a consolidation of sprawling deployments that managed extremely inefficient workloads.
The second sustainability improvement has played out over the last decade as colocation and cloud providers doubled down on automation and AI technologies that allowed them to drastically reduce cooling costs. Cooling infrastructure has long been one of the culprits of data center power consumption, but a new generation of DCIM software and innovative cooling solutions have allowed organizations to scale their capacity without gobbling up a larger share of electricity.
The Limits of Efficiency Gains
But as impressive as these gains may be, there are still harsh realities to be faced when it comes to total power consumption. According to simulations run in a 2021 study of data center electricity needs, consumption could reach 321 TWh in 2030 if all existing growth factors remain the same. When accounting for the growth of the industrial Internet of Things (IoT) and the potential end of Moore's law, that number could skyrocket to 752 TWh, or 2.13 percent of all global electricity.
These predictions echo a 2020 finding by the Uptime Institute that the average data center PUE of 1.58 has essentially flatlined since 2013. While the latest and most efficient facilities using the most advanced technologies can get that number as low as 1.2 or 1.4, there are still thousands of older facilities that operate at much less efficient rates. Furthermore, there seems to be little reason to hope that data centers will be able to consistently get closer to 1.0 PUE. Even if they could, no amount of efficiency can overcome the sheer quantity of electricity needed to keep these facilities up and running.
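For reference, PUE is simply total facility energy divided by the energy delivered to IT equipment, so a tiny calculation makes the flatline concrete. The numbers below are illustrative only.

```python
# PUE = total facility energy / IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,590 kWh overall to deliver 1,000 kWh of IT load:
print(pue(1590, 1000))  # 1.59 -- every watt of compute costs 0.59 W of overhead
```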
Data Center Sustainability 3.0
In order for data centers to take a significant step toward combating energy usage in the coming years, they will need to combine their efficiency goals with an effort to become carbon-neutral and make better use of more sustainable sources of renewable energy.
Europe’s Climate Neutral Data Centre Pact, launched in 2021 and now supported by dozens of major providers, is one such example of how data centers are exploring ways to embrace true sustainability in the future. The pact set a number of efficiency and green energy targets in response to the European Union’s commitment to become climate-neutral by 2050, many of which include innovative solutions like integrating data centers with growing smart grids, investing in ways to convert wasted heat into an energy source, and making more efficient use of water (including revising and refining current water usage effectiveness [WUE] metrics).
Green Powered Shell Data Centers
For organizations looking for a custom data center solution that provides maximum flexibility and the very latest in engineering and infrastructure, build-to-suit powered shell offerings are an increasingly attractive alternative. Far more economical and less capital intensive than building a standalone facility, powered shell solutions are ideal for hybrid deployments that keep essential data and workloads on physical servers in a private data center.
More importantly, build-to-suit data centers can also take advantage of the very latest in renewable energy and sustainability practices. An efficient facility equipped with AI-powered DCIM capabilities can also be designed to incorporate behind-the-meter, carbon neutral solutions that generate renewable energy on-site to lower the data center’s dependency on fossil fuel energy. As more organizations look for ways to minimize their carbon footprint, green powered shell facilities will become much more attractive from a sustainability and peace of mind standpoint.
Explore Evoque’s Build-to-Suit Capabilities
In addition to managing some of the world's most energy-efficient colocation facilities, Evoque Data Center Solutions is pioneering a new way forward with its build-to-suit solutions for enterprise, hyperscale, and government customers. Making these powered shell projects as sustainable as possible was one of our primary goals in developing the service. Benefiting from the extensive sustainability experience of our parent company, Brookfield Energy Partners, we are proud to be going to market with our green powered shell build-to-suit offering. Together, we're working toward building a more sustainable vision of tomorrow's data center.
To learn more about our innovative build-to-suit solutions, talk to a member of our data center build team and tell us all about your data center and application needs.
|
<urn:uuid:e44ec982-1270-4d1b-9707-8eb4e3ca8c5e>
|
CC-MAIN-2022-40
|
https://www.evoquedcs.com/blog/the-age-of-data-center-sustainability-3.0
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00437.warc.gz
|
en
| 0.94179 | 1,105 | 3.328125 | 3 |
"NASA's Mars Perseverance Rover is equipped with sophisticated scientific instruments to aid in the search for signs of past life on Mars," said Jacobs Critical Mission Solutions senior vice president Steve Arnette. "The calibration device is another example of Jacobs' long standing partnership with NASA in delivering innovative technologies that lead to scientific discoveries."
One of the devices, called the Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals (SHERLOC), will be used to detect chemicals on the Martian surface that are linked to the possible existence of ancient life.
The calibration device is mounted on the front of the rover so that researchers can check SHERLOC's analytical instrumentation's accuracy by directing it to scan the baseline materials on the calibration target.
The researchers will know in advance what the readings on those materials should be when SHERLOC is working correctly. If the actual readings are off, they can make adjustments to SHERLOC to get it set properly or know to compensate for the errors when they analyze the data later.
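The underlying logic is straightforward: compare measured values on reference materials against known expected values and track the offsets. The sketch below is a hypothetical illustration of that idea; the names and numbers are invented, not NASA's.

```python
# Toy calibration check: derive correction offsets from reference materials.
EXPECTED = {"sample_a": 100.0, "sample_b": 250.0}  # known reference values

def calibration_offsets(measured: dict) -> dict:
    return {k: round(EXPECTED[k] - v, 2) for k, v in measured.items()}

readings = {"sample_a": 98.7, "sample_b": 247.9}
offsets = calibration_offsets(readings)
# If offsets drift beyond tolerance, adjust the instrument or correct data later.
print(offsets)  # {'sample_a': 1.3, 'sample_b': 2.1}
```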
"The rover's scientific instruments go through all sorts of harsh conditions from the time they leave the lab until they arrive on the surface of Mars," said Jacobs chief scientist Trevor Graff. "SHERLOC needed a way to make sure it still operates as expected once it's on the surface and throughout the duration of the mission."
The Mars Perseverance Rover was launched on July 30th, 2020 to find signs of past microbial life and collect rock and soil samples. NASA's rover will travel over a seven-month period to the red planet.
At Jacobs, we're challenging today to reinvent tomorrow by solving the world's most critical problems for thriving cities, resilient environments, mission-critical outcomes, operational advancement, scientific discovery and cutting-edge manufacturing, turning abstract ideas into realities that transform the world for good.
With $13 billion in revenue and a talent force of more than 55,000, Jacobs provides a full spectrum of professional services including consulting, technical, scientific and project delivery for the government and private sector.
|
<urn:uuid:283e5be2-051b-4904-9f58-6e3040dea5f2>
|
CC-MAIN-2022-40
|
https://executivegov.com/2020/08/jacobs-teams-with-nasas-jpl-to-support-mars-perseverance-rover-steve-arnette-trevor-graff-quoted/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00437.warc.gz
|
en
| 0.921079 | 423 | 3.0625 | 3 |
June 15, 2022 — In this series, Argonne is examining the range of activities and collaborations that ALCF staff undertake to guide the facility and its users into the next era of scientific computing.
Bethany Lusch and Murali Emani are computer scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory, with a decade of high-performance computing (HPC) experience between them. Their current work includes leading efforts to prepare a programming library, oneDAL, and machine learning package, scikit-learn, for the rollout of the upcoming exascale system, Aurora, at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility.
Lusch and Emani, part of the ALCF’s Data Science group, answered questions about their highly collaborative research, which is helping to enable crucial machine learning and data science features on Aurora.
What are oneDAL and scikit-learn?
oneDAL is Intel’s data analytics library, and it includes classical machine learning algorithms such as support vector machines and decision forests. Scikit-learn is a popular open-source Python package for machine learning. The Python package Intel Extension for scikit-learn enables acceleration of scikit-learn by employing oneDAL as a backend with only minimal code changes. oneDAL, which is being optimized for Intel GPUs (graphics processing units) so that it can be scaled for full deployment on Aurora, can also be used to speed up XGBoost, a popular open-source Python machine learning library.
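In practice, enabling the oneDAL backend from Python is a small change. The sketch below uses the patch_sklearn entry point from the Intel Extension for Scikit-learn (package scikit-learn-intelex); the dataset and model are arbitrary examples.

```python
# Accelerating scikit-learn with oneDAL (pip install scikit-learn-intelex).
from sklearnex import patch_sklearn
patch_sklearn()  # re-routes supported estimators to oneDAL backends

# Existing scikit-learn code runs unchanged after patching.
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```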
What are the challenges in preparing these libraries for Aurora?
Intel leads the efforts to actually write the software and develop the necessary interfaces; what we're doing is a form of evaluation—we provide input and feedback to Intel, we help prioritize various aspects of development, we communicate with the science teams at Argonne to help ensure the most useful possible product is built, and so on. oneDAL and scikit-learn both entirely lacked GPU implementations when we started. The other major challenges we face include prioritizing support for the most widely or heavily used algorithms, enabling distributed implementation across multiple GPUs and at full scale on Aurora, and facilitating interoperability with other data science libraries. We also want users to be able to use oneDAL from multiple programming languages, such as Python and C++. There's also the problem of needing to create new interfaces where none currently exist, but making them familiar or intuitive so it's easy for the user to port to Intel hardware. We require input from the science teams to determine which features are most important.
How does this work build on prior development or research you’ve done?
We’ve always made use of traditional machine learning algorithms in our research, but the difference is that is in the past we were running them on CPUs, whereas the work being done in preparation for Aurora extends the existing oneDAL implementations to run on Intel GPUs. Being able to run on Intel GPUs will help accelerate the machine-learning training process by leveraging their massive compute capacity. Furthermore, we want to develop the interfaces of required APIs in a way that coordinates a sense of continuity with existing interfaces; this is to help make GPUs accessible to users, and minimizes the difficulties associated with code refactoring.
Who do you collaborate with for this work?
We work closely with the science teams, including projects supported by the Aurora Early Science Program, that use ML algorithms in their research and are interested in integrating their codes with these libraries for deployment on Aurora. We also meet regularly with Intel’s oneDAL team to provide feedback; another part of our work involves testing their software on early hardware at Argonne’s Joint Laboratory for System Evaluation. Because oneDAL interacts with other Python data science libraries, we meet with the relevant Intel teams and the corresponding groups at Argonne.
How has your approach to preparing these libraries changed or evolved throughout the development process?
We started by developing a consensus about which algorithms would be most crucial driven by science use-cases. We then honed our initial discussions by collecting and examining specific case studies from the science teams. More recently we’ve been focused on providing feedback to Intel for advanced features, scaling across GPUs, scaling across nodes, APIs, and general performance.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Source: Nils Heinonen, Argonne Leadership Computing Facility
|
<urn:uuid:2cf352d1-af17-4760-86a4-6f278beb6baf>
|
CC-MAIN-2022-40
|
https://www.hpcwire.com/off-the-wire/argonnes-bethany-lusch-and-murali-emani-help-enable-ml-capabilities-on-aurora/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00437.warc.gz
|
en
| 0.921331 | 1,185 | 2.765625 | 3 |
Sesame Workshop, which brought Elmo, Big Bird and other beloved characters to countless children worldwide, is teaming up with IBM to advance early-childhood education.
By combining Sesame Workshop’s expertise in children’s content with the cognitive computing technology of IBM’s Watson, the partners plan to develop an adaptive, personalized learning platform and products for preschoolers. They are working with leading teachers, researchers, gamers and other experts to brainstorm the best ways to help children learn during the first five years of their lives, a period of significant brain development.
Each company brings unique skills to the table. “Our core expertise is creating content that appeals to children,” says Stephen Youngwood, chief operating officer of Sesame Workshop, based in New York, N.Y. “Nobody has as much experience as us.”
IBM’s Watson offers natural-language processing, pattern recognition and other cognitive computing capabilities that can help create personalized experiences. Watson can correlate and learn from huge amounts of data and adapt to the aggregate experiences of anonymized groups of students.
The learning platform will include digital content that will most likely be delivered over the Internet and used on a touch device such as a tablet or smartphone or on a computer. Content will be relevant to an individual child’s interests and geared to the appropriate level of aptitude.
“Imagine content that would engage a particular child—whether that’s dinosaurs, fire trucks or mermaids—with a number of words and a level of complexity suitable for that child,” Youngwood offers as an example. What’s unique about this venture is the adaptive nature of the platform, he says, adding, “The content will evolve as a child learns and gains greater aptitude.”
In addition to kid-friendly content, there will be tools for adults. “For the greatest impact, we know we have to engage not only the kids, but their teachers and caregivers,” Youngwood says. For example, some products could help a teacher judge what students need and like and then adjust their lessons accordingly. “The best teachers can read a room and figure out what will connect with a kid,” he says.
Testing Interactive Platforms and Interfaces
Details on the platform and tools, and the staff involved in creating them, aren’t available at this early stage of the three-year agreement, but projects are already in the works. “There are a few things we’re playing around with now,” Youngwood says, including tests of a variety of interactive platforms and interfaces for use in homes and schools.
Once prototypes are ready, Sesame Workshop and IBM will seek feedback from leaders in technology and education to allow continued refinement of the products.
“Our hope is that over the next 12 months, we’ll launch pilot programs in partnership with some schools or other organizations in various parts of the country,” Youngwood says. “Like IBM, we are a global company, and it’s likely that one of the pilots will be outside the U.S.” The partners haven’t yet identified any schools or other organizations they might be working with.
The ultimate goal is to provide children from all socioeconomic backgrounds around the world with the opportunity for meaningful, personalized education in their most formative years. How that will be funded and implemented hasn’t yet been disclosed.
Though the venture relies on harnessing technology in the interests of education, Youngwood says they won’t lose sight of one key point: “It has to be fun. If you don’t engage children, you can’t teach them.”
|
<urn:uuid:d5a9b5c9-cf71-49d4-9400-622776b90df8>
|
CC-MAIN-2022-40
|
https://www.baselinemag.com/innovation/big-bird-and-watson-team-up-to-teach-preschoolers/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00437.warc.gz
|
en
| 0.951368 | 772 | 2.84375 | 3 |
When Alex Rosenberg, PhD, and Charlie Roco, PhD, were graduate students in Georg Seelig's lab at the University of Washington, they drew out their idea for how to increase the scalability of single-cell RNA sequencing (scRNA-seq) on a whiteboard. At that time, roughly five years ago, "large scale was about 100 cells," said Rosenberg. They developed their idea into the technique known as Split Pool Ligation-based Transcriptome sequencing (SPLiT-seq).
According to Rosenberg, the emails started rolling in as soon as the proof-of-concept paper was published in Science in 2018. A “huge number” of groups reached out, he said, asking how to set up SPLiT-seq in their own labs.
|
<urn:uuid:19b95da5-2490-4220-86d4-c5371776aef4>
|
CC-MAIN-2022-40
|
https://biopharmacurated.com/single-cell-rna-sequencing-may-be-split-by-parse-biosciences/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00437.warc.gz
|
en
| 0.980123 | 166 | 2.65625 | 3 |
New Training: Explain Linux Kernel and Boot Concepts
In this 8-video skill, CBT Nuggets trainer Shawn Powers explains the Linux boot process, including UEFI and BIOS, the Grub bootloader, and kernel modules. Watch this new Linux training.
Watch the one of these courses:
This training includes:
43 minutes of training
You’ll learn these topics in this skill:
BIOS and UEFI
GRUB and GRUB2 Bootloaders
Boot File Locations
Boot Modules and Files
Loading Kernel Modules on Boot
Manipulating Kernel Modules
The Linux Boot Process in 6 Steps
The Linux boot process is straightforward and consists of six main steps:
BIOS: The Basic Input/Output System kicks on first, loading and then executing the Master Boot Record boot loader. It performs integrity checks on the hard drive, then detects the boot loader program and loads it into the memory.
MBR: The Master Boot Record loads and executes the GRUB boot loader. It is located in the first sector of the bootable disk.
GRUB: In very old systems, this was LILO. In modern systems, the GRand Unified Bootloader offers a simplified menu that allows you to select the kernel image you want to boot with (if you have more than one installed). The default kernel image is the latest one installed.
Kernel: This is the core of an operating system and controls everything within it. Once the GRUB selects it, the kernel mounts the root file system and then executes the /sbin/init program. The temporary root file system is established until the real file system can be mounted.
Init: Linux will choose one of the seven traditional runlevels to execute, numbered 0 through 6 (runlevel 1 is single-user mode, and an emergency mode is also available for recovery).
Finally, Linux executes the runlevel programs, which are located in directories corresponding to each runlevel (0 through 6).
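As a small companion to the kernel-module topics covered above, the following sketch lists currently loaded modules by parsing /proc/modules on a Linux system, which is roughly what the lsmod command reports.

```python
# Inspect loaded kernel modules by parsing /proc/modules (Linux only).
def loaded_modules(path: str = "/proc/modules") -> list:
    modules = []
    with open(path) as f:
        for line in f:
            name, size, *_ = line.split()   # name and size are the first two fields
            modules.append((name, int(size)))
    return modules

# Print the ten largest modules by memory footprint.
for name, size in sorted(loaded_modules(), key=lambda m: -m[1])[:10]:
    print(f"{name:25s} {size:>10d} bytes")
```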
|
<urn:uuid:87b8bfab-c105-4d99-9a12-0f58e13a0d04>
|
CC-MAIN-2022-40
|
https://www.cbtnuggets.com/blog/new-skills/new-training-explain-linux-kernel-and-boot-concepts
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00437.warc.gz
|
en
| 0.842891 | 400 | 3.328125 | 3 |
In January of 2021, law enforcement and judicial authorities across the globe disrupted one of the most notable botnets of the past decade: Emotet. Investigators have taken control of its infrastructure in an internationally-coordinated operation. For readers who don’t know, Emotet is malware operated by a Russian cybercrime organization first detected in 2014.
How Did It Work?
The Emotet hackers used a fully automated email delivery process, distributing malware to victims' computers through infected e-mail attachments, using phishing attacks as their primary attack method. A variety of effective campaigns were used to trick unsuspecting users into opening these infected attachments. In the recent past, Emotet phishing campaigns presented invoices, shipping notices, and information about COVID-19 to targets with alarming success rates. Each email contained a malicious Word document, either attached to the email itself or downloadable by clicking on a link within the email message. Once a user opened one of these documents, they were prompted to "enable macros" so that malicious code hidden in the Word file could run and install the malware on the victim's device.
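One simple defensive heuristic against lures like these is flagging inbound attachments whose file types commonly carry macros. The sketch below is illustrative only; a real mail gateway combines many more signals than a file extension.

```python
# Naive mail-filter heuristic: flag attachment types that can carry macros.
MACRO_EXTENSIONS = {".doc", ".docm", ".xls", ".xlsm", ".pptm"}

def is_suspicious(filename: str) -> bool:
    name = filename.lower()
    return any(name.endswith(ext) for ext in MACRO_EXTENSIONS)

for attachment in ["invoice_Q3.docm", "shipping_notice.pdf", "covid_info.doc"]:
    print(attachment, "->", "quarantine" if is_suspicious(attachment) else "deliver")
```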
What made the Emotet malware strain so alarming is that it was offered for sale to other hackers via the Dark Web. This allowed multiple criminal organizations to put the malware to use across the globe. This type of attack is one of the biggest cybercrime threats in the world today. Emotet grew quickly and was frequently used to deliver other major malware strains, including TrickBot and Ryuk.
How Did the Authorities Take Down Emotet?
The system used by Emotet involved hundreds of servers located across the globe, all having different functionalities to manage machines of the infected victims, spread the malware, serve other criminal groups, and make the network more resilient against takedown attempts.
To critically disrupt the Emotet infrastructure, law enforcement agencies from around the world teamed up. The United States, Canada, UK, Ukraine, France, Netherlands, Lithuania, and Germany all participated in the Emotet takedown. The result of their efforts was that law enforcement and judicial authorities now control Emotet's infrastructure. Now in control of Emotet's command-and-control infrastructure, law enforcement is testing the issuance of a mass command instructing the malware to uninstall itself (this approach and its results have yet to be confirmed).
It’s unquestionably good news that these countries banded together to take down this prolific cybercrime operation. The bad news is that cybercrime always has new and upcoming actors; when one operation gets shut down, others inevitably move in to try to fill that hole. Even though the law enforcement agencies have taken control of Emotet’s systems, until the hackers are arrested and convicted, there’s a chance they’ll rebuild their infrastructure and go back to their old ways.
What Can you Do?
Malware like Emotet is polymorphic in nature, meaning the malware changes its code and strategy often. Since many anti-virus and anti-malware programs scan the computer for known malware patterns, a code change can cause difficulties for its detection, allowing the infection to go undetected. That’s why it’s important to have a strong combination of cybersecurity tools (anti-virus/malware and operating systems), cybersecurity awareness training, and policy governance to avoid falling victim to advanced threats like Emotet. Users should always carefully check their emails to avoid opening messages or attachments from unknown senders. If a message seems too good to be true – it likely is! Oftentimes, the best line of defense is a human firewall, train and govern your staff to stop the threats where they start.
|
<urn:uuid:a289722c-4a68-4eec-959a-96c138fa6502>
|
CC-MAIN-2022-40
|
https://cyberhoot.com/blog/emotet-operation-takedown/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00437.warc.gz
|
en
| 0.932534 | 768 | 2.75 | 3 |
"To see a world in a grain of sand," the opening line of the poem by William Blake, is an oft-used phrase that also captures some of what geologists do.

We observe the composition of mineral grains, smaller than the width of a human hair. Then, we extrapolate the chemical processes they suggest to ponder the evolution of our planet itself.

Now, we've taken that minute attention to new heights, connecting tiny grains to Earth's place in the galactic environment.
Looking Out to the Universe
At an even bigger scale, astrophysicists seek to understand the universe and our place in it. They use laws of physics to develop models that describe the orbits of astronomical objects.

Although we might think of the planet's surface as something shaped by processes entirely within Earth itself, our planet has undoubtedly felt the effects of its cosmic environment. These include periodic changes in Earth's orbit, variations in the sun's output, gamma ray bursts, and of course meteorite impacts.

Simply looking at the moon and its pockmarked surface should remind us of that, given Earth is more than 80 times more massive than its grey satellite. In fact, recent work has pointed to the importance of meteorite impacts in the production of continental crust on Earth, helping to form buoyant "seeds" that floated on the outermost layer of our planet in its youth.
We and our international team of colleagues have now identified a rhythm in the production of this early continental crust, and the tempo points to a truly grand driving mechanism. This work has just been published in the journal Geology.
The Rhythm of Crust Production on Earth
Many rocks on Earth form from molten or semi-molten magma. This magma is derived either directly from the mantle—the predominantly solid but slowly flowing layer below the planet's crust—or from recooking even older bits of pre-existing crust. As liquid magma cools, it eventually freezes into solid rock.

Through this cooling process of magma crystallization, mineral grains grow and can trap elements such as uranium that decay over time and produce a sort of stopwatch, recording their age. Not only that, but crystals can also trap other elements that track the composition of their parental magma, like how a surname might track a person's family.
With these two pieces of information—age and composition—we can then reconstruct a timeline of crust production. Then, we can decode its principal frequencies using the mathematical wizardry of the Fourier transform. This tool essentially decodes the frequency of events, much like unscrambling ingredients that have gone into the blender for a cake.
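As a toy illustration of the method, the sketch below uses numpy's FFT to recover the dominant period from a noisy synthetic signal. The data is fabricated for demonstration and is not the study's actual record.

```python
# Recover the dominant period of a synthetic, noisy time series with an FFT.
import numpy as np

t = np.arange(0, 4000, 10)                      # time in millions of years
signal = np.sin(2 * np.pi * t / 200)            # a built-in 200-Myr rhythm
signal += 0.5 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=10)           # cycles per Myr

# Skip the zero-frequency (DC) bin, then invert the peak frequency.
dominant_period = 1 / freqs[spectrum[1:].argmax() + 1]
print(f"Dominant period = {dominant_period:.0f} million years")  # ~200
```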
Our results from this approach suggest an approximate 200-million-year rhythm to crust production on the early Earth.
Our Place in the Cosmos
But there is another process with a similar rhythm. Our solar system and the four spiral arms of the Milky Way are both spinning around the supermassive black hole at the galaxy's center, yet they are moving at different speeds.

The spiral arms orbit at 210 kilometers per second, while the sun is speeding along at 240km per second, meaning our solar system is surfing into and out of the galaxy's arms. You can think of the spiral arms as dense regions that slow the passage of stars, much like a traffic jam that only clears further down the road (or through the arm).

This model results in roughly 200 million years between each entry our solar system makes into a spiral arm of the galaxy.

So, there seems to be a possible connection between the timing of crust production on Earth and the length of time it takes to orbit between the galactic spiral arms—but why?
Strikes From the Cloud
In the distant reaches of our solar system, a cloud of icy rocky debris named the Oort cloud is believed to orbit our sun.

As the solar system periodically moves into a spiral arm, interaction between it and the Oort cloud is proposed to dislodge material from the cloud, sending it closer to the inner solar system. Some of this material may even strike Earth.

Earth experiences relatively frequent impacts from the rocky bodies of the asteroid belt, which on average arrive at speeds of 15km per second. But comets ejected from the Oort cloud arrive much faster, on average 52km per second.

We argue it is these periodic high-energy impacts that are tracked by the record of crust production preserved in tiny mineral grains. Comet impacts excavate huge volumes of Earth's surface, leading to decompression melting of the mantle, not too dissimilar from popping a cork on a bottle of fizz.

This molten rock, enriched in light elements such as silicon, aluminium, sodium, and potassium, effectively floats on the denser mantle. While there are many other ways to generate continental crust, it is likely that impacts on our early planet formed buoyant seeds of crust. Magma produced by later geological processes would adhere to these early seeds.
Harbingers of Doom, or Gardeners for Terrestrial Life?
Continental crust is vital in most of Earth's natural cycles—it interacts with water and oxygen, forming new weathered products, and hosts most metals and biological carbon.

Massive meteorite impacts are cataclysmic events that can obliterate life. Yet, impacts may very well have been key to the development of the continental crust we live on.

However we came to be here, it is awe-inspiring on a clear night to look up at the sky and see the stars and the structure they trace, and then look down at your feet and feel the mineral grains, rock and continental crust below—all connected by a very grand rhythm indeed.

Image Credit: Pexels / 9143 images
|
<urn:uuid:f7214e16-e7ab-4fd3-807d-8b4f668b057d>
|
CC-MAIN-2022-40
|
https://blingeach.com/scientists-have-traced-earths-path-by-the-galaxy-by-way-of-tiny-crystals-discovered-within-the-crust/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00637.warc.gz
|
en
| 0.917881 | 1,254 | 2.703125 | 3 |
Today cryptocurrencies have become a global phenomenon known to most people. They are quickly going mainstream and more people are exploring the crypto world. However, investors aren’t the only ones interested in cryptocurrency - cybercriminals are thrilled with the idea of unregulated money. It has opened new attack vectors and a new way for cybercriminals to disappear leaving no trace.
Due to their anonymous nature, cryptocurrencies play an essential role in the underground economy. They are used for most criminal-to-criminal (C2C) payments on Darknet forums and marketplaces. Around $76 billion of illegal activity per year involves Bitcoin, and Cybersecurity Ventures predicts that by 2021 more than 70 percent of all cryptocurrency transactions will be for illegal activity. Also, many hackers demand payment from victims for attacks, such as ransomware or DDoS extortion, in cryptocurrencies (V2C – victim-to-criminal).
While the rise of cryptocurrency facilitates cybercrime in general, it gave a significant boost to the development of novel types of cyberattacks. Why is cryptocurrency so appealing to cybercriminals? How is it used in the cybercrime ecosystem?
What is Cryptocurrency?
“Cryptocurrency is an internet-based medium of exchange that uses principles of cryptography to conduct and secure transactions. Cryptocurrencies leverage blockchain technology to gain decentralization, transparency, and immutability.”
Simply put, cryptocurrency is digital money created using computer programs and computing power. It’s unlike fiat (regular) currency — dollars or euros — because it’s digital-only, there are no bills or coins to carry around.
The most important feature of a cryptocurrency is that it's completely decentralized and is not controlled by any central authority. Unlike paper currencies controlled by governments, crypto operates independently of central banks – e.g. bitcoins are created at a pre-determined rate regardless of their value and without any economic or political influence. Blockchain technology, which lies at the core of cryptocurrency, lets people and institutions shift funds instantly and without the need for a middleman.
It's called cryptocurrency because it's built on strong cryptography. All transactions are verified cryptographically by all users in the network and are recorded in a decentralized public ledger known as the blockchain. The reason the blockchain cannot be altered is that the data in it is validated by millions of participants, or "miners," scattered across the globe.
Bitcoin is the first and most famous cryptocurrency, which serves as a digital “gold standard” for the whole ecosystem. It remains the preferred and most frequently used cryptocurrency among cybercriminals, according to the 2019 Internet Organized Crime Threats Assessment (IOCTA) report by Europol. In 2019, ten years since its initial release, Bitcoin‘s market cap reached $165.39 billion and transaction volume amounts to more than 300.000 transactions per day.
Why Are Cryptocurrencies Appealing to Cybercriminals?
Cryptocurrencies have inherently low levels of regulation and are not governed by a central authority, meaning the transactions can't be closely monitored. This makes them a haven for criminal activity around the globe. Cryptocurrencies can easily carry millions of dollars across borders without detection, thanks to two key properties:

1) Pseudonymous: Neither transactions nor accounts are connected to real-world identities, so it's easy for cybercriminals to remain unidentified when they use crypto. Payments are made from "Bitcoin addresses," and individuals can easily create new addresses. While it is usually possible to analyze the transaction flow, it is not an easy task to connect a real-world identity with the owners of those addresses.
2) Fast and global: Crypto transactions are propagated nearly instantly in the network and are confirmed in a couple of minutes. Since they happen in a global network of computers, they are completely indifferent to physical location. It doesn‘t matter if you send Bitcoin to your neighbor or to someone on the other side of the world.
Cryptocurrencies have become the most popular means of payment on the dark web because they allow traders and buyers to remain anonymous. Alternative currencies such as Monero and Verge, which are privacy-focused and offer even greater anonymity than Bitcoin, have become favorites for criminal activities on the Darknet.
There are several types of cyberattacks where cybercriminals take advantage of cryptocurrencies. They include ransomware, DDoS extortion, cryptojacking, and cryptocurrency exchange hacks.
One of the biggest cybersecurity trends in history, ransomware is designed to extort money by encrypting user data. This type of malware typically displays an on-screen message offering to restore access after the victim pays a ransom. Typically, cybercriminals demand payment in the form of Bitcoin or other digital currency. Thus, the attackers are virtually impossible to track down.
2017 was the biggest year for ransomware attacks – global outbreaks of the notorious WannaCry and NotPetya ransomware brought down many large organizations. 2017 was also the year when the price of Bitcoin skyrocketed from below $1,000 to nearly $20,000, reaching its all-time high of $19,783.21 on Dec. 17. Coincidence? We don't think so.
DDoS extortion (RDoS or ransom-driven DDoS) campaigns have become very common and are driven, in part, by their ability to use cryptocurrency payments, which make it difficult for investigators to track the money as it flows from victims to criminals.
The tactic is the following: cybercriminal blackmails organizations by asking them to pay Bitcoin to avoid their site or service being disrupted by a DDoS attack. Many hackers are motivated by the potential for financial gain and the ease at which such attacks can be performed. Extortion is one of the oldest tricks and one of the easiest ways for hackers to profit.
A prominent group that carried out a lot of activity using the 'DDoS-as-an-extortion' technique was DD4BC (short for "DDoS for Bitcoin"), which first emerged in 2014 and whose members were arrested by Europol in 2016. In October 2019, a fake "Fancy Bear" group was sending ransom demands to banks and financial organizations across the world, threatening to launch DDoS attacks. In some cases, the cybercriminals did carry out small DDoS attacks to demonstrate their capabilities and validate the threat, but no serious follow-up attacks have been observed.
Cryptojacking shook up the threat landscape in 2017 and 2018, when cryptocurrency prices surged to record levels. It also made a comeback during the summer of 2019. The primary reason for this was the general revival of the cryptocurrency market, which saw trading prices recover after a spectacular crash in late 2018.
The attack consists of hackers using the computing power of a compromised device to generate (“mine”) cryptocurrency without the owner’s knowledge. The types of devices vulnerable to cryptojacking are not limited to smartphones, servers, or computers. IoT devices can be infected as well. The main effects of cryptojacking for users include: device slowdown; overheating batteries; increased energy consumption; devices becoming unusable; and reduction in productivity.
There are two main types of cryptomining - passive cryptomining through scripts running in a victim's internet browser, and more intrusive cryptojacking malware. Both techniques exploit a victim's processing power, without their permission, to mine cryptocurrencies.
In the beginning, malware operators deployed Bitcoin-based cryptominers, but as Bitcoin became harder to mine on regular computers, they shifted to other altcoins. Due to its anonymity-centric features, Monero slowly became a favorite currency among cybercriminal gangs.
The closure of Coinhive, the most popular mining script, in March 2019 led to a decline in the frequency of browser-based cryptomining. However, attacks against consumers and organizations continue to happen and evolve. There are reports of cryptojacking malware both going 'fileless' and incorporating the EternalBlue exploit in order to replicate and propagate itself over a network, like a worm.
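A crude indicator of browser- or malware-based mining is sustained high CPU load. The sketch below, using the third-party psutil package, shows the idea; on its own a CPU threshold is a weak signal, and real detection blends process, network, and browser telemetry.

```python
# Naive cryptojacking indicator: sustained high CPU usage (pip install psutil).
import psutil

SUSTAINED_SAMPLES = 5
THRESHOLD = 90.0  # percent

# Sample overall CPU utilization once per second.
readings = [psutil.cpu_percent(interval=1) for _ in range(SUSTAINED_SAMPLES)]
if min(readings) > THRESHOLD:
    print("Warning: sustained high CPU load -- investigate for hidden miners.")
else:
    print("CPU load readings:", readings)
```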
Cryptocurrency itself is a very appealing target for cybercriminals. In 2018, over $1 billion in cryptocurrencies was stolen from exchanges and other platforms worldwide. Attacks and fraud, which historically targeted regular
payment systems, banks and fiat currencies, have now been adapted to incorporate cryptocurrencies. As such, attacks on various crypto assets like crypto exchanges or personal crypto wallets have now become routine – an increasing number of malware and phishing campaigns are targeting crypto investors and enterprises.
How to protect against ransomware:
In order not to get infected, follow basic security practices in your day-to-day, e.g. do not open suspicious email attachments, do not click on unknown links, make regular offline backups, install software updates when they become available, etc. To get more tips, check out our Ransomware Survival Guide.
• Avoid installing “free” apps from unofficial sources - other than Google Play Store or App Store.
• Never click on suspicious email links unless you know who sent it to you. Email is the most popular vector for infecting computer systems with malware.
• Use strong passwords for computers, mobile and IoT devices, and Wi-Fi networks.
• Patch operating system and software on a regular basis.
Look for the following symptoms of infection: slowdown of the device, a spike in CPU usage, and overheating of the battery up to the point that the phone becomes unresponsive. However, that’s not always the case - some malware can be configured to limit CPU/GPU usage, reducing its impact and thereby avoiding detection by not leaving the phone totally useless. To mitigate this threat, users should check whether their telecom operator offers a security service that protects mobile and home devices and can block such activity.
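As a rough illustration of how such a CPU spike can be spotted on a desktop machine, the Python sketch below samples per-process CPU usage and flags heavy consumers. It assumes the third-party psutil package is installed; the threshold and sampling window are arbitrary placeholders, and high CPU usage alone is not proof of cryptojacking.

```python
# Minimal sketch: flag processes with sustained high CPU usage,
# one possible symptom of cryptomining. Requires 'psutil'
# (pip install psutil); threshold and interval are placeholders.
import time
import psutil

CPU_THRESHOLD = 80.0  # percent; tune for your hardware

for proc in psutil.process_iter():
    proc.cpu_percent(None)  # prime the per-process counters
time.sleep(5)               # sample over a 5-second window

for proc in psutil.process_iter(attrs=["pid", "name"]):
    try:
        usage = proc.cpu_percent(None)
        if usage >= CPU_THRESHOLD:
            print(f"High CPU: {proc.info['name']} (pid {proc.info['pid']}) at {usage:.0f}%")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue  # process exited or is protected; skip it
```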
To protect enterprise assets from cryptojacking and other security threats a multilayer security approach that combines prevention and detection is a best practice. Prevention that blocks unauthorized access is a general requirement, but specifically enterprises should incorporate network visibility and control that is able to detect and block crypto websites, applications and
protocols, and other risky apps that can serve as hidden channels for cybercriminal activity.
Against DDoS extortion:
Industry experts don’t recommend paying the ransom - there is no guarantee the attack will arrive or that the payment would prevent it. In many cases, such attacks are “empty” threats – their authors are using scare tactics hoping to fool victims into paying, and ransom letters aren’t followed by any serious attacks or disruptions to the service. Organizations should consider installing DDoS protection solutions that automatically detect and block even the smallest DDoS attacks.
Cryptocurrencies have been around for over a decade, but it is only since mid-2017 that they have gone mainstream and attracted huge criminal interest. Digital money is here to stay and will probably play a significant role in the future economy. As such, it will always be a lucrative target for cybercrime. Protection for the mass market and for enterprises against crypto-related attacks should be part of everyone’s security strategy.
Are you concerned about cyber threats related to crypto?
We can assist.
Need to know more about ransomware, including how to prevent ransomware and how to remove it following an infection? In this post, we explain why ransomware is a dangerous security threat for businesses and individuals alike. We also offer targeted strategies you can use right now to reduce your risk of getting hit with ransomware, as well as your next steps should you get infected with encryption malware.
What is ransomware?
Once a rare and obscure form of malware, ransomware now has an outsized impact on almost everyone. The number of ransomware attacks increased by 150% between 2019 and 2020. Although it only accounts for around 15% of all cyber attacks, ransomware is also one of the most expensive, with each attack costing businesses nearly $2 million to remedy.
With these types of attacks making headlines almost daily, that raises a legitimate question: What is a ransomware attack?
At a basic level, ransomware is just another form of malware. It’s a malicious program intended to infect and disrupt the normal operation of a computer system.
However, there are two key features of ransomware that distinguish it from most other forms of malware:
- Instead of stealing data, ransomware is designed to prevent you from accessing it
- Instead of selling or using your stolen data, cybercriminals try to force you to pay a ransom to regain access to your data
Where malware has traditionally been focused on stealing data, ransomware is about extortion. Outside of this, the methods cybercriminals use to infect computers with ransomware are the same as any other malware.
That being the case, you could get infected with ransomware if you:
- Download it from a malicious email attachment or link
- Load it onto your machine from a USB flash drive or DVD
- Download it while visiting a corrupted website
Hackers can also load ransomware onto a system if they hack into a system using brute force, or if they use stolen login credentials.
How does ransomware work?
Ransomware’s function is relatively simple. There are multiple ransomware designs, but all are essentially encryption programs. Once installed on a system, the program executes and encrypts the types of files it was programmed to target.
Occasionally, ransomware writers target only a selection of file types, such as Word documents or Excel sheets. More often than not, however, hackers take a broad approach that involves encrypting every file within a system or server.
How do I know if I have ransomware?
Losing access to system files is a clear sign of a ransomware attack. Ransomware shuts down select system functions or denies access to files. In the case of Windows machines, it usually disables your ability to access the start menu (that way you can’t access antivirus programs or try to revert to Safe Mode).
Ransomware serves no purpose unless the attacker makes it clear that you’re infected and provides instructions for how to unblock your system. Given that, most ransomware will come with a message (on your screen or emailed to you) that your system has been encrypted. This message typically includes the ransom demand and information for how to pay the ransom (typically in Bitcoin or some other cryptocurrency).
You may see a full-screen warning along these lines, stating that your files have been encrypted and giving payment instructions and a deadline. (The original article includes a screenshot of a typical ransom note here.)
Again, encryption is the go-to method for ransomware writers. This type of malware encrypts files on your device so they cannot be opened without the proper decryption key or password. Only the attacker will have this information (although sometimes they don’t).
Any file can be encrypted with ransomware, although most ransomware won’t attempt to encrypt all types of files. Some newer forms of encrypting ransomware have even taken to encrypting network shared files as well, a dangerous development for businesses in particular.
Until you clear the ransomware from your machine (or pay the demanded ransom and hope the criminal either clears it for you or gives you the decryption key), you won’t have access to those files or critical systems. Some ransomware will even demand that you pay up within a certain amount of time, or else the files will stay locked forever or the virus will completely wipe your hard drive.
What you read above is a bit of a high-level explanation. If you need a more detailed technical explanation of how ransomware works to encrypt data, we recommend this excellent Medium post on the topic from Tarcísio Marinho.
Types of ransomware
Ransomware has been around since the 1980s, but many attacks today use ransomware based on the more modern Cryptolocker trojan. File-encrypting ransomware is increasingly the most common type. Additionally, many hackers now employ double-encryption techniques that employ two types of malware to lock files.
According to Malwarebytes, there are several categories of ransomware that you may still encounter:
Encrypting ransomware
If ransomware finds its way onto your machine, it’s likely going to be of the encrypting variety. Encrypting ransomware is quickly becoming the most common type due to a high return on investment for the cybercriminals using it, and how difficult it is to crack the encryption or remove the malware. This is a favorite among hackers because most antivirus tools simply don’t work to prevent it and can’t effectively remove the encryption after an infection.
Encrypting ransomware will completely encrypt the files on your system and prevent you from accessing them until you’ve paid a ransom, typically in the form of Bitcoin. Some of these programs are also time-sensitive and will start deleting files until the ransom is paid, increasing the sense of urgency to pay up.
On this type of ransomware, Adam Kujawa, Director of Malwarebytes Labs, had this to say: “It’s too late once you get infected. Game over.”
Online backup can be a great help in recovering encrypted files. Most online backup services include versioning so you can access previous versions of files and not the encrypted ones
Scareware
Scareware is malware that attempts to persuade you that you have a computer virus that needs removal right away. It will then try to get you to clear the virus by buying a suspicious and typically fake malware or virus removal program. Scareware is highly uncommon these days, but some of these viruses do still exist out in the wild. Many target mobile phones.
A scareware virus typically won’t encrypt files, although it may attempt to block your access to some programs (such as virus scanners and removers). Nevertheless, scareware is the easiest to get rid of. In fact, in most cases, you can remove scareware using standard virus removal programs or other methods without even entering Safe Mode (although this may still be necessary or recommended).
Screen locker (or lock-screen viruses)
Screen lockers will put up a warning screen that limits your ability to access computer functions and files. These can be installed onto your machine or exist within a web browser. They’ll typically come with a message claiming to represent a law enforcement organization and try to convince you that you’ll face severe legal consequences if you do not pay a fine immediately.
This type of virus can be installed in numerous ways, including by visiting compromised websites or by clicking on and downloading an infected file contained in an email. When installed directly onto a computer, you may have to perform a hard reboot to regain access to your system. However, you may also find that you’re still greeted with the screen lock message even when the operating system loads up again.
Screen lockers tend to lock you out of your menu and other system settings, but don’t always block access to your files. Some of the malware’s primary attack methods prevent you from easily accessing your virus removal software, and at times may even prevent you from restarting your computer from the user interface.
Screen lockers are another good reason why having cloud backup is extremely important. While the screen locker won’t encrypt or delete your files, you may find yourself forced to perform a system restore. The system restore may not delete your important files, but it will return them to an earlier state. Depending on the restored states, that may still result in a lot of lost data or progress. Regular online backups will help prevent data loss that performing a system restore does not guarantee, especially if the virus has been hiding on your system for much longer than you realized.
How to prevent ransomware
Decrypting files encrypted with ransomware is incredibly difficult. Unless you pay the ransom and receive the decryption key from the attacker (NOT RECOMMENDED), decrypting the files is effectively impossible. Most ransomware these days will use AES or RSA encryption methods, both of which are functionally impossible to crack with brute-force methods.
To put it in perspective, the US government also uses AES encryption standards. Information on how to create this kind of encryption is widely known, as is the difficulty in cracking it.
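For a sense of scale, here is the back-of-the-envelope arithmetic behind that claim, assuming a (very generous) hypothetical brute-force rate of one trillion guesses per second:

```python
# How long would it take to try every 128-bit AES key?
keyspace = 2 ** 128                 # possible 128-bit keys (~3.4e38)
guesses_per_second = 10 ** 12       # hypothetical brute-force rate
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e} years")         # ~1.08e+19 years
```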
The best method to fight ransomware is to remove the risk of infection. The considerable cost and difficulty of decrypting without paying a ransom mean your best strategy is prevention.
Protection against ransomware can be accomplished by shoring up weaknesses in your system, network, or organization and changing the type of behaviors that put you or your business at risk of a ransomware attack.
Ransomware prevention best practices
- Invest in solid data backup. This is hard to overstate. Data backup is the single best thing you can do. Even if you do get hit by ransomware, having effective and consistent data backup means your data will be safe, regardless of which type of ransomware you’re attacked with (see the sketch after this list).
- Invest in effective antivirus software. In this case, you don’t just want malware or virus cleaners, but software that will actively monitor and alert you to threats, including inside web browsers. That way, you’ll get notifications for suspicious links, or get redirected away from malicious websites where ransomware may be housed.
- Never click on suspicious email links. Most ransomware spreads through email. When you make it a habit of never clicking on suspicious links, you significantly lower your risk of downloading ransomware and other viruses.
- Protect network-connected computers. Some ransomware works by actively scanning networks and accessing any connected computers that allow remote access. Make sure any computers on your network have remote access disabled or utilize strong protection methods to avoid easy access.
- Keep software up-to-date. Updates to Windows and other operating systems and applications often patch known security vulnerabilities. Updating in a timely manner can help lower the risk of susceptibility to malware, including ransomware.
- Invest in ransomware protection tools. These are particularly useful for small businesses and for network administrators who need to monitor and respond to emerging threats. Many antivirus tools now include anti-ransomware solutions that use behavioral analysis to stop ransomware viruses from running before they start the encryption process.
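To make the first item above concrete, here is a minimal sketch of a timestamped snapshot backup in Python. The paths are hypothetical placeholders, and a real strategy should also include offline and offsite copies made with a dedicated backup tool. Keeping each snapshot in its own folder is what prevents ransomware-encrypted files from silently overwriting your only good copy.

```python
# Minimal versioned-backup sketch: copy a folder to a fresh,
# timestamped snapshot directory. Paths are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("C:/Users/me/Documents")   # hypothetical source folder
BACKUP_ROOT = Path("D:/backups")         # ideally a separate drive

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
destination = BACKUP_ROOT / f"documents-{stamp}"
shutil.copytree(SOURCE, destination)     # each run creates a new snapshot
print(f"Snapshot written to {destination}")
```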
What to do if you catch ransomware mid-encryption
Encryption is a resource-intensive process that requires a significant amount of computational power. If you’re lucky, you may be able to catch ransomware mid-encryption. This takes a keen eye and knowing what an unusually large amount of activity looks like (and sometimes sounds like). Ransomware will typically operate as a background process to avoid detection, making it easy to miss the malicious activity before you can stop it.
Additionally, the virus doing the encryption will likely be hiding inside another program, or have an altered file name that is made to look innocuous. You may not be able to tell which program is performing the action. However, should you discover what you think is a ransomware virus in the midst of encrypting files, here are a couple of options:
Place your computer into hibernation mode
This will stop any running processes and create a quick memory image of your computer and files, which is saved to the hard drive. Do not restart your computer or take it out of hibernation. In this mode, a computer specialist (either from your IT department or a hired security company) can mount the device to another computer in a read-only mode and assess the situation. That includes the recovery of unencrypted files.
Suspend the encryption operation
If you can identify which operation is the culprit, you may want to try suspending that operation.
In Windows, this involves opening up the Task Manager (CTRL + ALT + DEL) and looking for suspicious operations. In particular, look for operations that appear to be doing a lot of writing to the disk. macOS users can do this from Activity Monitor (in Finder, press CMD + SHIFT + U to open the Utilities folder, where you’ll find Activity Monitor).
You can suspend operations from there. It’s better to suspend the operation instead of killing it, as this allows you to investigate the process in more detail to see what it’s actually up to. That way you can better determine whether you have ransomware on your hands.
If you do find that it’s ransomware, check which files the program has been attacking. You may find it in the process of encrypting certain files. You may be able to copy these files before the encryption process has finished and move them to a secure location.
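The triage above can also be scripted. The hedged sketch below, which assumes the third-party psutil package and administrator privileges, ranks processes by cumulative bytes written to disk and shows how a suspect could be suspended rather than killed. Cumulative write counts are only a rough proxy, so treat the output as a starting point for inspection, not a verdict.

```python
# List the heaviest disk writers, then suspend (not kill) a suspect.
# Requires 'psutil'; io_counters() is unavailable on some platforms.
import psutil

candidates = []
for proc in psutil.process_iter(attrs=["pid", "name"]):
    try:
        io = proc.io_counters()
        candidates.append((io.write_bytes, proc.info["pid"], proc.info["name"]))
    except (psutil.NoSuchProcess, psutil.AccessDenied, AttributeError):
        continue

for write_bytes, pid, name in sorted(candidates, reverse=True)[:10]:
    print(f"{name} (pid {pid}): {write_bytes / 1e6:.1f} MB written")

# After inspecting the list manually:
# psutil.Process(suspect_pid).suspend()   # .resume() undoes this
```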
You can find some other great suggestions by security and computer professionals on Stack Exchange.
Ransomware removal: How to remove scareware and screen lockers (lock-screen viruses)
Screen lockers are more troublesome to remove than scareware but are not as much of a problem as file-encrypting ransomware. Scareware and lock-screen viruses are not perfect attackers and can often be easily removed at little to no cost.
Of the options available for removing screen locking viruses, consider these two:
- Perform a full system scan using a reputable on-demand malware cleaner
- Perform a system restore to a point before the scareware or screen locker began popping up messages.
Let’s look at both of these in detail.
Option 1: Perform a full system scan
This is a fairly simple process, but before performing a system scan, it’s important to choose a reputable on-demand malware cleaner. One such cleaner is Zemana Anti-Malware. Windows users could even use the built-in Windows Security (formerly Windows Defender) tool, although it’s occasionally less effective than third-party antivirus software.
To perform the full system scan using Zemana Anti-Malware, do the following:
- Open your Zemana Anti-Malware home screen.
- From the home screen, change the scan type to Deep
Before running a scan, we recommend setting a restore point. Setting a restore point is a good best practice for virus scans in general, just in case a critical error occurs during the scanning process. Your virus scan might tag and remove some files or programs that aren’t actually problems (Chrome extensions often come up as problematic, for example), making a system restore necessary to get them back.
On a Windows computer:
- Type “restore point” into the Start menu search.
- Select Create a restore point from Control Panel.
- Select Create, then type a description of the restore point, such as “pre-malware scan”
- Additionally, you may want to go to Configure to turn on System Protection. This will start automatically creating restore points in the future, and allow you to choose how much space is dedicated to backups
On a macOS computer:
No need! Your macOS computer automatically creates restore points using Time Machine (provided Time Machine is set up with a backup disk).
With your restore point secured, you can now click on Scan Now to begin the malware scan.
In my case, a recent Zemana system scan revealed a potential DNS hijack. Yikes! (It also misclassified a few programs as malware and adware, so make sure to check which files you’re cleaning and quarantining.)
To perform a full system scan using Windows Security, do the following:
- Perform a quick system search for “Windows Security”
- Access Windows Security and click on the shield icon on the left
- Click on Scan options
- Switch to Full Scan
- Click on Scan now
Microsoft continually improves its built-in Windows antivirus software, but it’s still not as good a solution as an on-demand third-party tool option like Zemana or many other high-quality antivirus programs. Note that any third-party AV tool you install will automatically disable Windows Security.
When dealing with screen-locking ransomware, you may need to enter Safe Mode to get the on-demand virus removers to work or to run your system restore properly. Even some scareware can at times prevent you from opening your virus removal programs, but they usually can’t prevent you from doing so while you’re in Safe Mode. If you’re having trouble getting your computer to restart in Safe Mode (a distinct possibility if you have a screen locker), check out our guide on How to Start Windows in Safe Mode.
Option 2: Perform a system restore
Another option is to perform a system restore to a point before the scareware or screen locker began popping up messages. If you’re using Windows, this option is only available if you have your computer set to create system restore points at preset intervals, or that you’ve performed this action yourself manually.
(Those accessing this guide as a preventative measure against ransomware should refer to Option 1 where we talk about how to create restore points on Windows.)
If you don’t have any recent restore points for your Windows machine (or any at all), this option won’t be helpful for you if you are currently working through a virus infection.
- If the System Restore tool shows that you already have a backup in place, select the backup files from the most recent restore point, or from whichever restore point you prefer.
The backup restoration process may take several minutes, especially if the amount of data being restored is significant. However, this should restore your file system to a point before the virus was downloaded and installed.
Note that both a scan and a restore can have delayed reaction times, so it’s a good idea to do both.
Indiana University also provides a helpful knowledge base with a few advanced methods for more troublesome scareware. We also recommend checking out our Complete Guide to Windows Malware and Prevention. It will walk you through the process of malware removal and what that process looks like with several different programs.
Ransomware removal: How to remove file-encrypting ransomware
Once encrypted ransomware gets onto your system, you’re in trouble if you want to keep any unsaved data or anything that hasn’t been backed up (at least without paying through the nose for it). Paying the ransom is tempting, but it’s not always effective. According to Sophos’ 2021 State of Ransomware report, companies that paid a ransom typically only got back around 65% of their data. Just 8% got back all of the data that was held for ransom.
Considering the risks of paying the ransom, if you’re hit with a nasty piece of encrypting ransomware, don’t panic and if you can at all avoid it, don’t encourage ransomware hackers by paying up. You have two alternative options for ransomware removal:
- Hire a professional ransomware removal service: If you have the budget to hire a professional and decide that recovering your files is worth the money, this might be the best course of action. Many companies, including Proven Data Recovery and Cytelligence, specialize in providing ransomware removal services. Note that some charge even if the removal is unsuccessful, while others don’t.
- Try to remove the ransomware yourself: This is typically free to do and may be a better option if you don’t have the resources to hire a professional. Recovering your files yourself will typically involve first removing the malware and then using a tool to decrypt your files.
If you’d like to resolve the issue yourself, try these steps:
Step 1: Run an antivirus or malware remover to get rid of the encrypting virus
Important note: Removing the encrypting ransomware virus is not the same as decrypting files. If you’ve been hit by ransomware, you will still need to decrypt or restore the files using other tools.
Refer back to the malware/virus removal instructions provided in the scareware/screen locker removal section above. The removal process in this step will be the same, with one exception: WE STRONGLY ENCOURAGE YOU TO REMOVE THIS VIRUS IN SAFE MODE WITHOUT NETWORKING ACCESS.
There is a chance that the file-encrypting ransomware you’ve contracted has also compromised your network connection. Some variants need to communicate back to a host server; cutting off that communication can help prevent further action on the part of the hacker who infected the system.
Removing the malware is an important first step to deal with this problem. Many reliable programs will work in this case, but not every antivirus program is designed to remove the type of malware that encrypts files. You can verify the effectiveness of the malware removal program by searching its website or contacting customer support.
Step 2: Try to decrypt your files using a free ransomware decryption tool
Again, you should be doing everything you can to avoid paying a ransom. Your next step is going to be to try a ransomware decryption tool. Unfortunately, there is no guarantee that there will be a ransomware decryption tool that works with the ransomware infecting your system. You may have a variant that has yet to be cracked.
Kaspersky Labs, McAfee, and several other organizations operate a website called No More Ransom! where anyone can download and install ransomware decryptors, or have ransomware identified.
Kaspersky also offers free ransomware decryptors on its website.
First, we suggest you use the No More Ransom Crypto Sheriff tool to assess what type of ransomware you have and whether a decryptor currently exists to help decrypt your files. It works like this:
- Select and upload two encrypted files from your PC
- Provide the website URL, email address, onion address, or Bitcoin address given in the ransom demand
- If no information was provided in the demand, upload the .txt or .html file with the ransom note
The Crypto Sheriff will process that information against its database to determine if a solution exists. If no suggestion is offered, don’t give up just yet, however. One of the decryptors may still work, although you might have to download each and every one. This will be an admittedly slow and arduous process, but could be worth it to see those files decrypted.
The full suite of decryption tools can be found under the Decryption Tools tab on the No More Ransom! website.
Running the file decryptors is usually simple. Most of the decryptors come with a how-to guide from the tool’s developer (many are from Emsisoft, Kaspersky Labs, Check Point, or Trend Micro). Each process may be slightly different, so you’ll want to read the PDF how-to guide for each one where available.
Here’s an example of the process you’d take to decrypt the Philadelphia ransomware:
- Choose one encrypted file on your system and a version of that file that’s currently unencrypted (from a backup). Place these two files in their own folder on your computer.
- Download the Philadelphia decryptor and move the executable to the same folder as your paired files.
- Select the file pair and then drag and drop the files onto the decryptor executable. The decryptor will then begin to determine the correct keys needed to decrypt the file.
- This process may take time, depending on the complexity of the program.
- Once completed, you will receive the decryption key for all files encrypted by the ransomware.
- The decryptor will then ask you to accept a license agreement and give you options for which drives to decrypt files from. You can change the location depending on where the files are currently housed, as well as some other options that may be necessary, depending on the type of ransomware. One of those options usually includes the ability to keep the encrypted files.
- You will get a message in the decryptor UI once the files have been decrypted
Again, this process may not work, as you may have ransomware for which no decryptor is available. Still, given there are many decryptors available, it’s best to go this route before paying money for decryption services and long before considering a ransom payment.
Backup option: Wipe your system and perform a complete data restoration from a data backup
Steps 1 and 2 only work when used together. If either fails to work for you, you’ll need to follow this step. Hopefully, you have a solid and reliable data backup already in place. If so, don’t give in to the temptation to pay the ransom. Instead, have an IT professional restore your files and system from data backup.
This is also a reason why bare-metal backup and restoration are important. There’s a good chance your IT professional may need to perform the complete bare-metal restoration for you. This not only includes your personal files, but your operating system, settings, and programs, as well. Windows users may also need to consider a complete system reset to factory settings.
Microsoft offers some strategies (mostly preventative) for larger organizations with its Human-Operated Ransomware Mitigation Project.
The history of ransomware
As mentioned, ransomware is not a new concept and has been around for many years. While the timeline below is not an exhaustive list of ransomware, it gives you a good idea of how this form of attack has evolved over time.
- 1989 – “Aids” Trojan, aka PC Cyborg, becomes the first known case of ransomware on any computerized system.
- 2006 – After a 17-year hiatus, ransomware returns en masse with the emergence of Gpcode, TROJ.RANSOM.A, Archiveus, Krotten, Cryzip, and MayArchive. All are notable for their use of sophisticated RSA encryption algorithms.
- 2008 – Gpcode.AK arrives on the scene. Utilizing 1024-bit RSA keys, it requires a massive effort, beyond the means of most users, to break.
- 2010 – WinLock hits users in Russia, peppering displays with porn until the user makes a $10 call to a premium rate number.
- 2011 – An unnamed Trojan locks up Windows machines, directing visitors to a fake set of phone numbers through which they can reactivate their operating systems.
- 2012 – Reveton informs users their machine has been used to download copyright material or child pornography and demands payment of a ‘fine.’
- 2013 – The arrival of the now infamous CryptoLocker. Ramping up the encryption level, it is incredibly hard to circumvent.
- 2013 – Locker turns up, demanding payment of $150 to a virtual credit card.
- 2013 – Hard to detect, CryptoLocker 2.0 adds the use of Tor for added anonymity for the criminal coder who created it.
- 2013 – Cryptorbit also adds Tor use to its repertoire and encodes the first 1,024 bits of every file. It also installs a Bitcoin miner to milk victims for extra profit.
- 2014 – CTB-Locker mainly targets Russia-based machines.
- 2014 – Another significant development, CryptoWall infects machines via infected website advertisements and manages to affect billions of files worldwide.
- 2014 – A somewhat more friendly piece of ransomware, Cryptoblocker avoids Windows files and targets files under 100 MB in size.
- 2014 – SynoLocker targets Synology NAS devices, encrypting every file it finds on them.
- 2014 – TorrentLocker utilizes spam emails to spread, with different geographic regions targeted at a time. It also copies email addresses from the affected users’ address book and spams itself out to those parties as well.
- 2015 – Another hard-to-detect piece of ransomware, CryptoWall 2.0 uses Tor for anonymity and arrives in a manner of different ways.
- 2015 – TeslaCrypt and VaultCrypt can be described as niche ransomware in that they target specific games.
- 2015 – CryptoWall 3.0 improves on its predecessor by coming packaged in exploit kits.
- 2015 – CryptoWall 4.0 adds another layer to its encryption by scrambling the names of the encrypted files.
- 2015 – The next level of ransomware sees Chimera not only encrypt files but also publish them online when ransoms are not paid.
- 2016 – Locky arrives on the scene, named primarily because it renames all your important files so they have a .locky extension.
- 2016 – Located on BitTorrent, KeRanger is the first known ransomware that is fully functional on Mac OS X.
- 2016 – Named for the Bond villain in Casino Royale who kidnaps Bond’s love interest to extort money, the LeChiffre program takes advantage of poorly-secured remote computers on accessible networks. It then logs in and runs manually on those systems.
- 2016 – Jigsaw will encrypt and then delete files progressively until the ransom is paid. After 72 hours, all files will be deleted.
- 2016 – SamSam ransomware arrives complete with a live chat feature to help victims with their ransom payment.
- 2016 – The Petya ransomware utilizes the popularity of cloud file sharing services by distributing itself through Dropbox.
- 2016 – The first ransomware worm arrives in the form of ZCryptor, which also infects external hard drives and flash drives attached to the machine.
- 2017 – Crysis targets fixed, removable, and network drives, and uses powerful encryption methods that are difficult to crack with today’s computing capabilities.
- 2017 – WannaCry is spread through phishing emails and over networked systems. Uniquely, WannaCry uses a stolen NSA backdoor to infect systems, as well as another vulnerability in Windows that was patched over a month before the release of the malware (more details below).
- 2018 – The Ryuk ransomware emerges and quickly becomes the worst ransomware to hit the market, bigger and more devastating than WannaCry. According to Trend Micro, Ryuk had the largest ransom demand of any encryptor, at $12.5 million USD.
- 2019 – The city of Baltimore is hit by a ransomware variant called RobinHood in May. Nearly every server the city used was taken offline. Hackers demanded 13 Bitcoin, which at the time was around $76,000, but at this time of writing (July 2021) is worth over $428,000 USD. The city’s system was not fully restored until the end of the month. To its credit, Baltimore did not pay the ransom but later stated that ransomware remediation cost the city $18 million.
- 2020 – In May 2020, the IT services provider Cognizant was hit with a ransomware attack so large it cost the company between $50 million to $70 million in revenue. This was notably a “Maze” double-threat, as the hackers not only encrypted the data but also created a copy of Cognizant’s data and then threatened to leak the data if their demands were not met.
- 2021 – This year saw a new threat emerge. In May, Colonial Pipeline, a company responsible for transporting fuel for a large percentage of the US East Coast, was hit by a ransomware attack thanks to a compromised password. The attack resulted in reduced fuel capacity in over a dozen states and over two weeks of fuel shortages for millions of people. It was the first time a country’s important infrastructure suffered from a ransomware attack. Colonial Pipeline later admitted to paying the ransom, but in a rare occurrence, the US government later recovered a majority of the ransom money.
Can you remove ransomware?
Yes, you can remove ransomware. Antivirus software can delete ransomware from your computer or system. However, removing the ransomware won't decrypt any files that have been encrypted by the ransomware virus that infected your system.
How does ransomware get on your computer?
Ransomware can get on your computer the same way as other malware. Common attack vectors include:
- Installing infected files
- Clicking on links on infected websites
- Plugging unknown USB drives into your computer
- Account or system hacking due to poor password security
- Brute-force attacks from hackers
Should you pay ransomware?
No, you should not pay the ransom if you get infected with ransomware. Paying ransoms not only encourages this type of attack but may not result in the release of your files. It can also be exceptionally expensive.
What happens if I get ransomware?
If you get ransomware on your computer or network, the malware will begin encrypting your files. Ultimately, it will lock you out of your system and demand payment (often via cryptocurrency) for the release of the files via a decryption key.
Symmetric encryption algorithms are categorized into two: block and stream ciphers. This article explores block cipher vs. stream cipher, their respective operation modes, examples, and key differences. However, before delving into these in detail, let us understand the basics first.
Symmetric cryptography (secret-key or private-key cryptography) involves using a shared key to access an encoded message between two entities. The sender shares the key with the receiver, who uses it to decrypt the message. The encryption algorithm emulates a one-time pad system to protect the original message from unauthorized access: with a true one-time pad, the cipher generates a truly random key that is used only once, so anyone who does not possess the key cannot interpret the encrypted message.
Let’s see the differences (short introduction)
Block ciphers encrypt data in blocks of set lengths, while stream ciphers do not and instead encrypt plaintext one byte at a time. The two encryption approaches, therefore, vary widely in implementation and use cases.
What are Block Ciphers?
Block ciphers convert data in plaintext into ciphertext in fixed-size blocks. The block size depends on the encryption scheme and is usually 64 or 128 bits. If the plaintext length is not a multiple of the block size, the encryption scheme uses padding to complete the final block. For instance, to perform 128-bit block encryption on a 150-bit plaintext, the scheme produces two blocks: one with 128 bits and one with the remaining 22 bits. 106 redundant padding bits are added to the last block to bring it up to the scheme’s block size.
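The worked arithmetic from that example, expressed as a short Python sketch:

```python
# How many 128-bit blocks (and padding bits) a 150-bit plaintext needs.
BLOCK_BITS = 128
plaintext_bits = 150

full_blocks, remainder = divmod(plaintext_bits, BLOCK_BITS)   # 1, 22
blocks_needed = full_blocks + (1 if remainder else 0)         # 2
padding_bits = (BLOCK_BITS - remainder) % BLOCK_BITS          # 106

print(blocks_needed, padding_bits)  # -> 2 106
```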
While block ciphers use symmetric keys and algorithms to perform data encryption and decryption, they also require an initialization vector (IV) to function. An initialization vector is a pseudorandom or random sequence of characters used to encrypt the first block of characters in the plaintext. The resultant ciphertext for the first block acts as the initialization vector for the subsequent blocks. Therefore, the symmetric cipher produces a unique ciphertext block for each iteration, while the IV is transmitted along with the symmetric key and does not require encryption.
Block encryption algorithms offer high diffusion; that is, if a single plaintext block were subjected to multiple encryption iterations, it would result in a unique ciphertext block for each iteration. This makes the encryption scheme relatively tamper-proof, since it is difficult for malicious actors to insert symbols into a data block without detection. On the other hand, block ciphers have a high error propagation rate, since a one-bit change in the original plaintext results in entirely different ciphertext blocks.
Block Cipher Operation Modes
Several block cipher modes of operation have been developed to enable the encryption of multiple blocks of long data. These modes fall into two categories: Confidentiality-only and Authenticated encryption with additional data modes.
The Confidentiality-only cipher mode of operation focuses on keeping communication between two parties private. These modes include:
- Electronic codebook (ECB) – In this mode, plaintext messages are divided into blocks where encryption is applied to each block separately. The ECB cipher mode does not hide data patterns well since it lacks diffusion and is usually not recommended for security frameworks.
- Cipher block chaining mode (CBC) – This mode combines the ciphertext from the previous block with the current plaintext block using an XOR (exclusive disjunction) operation before performing the encryption. An IV is applied to the first plaintext block in CBC mode to ensure uniqueness (a code sketch follows at the end of this section).
- Propagating cipher block chaining (PCBC) – In this mode, the encryption scheme performs an exclusive disjunction between the current plaintext, the previous plaintext, and the previous ciphertext before running the encryption algorithm. This causes minor changes in the ciphertext to propagate indefinitely during encryption and decryption.
Authenticated encryption with additional data – This mode of operation for block ciphers ensures data authenticity and confidentiality. This mode can be further sub-divided into:
- Galois/counter mode (GCM) – Uses an incremental counter that generates a universal hash over a finite binary field (Galois field) to generate message authentication codes before encryption and decryption.
- Synthetic initialization vector (SIV) – This type of block cipher uses an encryption key, plaintext input, and a header (authenticated variable-length octet strings) to enable authenticated encryption. SIV produces a deterministic ciphertext that keeps the plaintext private while ensuring the authenticity of both the header and ciphertext.
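To make CBC mode concrete, here is an illustrative round trip using the third-party Python cryptography package (one common implementation, not the only one). Note the two ingredients discussed above: PKCS7 padding to fill out the final block, and a fresh random IV for the first block.

```python
# Illustrative AES-256-CBC round trip (pip install cryptography).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # 256-bit symmetric key
iv = os.urandom(16)    # 128-bit initialization vector, unique per message

padder = padding.PKCS7(128).padder()
padded = padder.update(b"attack at dawn") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
plaintext = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
assert plaintext == b"attack at dawn"
```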
Examples of Block Ciphers
Block ciphers form the basis of most modern cipher suites. Some commonly used block cipher encryption standards include:
Data Encryption Standard (DES)
A 56-bit symmetric key algorithm initially used to protect sensitive, confidential information. DES has since been withdrawn due to its short key length and other security concerns but is still viewed as a pioneering encryption standard.
Advanced Encryption Standard (AES)
A popular block cipher that encrypts data in blocks of 128 bits using 128, 192, and 256-bit symmetric keys. The underlying block cipher uses substitution-permutation and transposition techniques to produce ciphertext by shuffling and replacing input data in a sequence of linked computations. AES is a globally accepted encryption standard since cryptanalysis efforts against its algorithms have been unsuccessful.
Twofish
Twofish is an encryption standard that uses a Feistel network, a complex key schedule, and substitution techniques to separate the key and ciphertext. The standard encrypts plaintext data in blocks of 128 bits, with flexible key sizes between 128 and 256 bits.
Other encryption schemes that use block ciphers include 3DES, Serpent, and Blowfish, among others.
What are Stream Ciphers?
A stream cipher encrypts a continuous string of binary digits by applying time-varying transformations on plaintext data. This type of encryption therefore works bit-by-bit, using keystreams to generate ciphertext for arbitrary lengths of plaintext messages. The cipher combines a key (128/256 bits) and a nonce (64-128 bits) to produce the keystream, a pseudorandom sequence that is XORed with the plaintext to produce ciphertext. While the key can be reused across messages, the nonce (number used only once) and therefore the keystream must be unique for each encryption iteration to ensure security; stream ciphers typically use feedback shift registers or counters to achieve this.
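The keystream principle reduces to a XOR loop. The toy sketch below is deliberately insecure (Python's random module is not a cryptographic generator) and exists only to show how the same seeded keystream both encrypts and decrypts:

```python
# Toy keystream cipher: XOR each byte with a seeded pseudorandom stream.
# NOT secure - for illustration of the principle only.
import random

def keystream_xor(data: bytes, seed: int) -> bytes:
    rng = random.Random(seed)  # stand-in for key + nonce
    return bytes(b ^ rng.randrange(256) for b in data)

ciphertext = keystream_xor(b"hello stream", seed=42)
assert keystream_xor(ciphertext, seed=42) == b"hello stream"  # XOR twice = identity
```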
Encryption schemes that use stream ciphers are less likely to propagate system-wide errors since an error in the translation of one bit does not typically affect the entire plaintext block. Stream encryption also occurs in a linear, continuous manner, making it simpler and faster to implement. On the other hand, stream ciphers lack diffusion since each plaintext digit is mapped to one ciphertext output. Additionally, they do not validate authenticity, making them vulnerable to insertions. If hackers break the encryption algorithm, they can insert or modify the encrypted message without detection. Stream ciphers are mainly used to encrypt data in applications where the amount of plain text cannot be determined and in low latency use-cases.
Types of Stream Ciphers
Stream ciphers fall into two categories:
Synchronous stream ciphers
The keystream block is generated independently of the previous ciphertext and plaintext messages in a synchronous stream cipher. The most common stream cipher modes use pseudorandom number generators to create a string of bits and combine it with the key to form the keystream, which is XORed with the plaintext to generate the ciphertext.
Self-synchronizing/asynchronous stream ciphers
A self-synchronizing stream cipher, also known as ciphertext autokey, generates the keystream block as a function of the symmetric key and fixed size (N-bits) of the previous ciphertext block. Altering the ciphertext alters the content of the next keystream so that asynchronous stream ciphers can detect active attacks. These ciphers also offer limited error propagation since a single-digit error can affect N bits at most.
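A toy sketch of the self-synchronizing idea (again, illustrative only and not secure): each keystream byte is derived from the key and the previous ciphertext byte, so the feedback register here is just one byte.

```python
# Toy self-synchronizing cipher: the keystream depends on the previous
# ciphertext byte. NOT secure - for illustration only.
def toy_selfsync(data: bytes, key: int, decrypt: bool = False) -> bytes:
    out = bytearray()
    prev = 0                              # fixed IV stand-in
    for b in data:
        ks = (prev * 31 + key) & 0xFF     # keystream from ciphertext history
        o = b ^ ks
        out.append(o)
        prev = b if decrypt else o        # always feed back the *ciphertext*
    return bytes(out)

ct = toy_selfsync(b"sync me", key=99)
assert toy_selfsync(ct, key=99, decrypt=True) == b"sync me"
```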
Examples of Stream Ciphers
Popular encryption schemes that use stream ciphers include:
Rivest Cipher (RC4)
RC4/ARC4/ARCFOUR is a fast, simple encryption algorithm developed in 1987 to implement byte-by-byte encryption using keys of 64 or 128 bits. RC4 was widely used in Transport Layer Security, Secure Sockets Layer, and the IEEE 802.11 WLAN standard, although it has since been deprecated in TLS due to discovered weaknesses. The cipher comes in various flavors, including SPRITZ, RC4A, and RC4A+, among others.
Salsa20
Salsa20 is an efficient, modern encryption cipher that relies on an expansion function to produce the encryption keystream. In addition, Salsa20 depends on a core function that maps the key, a nonce, and constant vectors extracted from the expansion function to the keystream using add-rotate-XOR (ARX) operations.
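ChaCha20, a widely deployed refinement of Salsa20, is available in the Python cryptography package and shows the key-plus-nonce interface described earlier. Note there is no padding step: the stream cipher handles arbitrary-length input directly. (This sketch assumes the third-party package is installed; the nonce must never be reused under the same key.)

```python
# Illustrative ChaCha20 round trip (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)     # 256-bit key
nonce = os.urandom(16)   # must be unique for every message under this key

encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
ciphertext = encryptor.update(b"any length of plaintext at all")

decryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).decryptor()
assert decryptor.update(ciphertext) == b"any length of plaintext at all"
```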
Software-optimized Encryption Algorithm (SEAL)
SEAL is an additive binary stream cipher optimized for machines with 32-bit CPUs and sufficient memory resources. The encryption standard relies on a pseudorandom function family that uses a length-increasing function and a 160-bit key to map a 32-bit string to a string of any length.
Other examples of stream ciphers include PANAMA, Scream, Rabbit, HC-256, and Grain, among others.
Key differences Between Block and Stream Ciphers
Block ciphers transform plaintext 1 block (64/128/256 bits) at a time, while stream ciphers convert plaintext to ciphertext 1 byte at a time. This makes block ciphers slower since an entire block has to be accumulated before the data is encrypted/decrypted. In contrast, stream ciphers encrypt bits of data into individual symbols one at a time.
Stream ciphers utilize only the confusion principle to transform data, ensuring data confidentiality. On the other hand, block ciphers use data diffusion and confusion to encrypt plaintext. Block ciphers can, therefore, be used to implement authenticated encryption for enhanced security.
Stream ciphers use an XOR operation on the plaintext to create ciphertext. Stream-based encryption is easily reversed by XORing the ciphertext outputs. Block ciphers encrypt more bits at a time, making the decryption comparatively complex.
The global pandemic has created an added dimension of complexity where access is concerned. Company heads, across every department, are now faced with the challenge of keeping their workforce healthy, as well as protecting them from security threats. As a result, access control has taken on even greater importance, because the wellness of an organisation’s employees, staff and visitors can quite literally depend on the strength of its access control system.
Stakeholders and clients alike expect security to not only be tighter, but also to be digitally controlled and cybersecure. And forward thinking companies are leveraging state-of-the-art technology to ensure more robust security architecture.
This guide discusses what access control is, how it works, and why it’s a vital part of your company’s security infrastructure. In other words, how it will keep your people, your property, and your assets safe.
1. The basics of access control
If you think back to movies we’ve watched through the years, it’s now comical to think that most action films started out with armed guards being taken over by the ‘bad guys’. Gone are the days when simply posting a security guard at the entrance to your premises was one of the main ways to safeguard against unwanted intrusion.
The notion of controlling access in and out of your business sites has evolved tremendously.
a. What is access control?
Here’s a short primer on access control: essentially, it means controlling who enters a location and when (including both days and times). The location may be an entire office building, a manufacturing location, a supply area, a building site or even just one room. And the people gaining access may be employees, contractors, maintenance personnel, or visitors.
When we talk about access control, it’s important to differentiate physical access from digital access. Physical access involves people or vehicles being allowed into a location. While digital access involves gaining entry to internal computer systems, databases or other digital systems. Both are incredibly important from a security perspective, but this guide will focus on physical access – specifically in offices or business spaces, as a component of effective space management.
b. How does access control work?
There are different types of physical access control systems, each with their own technical specifications. But there are five main ‘steps’ that apply generally to all such systems (a simplified code sketch follows the list below).
- Authorisation – In this initial stage, people are given permission to enter the premises, or specific locations on the premises, at specific times. A system administrator gives them access permissions based on a variety of criteria, including whether they’re an employee, contractor or visitor, and their role, department and more. These permissions (also known as authorisations or access rights) can be adjusted in the system for individuals or groups of people, as and when needed.
- Authentication – When someone approaches the premises, they present a credential, which could be a card, pin code, smartphone, QR code or key fob, for example. This credential (if activated) allows them to be recognised in the system and, ideally, validated as an authorised user. At this stage, the system also collects data on who is attempting to access the premises.
- Access – If the credential is validated, and the person has the correct access permissions, an electronic output signal is sent to the door, gate, elevator or other point of entry, so it unlocks and allows them to enter.
- Managing/Monitoring – System administrators may continually add, remove or alter permissions based on their company’s changing needs, and who is expected to be on the premises at what times. These administrators also monitor electronic entry logs to ensure that only authorised users are gaining access to the premises, and to stay abreast of any security threats.
- Auditing/Reporting – If there is a security threat (or even just suspicious activity), it’s vital that administrators and security personnel examine access logs closely. And, if necessary, share data with authorities. Companies must decide how long they should store access logs and other related data for, based on their security needs and any regulations they need to comply with.
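A heavily simplified sketch of steps 1-3 (with logging for steps 4 and 5) is shown below. All names, credentials, and the permissions schema are illustrative inventions, not drawn from any particular product:

```python
# Toy access control: authorization table, authentication, decision, log.
from datetime import datetime

# Step 1: authorization - which credential opens which doors, and when (24h)
permissions = {
    "card-1001": {"doors": {"lobby", "lab"}, "hours": (7, 19)},  # employee
    "qr-visit7": {"doors": {"lobby"},        "hours": (9, 17)},  # visitor
}

access_log = []  # steps 4-5: monitoring and auditing

def request_access(credential, door, now=None):
    now = now or datetime.now()
    entry = permissions.get(credential)                 # step 2: authenticate
    start, end = entry["hours"] if entry else (0, 0)
    granted = bool(entry) and door in entry["doors"] and start <= now.hour < end
    access_log.append((now.isoformat(), credential, door, granted))
    return granted                                      # step 3: unlock signal

print(request_access("qr-visit7", "lab"))  # False: visitor lacks lab rights
```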
c. Why is access control a must-have?
Traditional access control methods, such as posting a guard at the door or giving employees metal keys, have become outdated. And they’re woefully insufficient for today’s security needs. Aside from the potential for forced entry or human error, a reliance on keys (which can be lost, shared, copied, or worn down) presents a host of potential problems.
Also, traditional keys leave no data trail, presenting further security concerns and missed opportunities to collect meaningful information about building access and occupancy. They also don’t allow for any customisation or adjustments, such as allowing entry only some of the time, or on a temporary basis. And, unless one key opens every door the holder’s authorised for (including offices, bathrooms, conference areas, etc.), people end up carrying multiple keys – each with the same pitfalls described above.
In short, using a digital system for physical access control is a far superior way of achieving a secure work environment. And it signals more professionalism, discretion and legitimacy to potential clients.
d. What is an access control system?
As a working professional, you’ve probably already used a variety of access control systems already. Think about the last time you used a keycard at a hotel or scanned a QR code to get through a turnstile. Or you might have a fob to open your locker at work.
A physical access control system is essentially any electronic security system that uses identifiers to authorise entry and exits for people. These systems also record who’s accessed specific areas of a site. And this information can be critical when forecasting for facilities management and staffing, or keeping records for compliance and risk-management measures.
e. What is a visitor experience?
When we talk about visitor experience, we’re referring to someone’s personal, subjective response to time spent on unfamiliar premises – perhaps during a meeting, conference or scouting expedition – both during the event and afterwards. In corporate culture, visitor experience is getting lots of attention. It’s an indicator of the hosting company’s professionalism and can have a significant impact by helping to improve brand loyalty and trust. It can even play a part in sealing deals.
Aspects of visitor experience that make a difference include the:
- Ease of accessing the premises.
- Personal ‘welcome’ received upon arrival.
- Ability to gain access to the locations or people the visitor has come to see.
- Ability to arrive and leave promptly, without lots of waiting around.
- Sense of safety and security provided.
Much of this can be achieved by using integrated systems that allow instant recognition, authorisation and access to locations – all tailored to each visitor’s individual needs during a scheduled visit.
f. Who should use an access control system?
Almost any organisation concerned with securing their people, premises and assets could benefit from a physical access control system. In some sectors, access control is a necessity due to more complex security needs. These include government and defence, chemical and pharmaceutical companies, oil, gas and other utilities, manufacturing, finance, logistics, aviation, healthcare and data. But there is almost no organisation that wouldn’t benefit from a robust, carefully managed security infrastructure. And an access control system is a crucial component of this.
2. Types of access management systems
Access management systems have evolved significantly over the past decade alone. In part, due to the advancement and adoption of integrated digital systems that allow companies to align new software with legacy technologies. This has paved the way for more modern access control systems and top-of-the-line security. Let’s take a look at what’s currently available:
a. Traditional keys and keypads
Keys and keypads are ubiquitous methods of securing buildings, but they come with problems. Traditional metal keys can be lost, duplicated, shared or worn down, so that they don’t work anymore. All of which presents security and access concerns. As a result, there’s some speculation that keys may soon be obsolete.
Keypads, which generally rely on alpha-numeric codes, come with some of the same pitfalls as metal keys. Although there’s no physical item to misplace, a keypad entry code can be shared – potentially with people who may have no legitimate reason or authorisation to enter a building. Codes can also be forgotten, or present problems such as having been reset without all authorised users being made aware of this change. As a result, these too are increasingly being seen as outdated security measures.
b. Physical security escorts
Many companies still rely on security personnel to escort visitors to their destinations within a building. There are several problems with this. Visitors may, for example, find themselves waiting around in the lobby until an escort has returned from showing another visitor to their destination. It also means physical security is only as strong as the person on the job. Which leaves security vulnerable to human errors, lapses in judgment (a security escort allowing an unauthorised friend to access the premises, for instance) or brute force.
In a health crisis such as a pandemic or epidemic, reliance on physical security escorts presents additional problems as it requires face-to-face contact and more people to be present in the office.
While physical escorts still have a role in many companies’ security protocols, they’re better employed as human contact points alongside a robust digital system.
c. On-premise software
Many companies are shifting to on-premise software for their access control, which solves a host of security concerns beyond people forgetting keys or codes. These systems facilitate safe, secure people-flow through designated entry and exit points by offering identification, authorisation and guest-tracking capabilities.
A software-based physical access control system is fully digital, and this brings a range of benefits. It enables you to:
- Easily manage and change authorisations and permissions.
- Integrate access control with other systems (including those used for HR or visitor management, for example).
- Make regular updates to ensure you have the latest features and technology.
- Scale your system to your company’s changing needs and size – from small to medium to enterprise-level.
- Establish end-to-end encrypted security – the highest level of protection available against cyberthreats.
Software-based access control systems can be compatible with many devices, including card readers, biometric readers, and both wired and wireless locks. They often integrate easily with existing hardware or legacy systems and their on-premise software typically enables easy adaptation to specific needs.
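To make the ‘easily manage and change authorisations’ point concrete, here is a minimal sketch of how such a system might model door- and time-scoped permissions. The names are invented for illustration – this is not any vendor’s actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, time

# Hypothetical data model - illustrative names, not any vendor's actual API.
@dataclass
class Permission:
    door_id: str
    days: set            # e.g. {"Mon", "Tue", "Wed"} (strftime("%a") values)
    start: time          # earliest permitted entry
    end: time            # latest permitted entry

@dataclass
class CredentialHolder:
    badge_id: str
    permissions: list = field(default_factory=list)

def may_enter(holder: CredentialHolder, door_id: str, when: datetime) -> bool:
    """Deny by default; grant only if an explicit permission matches."""
    return any(
        p.door_id == door_id
        and when.strftime("%a") in p.days
        and p.start <= when.time() <= p.end
        for p in holder.permissions
    )
```

Revoking someone’s access then becomes a data update rather than a locksmith visit, and the same records can feed the audit trail discussed later.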
d. Cloud-based access control management
Cloud-based access control systems offer the functionality of on-premise software, without users or managers necessarily needing to be in the physical workspace to use the system. These systems can be remotely controlled and managed.
Thanks to the convenience of remote operability, cloud-based systems are widely regarded as being easy and quick to deploy, scale, update and operate from afar. As a result, they’re efficient and often economical solutions to access control, which became particularly relevant to companies seeking to leverage off-premise capabilities during the covid-19 pandemic.
For enterprises needing to comply with stringent cybersecurity and data protection standards, however, an on-premise system may be preferable – or sometimes even required.
3. Benefits of using access control systems
There are numerous benefits to upgrading an outdated security system and implementing a software-based access control system. These include superior protection for people, space and assets, as well as less obvious improvements in costs, ease of operation and overall value.
We’ll discuss some of these, below.
a. Physical security
Many companies already use encrypted systems for data storage or to safeguard proprietary information, but physical security systems are often woefully vulnerable. They can be at risk of people gaining more access capabilities than they should have, enabling them to infiltrate premises unchecked. And they can also be at risk of hackers taking advantage of holes in a weakly protected IT network to override a physical security system. For this reason, IT and security teams should work in close collaboration rather than in parallel.
A software-based security system can close these security gaps by leveraging end-to-end encryption so the network remains safe from all types of threats – both in-person and cyber-related. By using encryption, identification and secure communication, a company’s security system can be as well-fortified as its proprietary data.
b. Health and safety
During the pandemic, many companies were rightfully concerned with minimising the number of people in buildings and checking who had been in contact with whom and when. As well as creating a touchless environment wherever possible.
Software-based access control systems can help to facilitate all of these goals. They allow system administrators to carefully manage who has access to the premises at a given time (and who doesn’t). They keep accurate data about who has been on the premises, and where specifically they’ve been and when. And, perhaps most comfortingly for people entering the building, they can ensure there’s no need to touch doorknobs, keypads or security gates.
Given the concentration of germs on high-traffic door handles, the focus on touchless security systems seems likely to persist well beyond the pandemic.
c. Visitor experience
When a visitor, contractor or customer enters a workspace, their experience during the first few seconds can make a lasting impression. Unfortunately, this is likely to be a negative impression if, for example, they receive a lacklustre welcome or a disorganised or time-consuming signing-in process. And if they’re left waiting in a crowded lobby (particularly during a pandemic or epidemic) this can be read as a sign the company is unconcerned with their safety.
Instead, companies can create a great first impression and ensure a positive visitor experience by authorising people in advance. They can then authenticate their credentials in a fraction of a second on arrival and allow them to get on their way and easily access the locations they need to visit.
d. Compliance and audit trail
Organisations such as healthcare companies, certain government sectors, the military, and finance, accounting or legal firms need to ensure sensitive information is only accessible to authorised individuals. Encrypted, software-based access control systems protect important visitor data, keeping companies compliant with privacy laws such as HIPAA.
In the event of an audit, the logs kept automatically by a software-based system can prove compliance with privacy laws. They can clearly show that sensitive data was kept secure from everyone except those with the authority to access or share it.
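As an illustration of how such logs can vouch for their own integrity, here is a small hash-chained audit-trail sketch. Real systems typically also sign, timestamp and replicate their logs; this only shows the tamper-evidence idea:

```python
import hashlib, json
from datetime import datetime, timezone

# Append-only audit trail: each entry embeds the hash of the previous one,
# so any later edit or deletion breaks the chain and is detectable.
log = []

def append_event(badge_id: str, door_id: str, granted: bool) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "badge": badge_id,
        "door": door_id,
        "granted": granted,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def chain_intact() -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True
```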
e. Operational efficiency
The benefits of a software-based access control system aren’t only in security at the door. Software systems run backups and updates automatically, so data is stored safely and authorisations are always up to date. Support for all visitors can be improved by leveraging data collected by the system, so receptionists, hosts and administrators can welcome and direct them efficiently.
This is especially important for business continuity in case of emergencies or other disruptions to daily operations. And, without the responsibility of manually signing individuals in and out, and maintaining records of visits by hand, overworked employees have fewer tedious tasks to occupy their time. They can devote their attention more productively, and security administrators can rest easy knowing everything is under control.
f. Cost efficiency and commercial value
The cost of a security breach and consequent issues with compromised data or assets can be staggering. Which means the best defence is a good offence, and a robust, securely encrypted access control system is the best way to protect against risks.
A good physical access control system will also pay for itself in the long run by providing superior functionality, scalability and adaptability. Its software should integrate well with existing systems and yield lower maintenance costs over the years. And, perhaps most significantly, it will remain relevant and useful due to its continued adaptability, without needing to be replaced.
g. Cybersecurity and data protection
A software-based physical access control system is another IT system. And, if it’s not protected from cyberthreats such as hacking, it can leave your entire organisation vulnerable, along with your assets and data. Software-based access control systems must be protected – ideally, with end-to-end security. This can include complete encryption, rigorous authentication requirements, and regular software updates, so you’re prepared for threats and can avoid them becoming problematic.
As specific security standards are legally required across Europe (and within specific industries such as healthcare, finance, and defence), having a state-of-the-art access control system is becoming not only desirable, but essential.
4. How to choose the best access management solution
So, how do you vet and select the best access control system for your company’s particular needs? We recommend considering a few different factors:
a. Think about your end users
When considering an access control system, it’s important to consider several questions, including who constitutes a ‘visitor’. For example:
- Is the term visitor restricted to anyone unaffiliated with the company who may be stopping by?
- Do visitors include maintenance workers, contractors, and outside shareholders?
- Does the definition extend to anyone who enters the premises, such as regular employees?
- How often do most visitors frequent the secured premises?
- Are there a lot of repeat visitors or do most people visit for one time only?
- Are visitors usually announced or is their arrival spontaneous?
- What do visitors need to access relevant locations smoothly and safely, conduct their business, and be on their way?
Questions such as these are all important in determining what solution would work best for you.
b. Assess your access management needs
Also consider what else is important for your business in terms of managing access control. For example:
- What processes need to be in place to enable your security management team to operate effectively?
- Does your company need to comply with security or data privacy regulations?
- Will you need to verify visitors’ credentials before authorising their entry?
- Do visitors’ names need to be checked against global watchlists or other security databases?
- Should visitors sign an NDA or other legal agreement on arrival?
Requirements can vary according to your industry, individual company needs or area of the world. But a good access control system will ensure every access control need is met at every location.
c. Create an access control policy
An access control policy is essential, so everyone is clear what protocols must be followed and what processes the access control system needs to support. Consider, for example:
- How will a visitor gain authorisation and have their credentials verified?
- What should employees do upon arrival, and is the process different for employees visiting from another office?
- How do you ensure contractors watch essential safety videos and have their certification checked before getting on with their day?
- Should visitors be escorted or allowed to move about freely?
- How should hosts be notified?
- Should visitors wear badges?
- What is the procedure for people exiting the building after their visit is complete?
A good access control system will support every facet of your access control policy and facilitate whatever steps are necessary for each person to have a smooth, secure visit.
d. Ask the tough questions
With any situation where a vendor is being vetted, it’s important to establish their competence and legitimacy. For access control system providers, scrutiny is even more crucial.
Some important questions include:
- How long have they been in business?
- What experience do they have in supporting companies with stringent privacy concerns?
- Have they passed any security penetration tests?
- Are they compliant with GDPR or other data privacy regulations?
- What do past and current clients say about them?
It’s also important to ask questions about the implementation process:
- What’s their process for integrating existing network architecture with a new software system?
- What support will they provide during implementation and beyond?
- How can the system be scaled and adapted in the future?
- Does their system use open standards?
This last question relates to product interoperability, integration capabilities and implementation flexibility. An access control system based on open standards offers more flexibility to adapt to your needs; connect with your existing or chosen hardware; and integrate with other technologies and systems, such as your HR database or visitor management system. The Open Supervised Device Protocol (OSDP) has become the global standard and is supported by AEOS.
e. Design the right system
You’re the expert on what your company needs from its access control system. It’s crucial your access control provider collaborates with your security and IT teams to determine the best system to meet your specific needs. Consider, for example:
- The size of the premises and the number of employees and entrances and so on.
- What physical infrastructure is in place?
- Do any other spaces, such as car parks, need to be secured?
- What level and frequency of access do the various groups of people need?
- What credentials must people show to gain authorisation?
- What precautions need to be in place relating to pandemics or epidemics?
f. Test solutions with small focus groups
A great way to assess how well a system is working is to test it with small groups and ask for feedback. We recommend using members of different groups – general employees, senior management, and people from HR, security and IT teams, for instance. This gathers a variety of perspectives and helps to ensure all needs are met.
Focus groups can be useful for gauging user interface and experience within a technology platform, and for identifying problems early so they can be addressed. This can also help narrow down what users truly want, so the system can better meet their needs.
g. Convince internal stakeholders
When implementing a software system that will impact everyone in the company, it’s important to get people on board. But the input and support of certain stakeholders is particularly crucial when rolling out a new access control system. Obviously, this includes c-suite executives and HR – but it also includes IT, security personnel, and receptionists.
If your company’s work involves sensitive data, you’ll probably need to convince the legal department too, to alleviate concerns about compliance with data privacy regulations.
Lastly, the experience of employees, who would regularly interact with this new software, should always be top of mind.
5. The future of physical access control
What do access control, and visitor management in general, look like for the future? Here are some of the innovations we expect to see (more in our Physical Access Control Benchmark Report 2022):
a. Adaptive access control
Access control systems that are adaptive can change easily to meet the needs of the moment – not just the user’s needs but needs relating to emerging security risks and new legislation, for example.
The technology allows the system to be scaled easily and provides a high degree of control to easily change permissions based on departments, roles, dates, times and sites. It integrates seamlessly with existing technologies and offers a user-friendly interface and experience. And it can adapt to different types of security threats, so the level of protection can be instantly dialled up or down depending on known risks – all while keeping the system easy to use.
The result is an access control system that offers long-term value by suiting present and future needs.
b. Biometric technology and facial scanning
i) What is it?
Biometric security involves using people’s biological templates – their fingerprints, palm prints, eyes (usually irises), facial features or even palm or finger veins – to identify them. Sometimes, biometric scans are the sole means of authorising access. Other times they’re used as part of a multi-factor authentication system, where the biometric scan accompanies the user swiping a card, typing in a pin code or using another method of identification.
ii) What are its advantages?
Biometric security is highly accurate. It also offers a smooth user experience in that authentication can happen quickly, and without using objects that are often misplaced such as keys, cards, fobs or QR codes.
Biometrics also offer a high level of security, in that it’s significantly more difficult to copy a fingerprint than a card or other identifier. And a fingerprint can’t be shared with someone who isn’t present in the same way a pin code can, for example.
iii) What are its disadvantages?
There are some downsides to biometric technology. One issue is that access can’t be assigned in advance, as it can with a pin code or QR code. The person must be present to have their fingerprint or other body part scanned.
Biometric technology can also be slower to use. If lots of people are trying to enter an area simultaneously, it can take longer for everyone to have their faces or fingerprints scanned than it would for them to swipe cards or fobs.
iv) Privacy issues
Biometric systems are also subject to greater scrutiny as sources of potential data vulnerability. If a system is breached, people’s biometric templates can be stolen. Which is why GDPR has special regulations relating to the collection and storage of biometric data.
v) The bottom line
Biometric scanning systems are certainly gaining popularity due to the high level of accuracy and convenience they offer. And also the fact that some options, such as facial scanning, are touchless and so offer improved hygiene. But technology will need to catch up to reduce some of the concerns these systems present, particularly regarding data privacy.
c. The implications of evolving IoT
The progression of IoT (internet of things) means that increasingly more devices are gaining wireless connectivity. Which is great news, as an even wider array of devices can be integrated with your physical access control system. This enables greater control and convenience and can help fulfil ambitions for creating an efficient smart building. When someone uses their access control card to enter a meeting room, for example, the thermostat and lighting can automatically adjust to their pre-selected levels. A fridge with IoT capability can send alerts if its stocks don’t match up to the number of people in the building that day. And a lock connected wirelessly to your access control system can send error messages – and be easily fixed or updated – without a technician stepping foot in the building.
6. Continue your research and discovery
No matter what access control solution you settle upon for your company, deciding on a system and implementing it is an evolving process. It requires ongoing assessment of your company’s needs, and reflection on how any given solution can best supply your access control and security must-haves. Remember that:
- Concerns about data privacy and security should always be top of mind.
- Providing a good user experience is important.
- The ability to integrate your access control system with other technologies will increase its usability and the value it brings to your organisation.
- Your system must be able to adapt and scale to your company’s needs.
- Due to the pandemic, health concerns remain a priority, in addition to physical safety and cybersecurity.
We invite you to continue your research, and to discover what type of access control system best supports your mission.
Please reach out if we can answer any questions. Safety and security are our goals, too, and we look forward to partnering with you.
|
<urn:uuid:5b0d4fe7-0aa8-49f3-bc50-2893fe602793>
|
CC-MAIN-2022-40
|
https://www.nedapsecurity.com/the-ultimate-guide-to-physical-access-control-systems-in-2022/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00637.warc.gz
|
en
| 0.941058 | 5,760 | 2.640625 | 3 |
To minimise user friction, many apps cache the user’s username and password. However, storing passwords in a mobile device poses a security risk. To mitigate this risk, developers are looking at OIDC and OAuth 2.0 tokens. But do these make sense?
Read more about user and app authentication in our previous blog post.
OIDC and OAuth 2.0
OpenIDConnect (OIDC) and OAuth 2.0 are standards that are based on tokens. In its simplest form, a token can be viewed as a randomly generated string (i.e., password) that – when presented to the backend – will grant access to specific resources.
Contrary to popular belief, OIDC and OAuth 2.0 are not user authentication standards; neither specifies how users should authenticate. These standards only define how – after authentication – tokens are used to gain access to resources. Still, OIDC and OAuth 2.0 are heavily used in mobile apps for authentication purposes.
Replacing cached credentials by tokens
The classical implementation of OIDC/OAuth 2.0 in mobile apps is to replace cached credentials (e.g., password storage). After performing user authentication, tokens are obtained (a minimal sketch of the refresh flow follows the list):
- An access token is used to authenticate towards the backend. Access tokens typically only have a limited lifetime.
- A refresh token is used to request new access tokens. These refresh tokens usually have a long lifetime (sometimes indefinite) and allow for de facto creation of long (sometimes infinite) sessions.
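A minimal sketch of the refresh flow in Python, assuming a hypothetical token endpoint (the grant_type value is the standard one from OAuth 2.0):

```python
import requests  # third-party HTTP client; any client works

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical endpoint

def refresh_access_token(refresh_token: str, client_id: str) -> dict:
    """Exchange a long-lived refresh token for a fresh short-lived access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",   # standard OAuth 2.0 refresh grant
        "refresh_token": refresh_token,
        "client_id": client_id,
    }, timeout=10)
    resp.raise_for_status()
    # Typically contains access_token and expires_in; some servers also
    # rotate and return a new refresh_token.
    return resp.json()
```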
The risk of passwords being stolen on a mobile device is exactly the same as the risk of refresh tokens being stolen. Both need to be accessible to the app, in order for the app to present it to a backend. That access tokens only have a limited lifetime is irrelevant, once you obtain the refresh token, you can keep on requesting new access tokens for a long time (sometimes forever).
JWT tokens contain cryptographic signatures, so they must be more secure than ‘plaintext’ tokens? Unfortunately, this is not the case. A JWT token is digitally signed by a backend, not by the app or device (hence it is the backend that authenticates the token, not the app nor the device). Presenting the token is sufficient to gain access. The main reason for using JWT tokens, is to reduce the storage requirements on the backend, allowing for stateless authentication. Instead of storing a list of valid tokens, one can simply validate the signature instead.
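To make the ‘presenting the token is sufficient’ point concrete, here is a sketch of HS256 JWT verification as a backend might perform it, using only the Python standard library. (Real code must also pin the expected algorithm from the header and check claims such as exp.)

```python
import base64, hashlib, hmac, json

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Verify the signature over header.payload, then trust the claims.
    Note: whoever *holds* the token can present it - possession, not the
    signature, is what grants access."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```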
Accessing resources and API's
The primary purpose of OIDC and OAuth 2.0 is federated authentication: situations in which you authenticate towards a different organisation from the one that holds the resources you are trying to access. Imagine situations where you use a central login (e.g., your work login or a social login) to access a SaaS platform operated by another party, potentially with a client or intermediate system operated by yet another party.
Similarly, this could also be used in mobile apps, where the user – after user and/or app authentication – obtains tokens for accessing (third-party) resources and API’s. After authentication, the app will receive an authorization code, which it can then exchange for the required tokens.
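A hedged sketch of that code-for-tokens exchange, using the authorization-code grant with PKCE that is generally recommended for native apps – the endpoint URL and parameter values are illustrative:

```python
import requests

def exchange_code_for_tokens(code: str, client_id: str,
                             redirect_uri: str, code_verifier: str) -> dict:
    """Authorization-code grant with PKCE (RFC 7636)."""
    resp = requests.post("https://idp.example.com/oauth2/token", data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code_verifier": code_verifier,  # proves this app started the flow
    }, timeout=10)
    resp.raise_for_status()
    # access_token, id_token (OIDC), and possibly a refresh_token
    return resp.json()
```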
Depending on your mobile app’s architecture, it could very well be that the mobile app does not need to concern itself with OIDC/OAuth 2.0 flows. Typically the app will communicate with one specific backend, which doubles as the OAuth 2.0 client and accesses the resources and API’s on behalf of the mobile app. In this case, the backend stores and handles the tokens, not the mobile app itself.
In most cases a regular cookie suffices for keeping short-term sessions alive, to avoid continuously asking the user to authenticate. Regardless of the usage of tokens and/or cookies, the crucial question still remains: how to obtain one?
Did this article spark your interest in better user and app authentication? Learn more about the nextAuth approach and get deep technical insights.
|
<urn:uuid:0ad5ded3-484d-493b-a5a6-646dd5c640c0>
|
CC-MAIN-2022-40
|
https://www.nextauth.com/mobile-app-authentication-tokens/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00637.warc.gz
|
en
| 0.908435 | 850 | 2.953125 | 3 |
Consumers and network administrators often become complacent about security, trusting their anti-intrusion efforts to security product self-updates and older security technology.
RSS reader software can heighten the potential for intrusion, warn some security experts. IT managers often fail to ensure that their networks are not at risk from the use of RSS feeds linked through Web browsers and e-mail clients.
That oversight can punch a gaping hole in the security barriers. One of the biggest problems for users of RSS reader software, according to Ray Dickenson, senior vice president of data-security firm Authentium, is exposure of their data from cross-site linking.
“RSS is the largest possible market hackers have available. Even Mac and Linux users can be susceptible when surfing,” Dickenson told TechNewsWorld.
What Is RSS?
As with much of the terminology behind computer innovations, RSS has multiple meanings that describe the same function of delivering news quickly. RSS usually stands for Really Simple Syndication or Rich Site Summary.
Either way, the term RSS names the technology that allows a Web site owner to share content among different Web sites in XML format. Web publishers can post a link to the RSS feed so users can read the distributed content on the site displaying the RSS link.
RSS allows a computer user with a browser add-on or standalone software reader to find and view information. Computer users can subscribe to specific types of information, such as news categories or product information, all delivered in one viewing window.
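For illustration, here is a trimmed-down RSS 2.0 document and the few lines of Python needed to read it (the element names follow the RSS 2.0 format; the feed content itself is invented):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, trimmed to the essentials.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>https://news.example.com/</link>
    <item>
      <title>Headline one</title>
      <link>https://news.example.com/1</link>
      <description>Summary of the story.</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```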
RSS technology is an alternative means of accessing the vast amount of information on the Web. Instead of users browsing Web sites for information of interest, RSS pushes that information directly to the users.
Convenience Carries Caution
RSS technology is part of the growing Internet phenomenon known as Web 2.0, which allows computer users to enjoy enhanced connectivity to information and computer services hosted on remote Web sites.
The problem, however, is that Web 2.0’s interconnectedness makes it easier for little snippets of information in an RSS feed to slip malware into computers.
“RSS is hot technology. It is very easy to set up RSS feeds. Security takes a back seat to convenience,” said Dickenson.
RSS syndication itself is not the culprit. The speed and ease of distribution make RSS an ideal delivery vehicle for malware along with information, he warned.
High-Tech Hiding Places
One of the basic premises of best practices for safe computing, according to Dickenson, is to block dangerous code. However, RSS feeds make it possible for hackers to engage in cross-site scripting using forms on a Web site that users fill in with personal information.
“It is very hard to trace this type of malware back to its source,” Dickenson said. “If a user clicks on links while accessing e-mail at the same time, cross-site scripts can allow a hacker to peek into the files of even Mac and Linux computers.”
The potential for cross-site scripts already existed before RSS technology became popular, but RSS gives hackers more ready access, he said.
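One basic reader-side defence is to treat every feed field as untrusted and escape it before rendering, so embedded script is displayed rather than executed – a minimal sketch:

```python
import html

def render_item(title: str, description: str) -> str:
    """Escape untrusted feed fields before inserting them into a page,
    so embedded <script> tags show up as text instead of running."""
    return (f"<h2>{html.escape(title)}</h2>"
            f"<p>{html.escape(description)}</p>")

# A hostile feed entry becomes inert markup:
print(render_item("Free prize!", "<script>steal(document.cookie)</script>"))
```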
Service providers and IT managers have to be sure that their security tools are able to deal with these threats, said Dickenson.
RSS technology and cross-site scripting are worrisome factors, acknowledged Paul Henry, vice president of technology evangelism for Secure Computing. RSS has real potential as an attack vector, he added.
“Most RSS readers don’t validate the content. Only a handful of the reader products let users specify the types of downloads to permit,” Henry explained.
When the cross-site scripting element is factored into the equation, the seriousness of the attack potential is very evident. All someone has to do is put a button for free sign up on an infected Web site. The RSS reader does not check on the contents, he said.
Threat vs. No Threat
However, some security gurus disagree with the view that RSS feeds are the ultimate threat. Andrew Jaquith, program manager for security research at the Yankee Group, countered that RSS is not even close to the largest possible addressable market.
“RSS is still a niche feature used by not more than 10 to 20 percent of users, although that number is bound to increase. General Web surfers are far easier to get to. A far bigger target is the Web mail providers (Gmail, MSN, etc.),” Jaquith said. “This has been a fertile area for security research of late.”
While Jaquith agrees that the newness of RSS technology does position it as a source of potentially higher risks, he does not see the threat as being severe enough yet to say the sky is falling. He does see a basis for concern because the new technology reflected in RSS readers is plugged into older technology that already has security flaws.
“The operating systems, browsers and third party packages just haven’t been around that long. So they are likely to have had less scrutiny by outside researchers. That, in itself, presents heightened risk,” he explained.
Some security vendors are beginning to address the threat posed by RSS technology. One solution is for service providers and network administrators to protect their users from these data dangers.
Dickenson said Authentium’s Extensible Service Platform (ESP) for Enterprise addresses that concern by managing antispyware, antivirus and content-filtering solutions. It also integrates other end-point software applications from multiple vendors through a single management interface.
Henry added that Secure Computing’s Webwasher product takes a proactive approach that is able to identify RSS threats by analyzing the entire traffic stream entering a computer or network from the Internet. He said Webwasher adds another layer of security on top of the company’s reputation-based filtering service.
Matter of Degrees
Jaquith said an even bigger threat than RSS feeds would be the ad networks that are compromised. For instance, imagine a hacker owning a company such as Internet advertiser DoubleClick and installing a rootkit on it.
“That’s instant infection for 100 million users. It makes RSS looks like a kid’s toy, which, in comparison, it is,” he concluded.
|
<urn:uuid:35b0f787-3a6a-4d1a-bffa-706962c03f73>
|
CC-MAIN-2022-40
|
https://www.ecommercetimes.com/story/is-really-simple-syndication-really-secure-56134.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00637.warc.gz
|
en
| 0.91987 | 1,346 | 2.703125 | 3 |
The FBI has issued a warning that cybercriminals have been applying for remote IT positions using deep fakes and personally identifiable information (PII) obtained from the internet to access systems, passwords, and sensitive data. According to a warning from the FBI, complaints about using deep fakes and stolen PII in remote job applications have grown significantly from last year.
Deepfakes are false computer-generated audio or visual representations of a real person, which are increasingly used in scams. The occupations most frequently targeted in this new scam are those in IT and computer programming. The imposters intend to get employed for the remote position to gain access to customer financial information, corporate IT databases, and proprietary data, all of which can subsequently be stolen.
To generate their deepfakes, criminals use credentials and pictures they have acquired on the internet. They can pose as a legitimate applicant in a virtual interview using fake video or audio. The FBI explains that fraudulent interviews contain specific telltale indicators that the interviewee might not be genuine.
“The actions and lip movement of the person seen interviewed on camera do not completely coordinate with the audio of the person speaking,” the advisory notes. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually,” it continues.
Pre-employment background checks have also flagged that the information used to apply for the job belongs to another individual. Because it is simple to recreate an identity using leaked information, it is crucial to report stolen PII as soon as it occurs. According to the UK Information Commissioner’s Office (ICO), “your name, address, and date of birth provide enough information to create another ‘you.’”
Deepfakes are increasingly being used in cybercrime. Elon Musk, the CEO of Tesla, was used in a scam circulated on YouTube last month to defraud individuals of their cryptocurrencies, Bitcoin and Ethereum. Online thieves were taking over channels and accounts, changing the design to look like they were from Tesla, and publishing fraudulent deepfake videos of Musk encouraging viewers to take part in fake bitcoin offers. According to the BBC, the scammers made $243,000 in just over a week. YouTube has come under fire for its slow removal of false content.
Many people have expressed worry that artificial intelligence capabilities may be exploited for criminal objectives and have advised against their usage and development. While deepfakes can be used for lighthearted entertainment, cybercriminals also exploit the technology to cause significant harm. However, deepfake risks can impact anyone. Famous people are the easiest targets for deepfake fabrication, mainly if they are used to promote false information. Being skeptical of everything you see online is crucial to online safety and using the internet. Now that you know about the existence of deepfakes, you might think twice before you believe a video of a government official making an outrageous claim.
|
<urn:uuid:98e92097-950a-4580-9f8c-bf48172818bc>
|
CC-MAIN-2022-40
|
https://www.harbortg.com/blog/deepfake-scams-are-on-the-rise?hsLang=en
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00637.warc.gz
|
en
| 0.945233 | 596 | 2.609375 | 3 |
A new study of privacy concerns from the University of Notre Dame upends the notion that traditional “reasonable expectation of privacy” models also apply to digital tracking and data collection. Respondents had nuanced views of how location data collection and data privacy should work in public spaces, views that run counter to traditional notions that privacy is very limited when in a public area.
The study also revealed significantly varying levels of trust depending on the recipient of the data, and that privacy concerns also vary depending on how the location data collected through these methods is presented when consent is requested e.g. as GPS coordinates versus the name of a place.
Attitudes about location data and collection of sensitive personal information
In the broad strokes, people’s level of trust in location data collection tends to correspond with their trust in the source. For example, one of the most trusted sources would be a parent or family member using tracking to find a relation. Third-party data brokers and aggregators have the lowest levels of public trust.
Comfort with the source of data collection is not the sole determining factor, however; circumstances also come into play. For example, people tend to trust an employer to use location data for real time tracking while they are on the clock. They do not want that tracking extended to off-work hours, taking a strong stand against employers knowing where they go in their personal life.
The potential sensitivity of individual actions also matters. People are much more resistant to tracking that identifies them as being at a protest or political rally as opposed to revealing that they visited a particular restaurant or store.
Factors that did not seem to be as important were the duration of data collection, and the device or means by which it was collected. So while people are generally opposed to the collection of biometric information, they do not put a particular distinction on it being collected by a public CCTV camera versus a social media app that they opt to use. Voluntary supply of such information by a data subject does not seem to correlate with an increased willingness to have it traded or sold for other purposes.
Drilling down on privacy concerns
The paper opens by mentioning the relative lack of study of privacy concerns and attitudes toward use of location data and the collection of location histories. In terms of things like public policy and systems design, there has been something of an assumption that the social terms of personal physical privacy equally apply in the digital space.
Part of the issue with this thinking is that the long-held conception of private and public spaces does not account for the very rapid technological developments of the past decade or so. Public spaces are now laden with cameras, microphones and various means of tracking movement and location data. The traditionally private home has seen incursions as well, primarily “smart” technologies that phone home with all sorts of data and may even include always-on microphones.
The information presented in the study of privacy concerns thus challenges the stock conception of the “reasonable expectation of privacy” in legal terms. The responses given by the 1,500 survey subjects indicate that people have more of an expectation of privacy in both areas than devices are currently affording them.
In terms of privacy concerns about government use of location data, the survey presented respondents with various scenarios in which the FBI and local city government was tracking people’s movements. The respondents were more favorable toward the FBI when the tracking was limited to places only, with favorability dropping off the longer the tracking was maintained. City services followed a similar track with a lower initial favorability rating. Respondents had very little patience for employers, commercial interests and data aggregators doing any sort of location sharing for any amount of time.
Mobile device types and operating systems all had very similar arcs that descended in favorability with the duration of tracking. Respondents were most willing to let social networks have access to location data and least willing to provide it to Fitbit or license plate recognition systems.
In terms of privacy concerns about data gathering and tracking in the home, respondents tended to reject any sort of outside element having access to location data while they or their family were in their residence. Respondents only showed favorability in this scenario when mobile app data was used by one family member to determine if another was at home.
The study interprets the results as showing that data privacy regulation is presently too focused on notification and consent during the handoff of data; regulators need to pay more attention to what is done with location data after it is handed off. The study also notes that the collection of personal data by law enforcement might meet with less friction if the focus is put on the legitimate agency doing the collection rather than the method by which it is collected.
|
<urn:uuid:34d71183-7a5e-4fe9-9875-53d6c1562f77>
|
CC-MAIN-2022-40
|
https://www.cpomagazine.com/data-privacy/consumer-privacy-concerns-vary-with-location-social-circumstances-expectations-of-privacy-do-not-necessarily-mirror-offline-models/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00637.warc.gz
|
en
| 0.965741 | 955 | 2.578125 | 3 |
The Log4j vulnerability has been described as “the single biggest, most critical vulnerability of the last decade”. This vulnerability is impacting everyone on the internet from financial institutions to government entities.
Log4j is an open-source logging tool that exists in nearly every server and enterprise software stack. The vulnerability was disclosed in December 2021, and it has since drawn concern and panic across the internet. The concern is around patching the vulnerability. Unless the vulnerability is patched, it can grant easy access to internal networks where individuals can steal valuable data, install ransomware, delete critical information, and much more.
Breaking down the Log4j vulnerability in layman’s terms:
You might be pretty savvy about IT and technology but still not grasp the complexity of this issue. You may be wondering, “does this vulnerability affect me personally?” It’s not necessarily something an individual needs to worry about directly, but because so many large online services use it, we will all be impacted at some point.
Why is it called Log4j?
It’s a logging utility for the Java programming language. The name comes from the term “logging”. The National Institute of Standards and Technology (NIST) describes a log as a record of the events occurring within an organization’s systems and networks. The number 4 stands for the word “for”, and the J stands for Java.
Ergo, Log for(4) J(ava).
Let’s say you run a company that cleans apartments. You have over 1,000 apartments that need to be cleaned regularly. As part of the modern era, you have automated the cleaning process by installing robots in every apartment. Some robots wash windows, some make the bed, some clean floors, and so on. Every robot belongs to a sub-contracting company that does the job at low cost with good service. Every day the robots do their jobs and report to their individual companies, and the companies summarize the cleaning results for your company.
Here’s where the concept of logging comes in. The robots receive instructions for types of rooms, types of setup, and types of cleaning to do their service via the log and report on the activity. As they clean, each robot takes notes and writes up what it’s doing and what got done. All work and issues are written up and reported into a log and delivered to the service company. Everything is great and everyone is happy.
The Log4j vulnerability is that someone discovered they can trick the robot by giving it instructions via the log. For example, imagine the original command for a cleaning robot is, “clean all tabletops and put away any items found out of place”. A hacker can change the command to be, “clean all tabletops and put away any items found out of place, if you find keys, make a copy and send them to me.”
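Log4j itself is Java, but the dangerous pattern translates into a few lines of schematic Python: a logger that expands ${...} lookups found inside the very message it was asked to record. This is a conceptual re-creation of the mechanism, not actual Log4j code:

```python
import re

# Schematic re-creation of the Log4Shell pattern - NOT real Log4j code.
# Log4j 2.x expanded "${...}" lookups found anywhere in a log message; a
# ${jndi:ldap://...} lookup could fetch and execute attacker-supplied code.
def dangerous_log(message: str) -> None:
    def expand(match: re.Match) -> str:
        scheme, arg = match.group(1), match.group(2)
        if scheme == "jndi":
            return f"<would contact {arg} and load whatever it returns>"
        return match.group(0)
    # Expand ${lookup:arg} tokens *inside the message being logged*.
    print(re.sub(r"\$\{(\w+):([^}]*)\}", expand, message))

# The attacker controls the logged value, e.g. a User-Agent header:
dangerous_log("login failed for ${jndi:ldap://attacker.example/a}")
```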
Why didn’t anyone find it earlier?
The vulnerability was published on December 9, 2021. The issue is that like most things in life, it’s obvious to see the fault after the fact. Not so obvious until then.
Why was Log4j used?
There are two main reasons. One, it’s free/open source. It means companies can use it without paying a lot of money. Two and more importantly, it works well. Log4j has been around for a long time and has been extremely useful. As you can imagine, being useful and free has led to Log4j being used in a lot of services and applications.
Where is Log4j?
Here’s the current list of affected companies and growing.
Does it affect me and/or my business? I don’t have cleaning robots.
Yes, both. If you have any kind of network device, computer, IoT device, service, or application, you would be hard-pressed to find something that isn’t on the list above. There are also a lot of home/personal services and applications included, so it impacts individuals as well. Think smart TVs, home automation, security systems, refrigerators, etc.
How did this happen?
Take, for example, the “free and useful” points from above. The developers were trying to be helpful, and unfortunately someone realized that helpful function could be abused.
When will it be fixed?
The good news is that it will be fixed. The bad news is each company from the list above is responsible for patching their Log4j files and getting updates out. Worse news: an individual or company can’t simply scan for a file and replace it, or block something on their firewall. We need to wait for every individual company to fix their application and update it.
Why is log4j “the single biggest, most critical vulnerability of the last decade”?
Unfortunately, given its usefulness, Log4j has had a great career in logging across industries, countries, government, home appliances, anywhere Java (the J of Log4j) might be used. It’s been implemented far and wide.
What can I do?
Update and patch your software often.
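As a concrete example of what that looks like in practice, a small triage script can locate log4j-core jars on disk and flag old versions. Treat the 2.17.1 threshold below as an assumption and follow your vendor’s advisories; jars nested inside wars/ears need deeper inspection:

```python
import pathlib, re

# Illustrative triage script: find log4j-core jars and flag versions
# below 2.17.1 (affected ranges vary per advisory - verify against your
# vendor's guidance before acting on the output).
PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: str) -> None:
    for jar in pathlib.Path(root).rglob("log4j-core-*.jar"):
        m = PATTERN.search(jar.name)
        if m:
            version = tuple(int(x) for x in m.groups())
            status = "check/patch" if version < (2, 17, 1) else "ok"
            print(f"{jar}  {'.'.join(map(str, version))}  {status}")

scan("/opt")  # scan wherever your applications live
```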
If you are concerned about your cyber risk or don’t feel like you know where to start with cybersecurity, look no further than CRI Advantage. We want you to feel confident in their cybersecurity procedures and trust that your data is secure and protected. That’s why for over 20 years, we have provided governmental and private organizations with cybersecurity experts to help guide their processes, identify weaknesses, and protect them from cybersecurity threats.
Contact CRI Advantage to book your cybersecurity consultation.
More about the Author:
As an experienced Information Security professional, Leo has focused on IT Security Operations, IT Governance, Secure Development, Compliance, Risk, and Privacy. His experience blends a diverse mix of small and Fortune 100 companies and a real-world understanding of the challenges and opportunities of PCI, SOX, PII, HIPAA, NIST, and international regulatory requirements.
|
<urn:uuid:f843dd24-58ef-4e7c-b777-468e8b78a756>
|
CC-MAIN-2022-40
|
https://criadvantage.com/log4j-vulnerability-explained-by-a-cybersecurity-expert/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00037.warc.gz
|
en
| 0.951065 | 1,260 | 2.90625 | 3 |
You thought you had the basic concepts of edge computing down pat, and then someone had to come along and throw the word “mobile” in front of it.
[ What's the latest in edge and 5G? See Red Hat's news roundup from Mobile World Congress 2022. ]
While we’re well familiar with mobile and might be able to deduce its meaning when paired with edge, it actually gets a little more confusing: Mobile edge computing (MEC) has given way to a more current term with the same acronym: Multi-access edge computing. So what is it?
One way to think about MEC is that it’s like the edge of the edge – the outermost edge being our ubiquitous mobile devices and applications and the copious amounts of data they both consume and produce. And this outermost edge is going to grow exponentially (again) with the advent of 5G.
[ Get a shareable primer: How to explain edge computing in plain English. ]
“MEC is all about the intersection of the wireless edge and the infrastructure edge – essentially where mobile networks and the internet meet and hand off traffic,” says Jacob Smith, VP of strategy at Equinix Metal.
“By placing traditional digital infrastructure next to mobile networks (and working out all the technical details to translate between the two), operators are able to drive substantial improvements in performance and latency for new use cases like mobility, gaming, video streaming, and IoT – all of which are increasingly enabled by a combination of wireless and traditional internet infrastructure,” Smith says.
MEC is not just a concept; it’s also a standards framework developed by the nonprofit group ETSI. Smith notes that ETSI pivoted the term from “mobile” to “multi-access” around five years ago to reflect the dramatic expansion of connected devices and applications. This isn’t just a smartphone story. MEC is essentially about the relationship between edge computing and mobility in the broadest sense, meaning video cameras, mobile or remote medicine, IoT in all its iterations (including industrial IoT), gaming (including AR/VR), connected vehicles, and many other contexts.
[ Want to learn more about implementing edge computing? Read the blog: How to implement edge infrastructure in a maintainable and scalable way. ]
The MEC and 5G connection
Put another way: MEC basically means colocating edge devices with mobile network infrastructure, says Dan Florence, a senior manager for the networking segment at Micron. Florence notes that MEC shares the same goals as edge computing in general: moving more computing functions closer to where data is generated and decisions are made to enable faster, more efficient responses. “This in turn is expected to open up new applications requiring low latency and/or high bandwidth,” Florence adds.
Mention “low latency and/or high bandwidth” in an IT conversation these days, and you’re almost certainly going to hear about 5G.
[ Related read: 5G: What IT leaders need to know. ]
Enter Raj Radjassamy, 5G wireless segment leader at ABB Power Conversion, with a particularly focused definition of MEC: “It is an edge server in a telecom setting – specifically for 5G.”
5G is commonly seen as the technology, paired with edge computing architecture, that will solve one of the most pressing challenges of an increasingly connected world – latency – while simultaneously delivering high bandwidth (i.e., lightning-fast speeds).
“At a basic level, latency is the time delay between a user taking an action and then seeing a response,” says Dan McConnell, chief technologist at Booz Allen Hamilton. “It is the difference between clicking a button and waiting five seconds for a response versus half a second. And with the explosion of data being collected and processed for high-intensity AI/ML algorithms, there is a high premium on ensuring that data-driven insights are able to reach users as fast as possible.”
Edge servers in any context can boost performance and reduce latency by bringing compute and other resources as physically close to where they’re needed as possible. Since 5G is all about high speed and low latency, the two technologies pair well. Actually, the relationship is deeper than that: Widespread availability of 5G networks – and the massive amounts of hardware and software that will connect to those networks – will actually require MEC.
“We know 5G could improve everything from simple video conferencing to telemedicine to operations on a remote oil field,” says Jeffrey Ricker, CEO and co-founder of Hivecell. “However, 5G does nothing for the network backbone from the towers to the data centers. As such, 5G could easily overwhelm the fiber networks, the data centers, and the cloud.”
MEC is focused on bringing the power of those resources closer to where they’re needed.
“MEC is the culmination of the trend where everything moves to the cloud, and then the cloud in turn moves closer back to everything,” says Amit Bareket, CEO and co-founder of Perimeter 81. “The increasingly sophisticated and critical applications that we run must connect to hardware somewhere, which hosts it and serves it to us.”
[ Get a primer in edge servers: How edge servers work. ]
A beneficial relationship between edge and hybrid cloud
If that fundamental process takes too long, you could argue that the promise of 5G (and intersecting trends such as IoT) will be broken. This is yet another connection point between edge computing and the hybrid cloud model. As Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat, explains, the former complements the latter because “centralized computing can be used for compute-intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time.”
While MEC has wide-ranging possibilities across business, government, and consumer contexts, it is particularly important in any high-stakes scenario, such as medicine, public safety, the military, or autonomous vehicles.
“One of the use cases of 5G is ultra-reliable low latency communication (uRLLC) that involves self-driving vehicles, remote medical surgeries, EMS, and more,” Radjassamy says. “All those applications are mission-critical and cannot tolerate long latencies. uRLLC targets less than 1ms of latency in communication between the end user and server, and it is the MEC that would help to make it a reality.”
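A back-of-envelope calculation shows why a sub-millisecond budget forces compute to the edge. Light in fibre covers roughly 200 km per millisecond, so round-trip distance alone consumes the budget quickly (the 0.3 ms processing allowance below is an assumption):

```python
# Back-of-envelope latency budget for a 5G uRLLC use case.
FIBRE_KM_PER_MS = 200  # light in fibre travels ~200,000 km/s

def round_trip_ms(distance_km: float, processing_ms: float = 0.3) -> float:
    propagation = (2 * distance_km) / FIBRE_KM_PER_MS  # there and back
    return propagation + processing_ms

for km in (10, 100, 1000):  # tower-adjacent MEC, regional DC, distant cloud
    print(f"{km:>5} km away -> {round_trip_ms(km):.2f} ms round trip")
# ~0.40 ms at 10 km, ~1.30 ms at 100 km, ~10.30 ms at 1,000 km:
# only compute placed near the radio network can meet a <1 ms target.
```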
So while MEC might improve, say, your augmented reality video game experience, there are some settings where it might literally be a matter of life or death.
“These emerging capabilities will have transformative effects on what’s possible for defense, humanitarian assistance, and disaster recovery missions, and other areas where edge computing and processing can save lives,” says McConnell from Booz Allen Hamilton.
MEC extends the edge
MEC is fundamentally about extending the edge: It pushes compute power even closer to end users, notes Ryan Shultz, enterprise architect at Involta. This has significant implications for individuals and organizations alike. It could help bridge the gap between reality and future possibility in another key area, too: Rural connectivity.
“The business need for MEC continues to evolve as technology and IoT requires more compute closer to the field, specifically in the agriculture, wind energy, and oil and gas industries, often located in rural areas,” Shultz says. “Increased usage of sensors for the collection of massive amounts of data in locations with limited high-speed connectivity will drive the demand for MEC to allow for spot analysis and aggregated return of the data.”
In this manner, MEC could even have an equalizing effect: The promise of IoT, AI, 5G, and other emergent technologies need not be the sole purview of massive tech companies or urban/suburban residents.
“MEC not only provides flexibility to the user, but it also provides a new revenue stream for the cell tower owner – a win-win which could enable more AI, data analysis, and aggregation business opportunities in rural America,” Shultz says.
[ Want to learn more about edge and data-intensive applications? Get the details on how to build and manage data-intensive intelligent applications in a hybrid cloud blueprint. ]
Subscribe to our weekly newsletter.
Keep up with the latest advice and insights from CIOs and IT leaders.
|
<urn:uuid:d4772726-6fe6-4f14-89ef-49a00d6c4064>
|
CC-MAIN-2022-40
|
https://enterprisersproject.com/article/2021/2/what-mobile-edge-computing-mec
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00037.warc.gz
|
en
| 0.934117 | 1,791 | 2.734375 | 3 |
Ransomware can wreak complete havoc across your systems, compromising key data and files, and leaving your network at the mercy of the attacker. New types of ransomware are constantly cropping up, making it difficult to keep on top of, let alone protect against, the latest strains. Although new variants present different characteristics, fundamentally all ransomware leverages similar techniques and operates with the same aim: extracting a ransom payment from the victim by holding their key data or devices ‘hostage’.
What is Ransomware?
It’s a type of malware attack that holds a victim’s data or device ‘hostage’, preventing access until a ransom payment is made or the perpetrator’s demands are met.
In the first instance, attackers often employ social engineering techniques to gain access to their victim’s systems before unleashing their ransomware attack.
How does Ransomware spread?
Ransomware can spread into systems through a variety of different infiltration methods, including:
- Remote desktop protocols whereby the ransomware infiltrates through exposed remote desktop software connections.
- Malicious URLs where the victim visits a compromised website via the malicious URL
- Malvertising where perpetrators use online advertising as a vehicle for spreading ransomware
- Drive-by-downloads where the victim unintentionally or unknowingly downloads ransomware onto their device
- Email attachments where the user opens a malicious phishing email with a dangerous attachment
6 Common types of ransomware
Crypto ransomware or encryptors
Crypto ransomware locks you out of crucial files and data on your computer by encrypting them. You’re still able to see the files that are stored on your device, but you are unable to access them. In order to recover your files, ransomware perpetrators demand a ransom payment.
Locker ransomware
Rather than targeting specific files, locker ransomware locks you out of your entire device, meaning you are unable to use it or access any information stored on it. The attacker will demand a ransom payment in order to unlock your device.
Scareware
Scareware in itself is not malicious; instead it is a front that persuades the victim to inadvertently download ransomware onto their device. Scareware convinces its victims that their device has already been infected with a virus and that they need to download software to clean it. The software they then download contains the ransomware.
Doxware
Doxware attacks threaten a victim with the release of their personal information or data unless they pay a ransom. The victim is often a private individual, targeted primarily through phishing campaigns.
Leakware
Leakware is a mutation of doxware. It differs from other types of ransomware in that its primary threat is not to hold key information or devices ‘hostage’, but rather to leak confidential or sensitive data. Leakware tends to target high-value victims such as hospitals, financial services, and legal services firms. The attacker then demands a ransom payment in exchange for the promise that they won’t release the data in question.
Ransomware as a Service (RaaS)
RaaS is essentially a partnership in which a ransomware developer sells ready-to-use ransomware attacks to affiliates, who often lack expertise or experience. For obvious reasons, RaaS threatens to make ransomware attacks far more frequent. RaaS can be structured differently, with some developers simply selling their ransomware for a flat one-time fee, whilst others operate on a subscription or profit sharing basis.
Well-known strains of ransomware
Bad Rabbit is encryptor ransomware that aims to encrypt and lock you out of your files. It spreads into devices through what’s known as ‘drive-by attacks’, meaning through credible websites that have been compromised. It appears to the victim as an Adobe Flash update, which downloads malware onto their system.
GoldenEye attacks systems using a two-pronged strategy in which two viruses are downloaded simultaneously, which then encrypt both the system’s data and its file system.
Jigsaw infiltrates a device, encrypts its files, demands a $150 payment within the first hour, and gradually deletes them until the ransom is paid. At the 72nd hour, all remaining files are deleted.
Cerber is a type of RaaS that mass targets Microsoft 365 users, locking them out of their devices and encrypting their data.
TorrentLocker is a type of locker ransomware that infiltrates systems via spam emails.
How to protect against Ransomware attacks
Ransomware attacks are one of the most critical threats that MSPs face. Preparation and mitigation are absolutely essential to protect the systems that you are responsible for managing and monitoring. An effective security infrastructure against ransomware attacks combines effective management of systems and people:
- Making use of the most up-to-date and watertight security software
- Thorough preparation: designing a reliable and robust disaster recovery plan that has been tried, tested, and refined.
- Continual employee training and education, especially around social engineering techniques.
- Keeping updated and encrypted offline backups
- Effective patch management and frequent system updates
- Implementing safe IT practices, such as healthy skepticism around suspicious email attachments or URLs
Researchers call for 'improved training in nuclear materials security and enhanced end-user accountability.'
Negligence was involved in all 73 incidents last year in which radioactive substances were reported missing, concludes a new expert report on nuclear trafficking.
The finding, from the James Martin Center for Nonproliferation Studies, suggests there is much work yet to be done in international efforts to improve security around radiological substances that might be seized by terrorists and used to construct a so-called "dirty bomb." This type of device could combine radiological materials and explosives to contaminate populated areas.
The study, published on Wednesday, examined incidents in which both atomic and non-nuclear radioactive materials went unaccounted for. Of the 153 documented incidents last year, 92 percent involved non-nuclear radioactive substances utilized in the medical and industrial fields, according to a summary of the report's findings.
"Few incidents involved the most dangerous materials, and none were reported to have involved material that was nuclear weapons-usable in form or quantity," the summary states.
To reduce the prospects of future incidents stemming from negligence, the report recommends "improved training in nuclear materials security and enhanced end-user accountability."
Leaders from 53 nations are gathering in The Hague, Netherlands, on Monday and Tuesday to review the current status of global efforts to better lock down vulnerable radioactive and nuclear materials. Some experts have criticized the biennial Nuclear Security Summit process -- which began with President Obama hosting the first such gathering in 2010 -- for focusing too much on atomic substances at the expense of radiological sources.
While a nuclear terrorism attack could result in a much greater loss of life than a radiological strike, most analysts agree it would be easier for extremists to acquire the ingredients they need to build a radiological dirty bomb than get a hold of a nuclear weapon.
The Center for Nonproliferation Studies analysis relied on a database it built that collected information drawn from foreign regulatory agencies, specialized Internet search engines and international news reports. It is separate from a database kept by the United Nations' International Atomic Energy Agency, which also tracks incidents of lost or stolen plutonium, uranium and other radiological sources.
The U.N. nuclear watchdog documented roughly 140 incidents last year of lost or unauthorized utilization of atomic and radioactive substances, Reuters reported on Friday. It is not clear if the IAEA database and the CNS database were using different methodology for collecting or assessing information.
NumPy supports many statistical distributions. This means it can generate samples for a wide variety of use cases. For example, NumPy can help to statistically predict:
- The chances of rolling a 7 (i.e., winning) in a game of dice
- How likely someone is to get run over by a car
- How likely it is that your car will break down
- How many people will be in line at the checkout counter
We explain by way of examples.
Randomness & the real world
The NumPy functions don’t calculate probability. Instead they draw samples from the probability distribution of the statistic—resulting in a curve. The curve can be steep and narrow, wide and flat, or one that decays to a small value quickly over time.
Its pattern varies by the type of statistic:
Most phenomena in the real world are truly random. For example, if we toss out nearsightedness, clumsiness, and absent-mindedness, then the chance that someone would get hit by a car is equal for all people.
The normal distribution reflects this.
When you use a random sampling function in a programming language, you are asking it to pick from a given probability distribution. With the normal distribution, samples will tend to hover about some middle point, known as the mean. And the volatility of observations is called the variance. As the name suggests, if the values vary a lot then the variance is large.
Let’s look at these distributions.
The arguments for the normal distribution are:
- loc is the mean
- scale is the square root of the variance, i.e. the standard deviation
- size is the sample size or the number of trials. 400 means to generate 400 random numbers. We write (400,) but could have written 400; the tuple form shows that the output shape can have more than one dimension. Here we are just picking a flat array of numbers, not a matrix or any higher-dimensional array.
import numpy as np
import matplotlib.pyplot as plt

arr = np.random.normal(loc=0, scale=1, size=(400,))
plt.plot(arr)
Notice in this that the numbers hover about the mean, 0:
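As a quick sanity check (an illustrative addition, not part of the original example), you can confirm that the sample statistics land near the parameters you asked for:

import numpy as np

arr = np.random.normal(loc=0, scale=1, size=(400,))

# The sample mean should land near loc=0 and the sample standard
# deviation near scale=1; both drift a little from run to run.
print(np.mean(arr))
print(np.std(arr))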
Weibull is most often used in preventive maintenance applications. It’s basically the failure rate over time. In terms of machines like truck components, this is called Time to Failure. Manufacturers publish these figures for planning purposes.
A Weibull distribution has a shape and scale parameter. Continuing with the truck example:
- Shape is how quickly over time the component is likely to fail, or the steepness of the curve.
- NumPy’s weibull function does not take a scale argument. Instead, you simply multiply the Weibull samples by the scale to get the scaled distribution (see the example below the histogram).
import numpy as np
import matplotlib.pyplot as plt

shape = 5
arr = np.random.weibull(shape, 400)
plt.hist(arr)
This histogram shows the count of unique observations, or frequency distribution:
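To apply a scale, the multiplication mentioned above is all it takes. In this sketch, the scale of 1,000 hours is a made-up characteristic life chosen only for illustration:

import numpy as np

shape = 5
scale = 1000  # hypothetical characteristic life in hours

# np.random.weibull draws with scale 1; multiplying stretches the
# samples into the desired units.
failure_times = scale * np.random.weibull(shape, 400)

# The sample mean should come out near scale * gamma(1 + 1/shape),
# roughly 918 hours for shape=5.
print(failure_times.mean())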
Poisson gives the probability of a given number of people being in line over a period of time.
For example, the length of a queue in a supermarket is governed by the Poisson distribution. If you know that, then you can continue shopping until the line gets shorter and not wait around. That’s because the line length varies, and varies a lot, over time. It’s not the same length all day. So, go shopping or wander the store instead of waiting in the queue.
import numpy as np
import matplotlib.pyplot as plt

arr = np.random.poisson(2, 400)
plt.plot(arr)
Here we see the line length varies between 0 and 8. Remember that the poisson function does not return a probability; it returns observations, meaning it picks numbers subject to the Poisson distribution curve.
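One illustrative extension (my own, not from the original tutorial) is to ask how often the queue is empty, and compare that against the exact Poisson value for a mean of 2:

import numpy as np

arr = np.random.poisson(2, 400)

# Fraction of observations with nobody in line, versus the
# theoretical probability P(X = 0) = e**-2 for a Poisson mean of 2.
print((arr == 0).mean())
print(np.exp(-2))  # about 0.135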
Binomial describes discrete outcomes, like rolling dice.
Let’s look at the game of craps. You roll two dice, and you win when you get a 7. You can get a 7 with these rolls:
1+6, 2+5, 3+4, 4+3, 5+2, and 6+1. So, there are six ways to win. There are 6*6 = 36 possible outcomes. So, the chance of winning is 6/36 = 1/6.
To simulate 400 rolls of the dice, use:
import numpy as np
import matplotlib.pyplot as plt

arr = np.random.binomial(36, 1/6, 400)
plt.hist(arr)
Each of the 400 trials simulates 36 rolls, so the histogram shows the number of wins per trial clustering around the expected value of 6 (that is, 36 x 1/6), with extreme counts such as 2 appearing only a handful of times.
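If you would rather simulate the dice directly than model wins with the binomial function, a minimal sketch (an addition for illustration, not from the original) looks like this; the fraction of winning rolls should hover near 1/6:

import numpy as np

# Roll two six-sided dice 400 times; randint's upper bound is exclusive.
rolls = np.random.randint(1, 7, size=(400, 2))

# A roll wins when the two dice sum to 7.
wins = rolls.sum(axis=1) == 7
print(wins.mean())  # should land near 1/6, about 0.167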
The uniform distribution varies with equal probability between a low and a high bound.
import numpy as np
import matplotlib.pyplot as plt

arr = np.random.uniform(-1, 0, 1000)
plt.hist(arr)
Coronavirus and COVID-19 have people around the world rethinking their hygiene habits. In 2018, a CDC study revealed only 31% of men and 65% of women wash their hands after using the restroom. Flash forward to today and you’ll be lucky to find any soap left in bathroom soap dispensers.
This social shift has also alerted us to another dangerous vector for germs: our smartphones. Since the outbreak, more people are taking the time to sanitize their devices, which are undoubtedly among the items we touch most each day. Tap or click here to see how to clean your phone.
Smartphones aren’t the only frequently-touched items that need to stay clean. Smartwatches are also exposed to coronavirus particles. Here’s how you can keep yours clean and germ-free.
A vector for disease?
Smartphones and smartwatches are handled throughout the day with ordinary use. This means you transfer germs every time you touch them.
The National Institutes of Health recently found the SARS-COV2 virus, which causes COVID-19, can cling to surfaces for a longer period of time than originally thought. The virus can be found in aerosol particles for up to three hours, can stick to cardboard for up to 24 hours and can cling to plastic and stainless steel for up to three days.
Knowing this, it’s logical to conclude that glass is an equally dangerous surface. According to a USA today interview with Dr. Blanca Lizaola-Mayo, an internal doctor for the Scottsdale, Arizona Mayo clinic, screens provide an easy way for viruses to “reinfect” your hands after you’ve washed them.
She added that “you need everything possible to be as clean as possible, including your watch and Fitbit.”
What can I do about it?
Just like with smartphones, you’ll need to use chemical sanitizers like Clorox wipes, alcohol-based sanitizers or soap to disinfect screens. These chemicals destroy the fatty envelope that surrounds the coronavirus and kill germs before they can cause any harm.
Even big companies like Apple have changed their tune on disinfecting screens during this time of crisis.
While Apple used to advise against using bleach-based cleaners and similar agents on its products’ screens, it’s now advising customers use them to stay safe. Tap or click here to see a list of chemicals you can use to disinfect your devices.
To clean your device, the first thing you should do is make sure your hands are clean. Clip long fingernails and wash your hands with soap and warm water for a minimum of 20 seconds. This prevents your hands from transferring additional germs and bacteria.
Next, you’ll want to gently clean your smartwatch screen with a sanitizing wipe or even a soft-bristled toothbrush with rubbing alcohol. Avoid any holes or openings on the product, since this can harm your device.
Just make sure to avoid touching the wristband with harsh chemicals, as this can cause discoloration. That said, it will still clean the band, so the choice is yours. Metal bands should be cleaned the same way as screens.
Once you’re finished, dry off your watch with a soft, sterile cloth. After you wash your hands up to the wrist, just above where the watch rests, you can put your tech back on.
As you can see, keeping your gadgets clean isn’t all that difficult. All it takes is knowing the right steps to sanitize your tech properly. And by doing so, you’re not just protecting yourself, you’re protecting others who may be at greater risk if they’re exposed to the virus.
I guess that means your life isn’t the only one your smartwatch could save. Tap or click here to see how a smartwatch saved a man from heart failure.
Extortion scams are increasing in frequency and sophistication. The criminal contacts potential victims by email with a threat or claims to have compromising information that will be released to the public if the victim does not pay to keep it quiet. As 'proof' that the criminal has access to this material, the email includes sensitive information that only the victim should know, such as passwords. These attacks are becoming a new form of ransomware.
Attackers harvest stolen email addresses and passwords from past data breaches and use them in threatening email messages to add to the victims’ fears. They will either spoof the victim’s email address pretending to have access to it or claim to have personal or compromising information that they will use against the victim. Each email will contain payment demands with Bitcoin wallet details included inside the message.
Usually, extortion email messages are part of larger spam campaigns and are sent out to thousands at a time. Most of these emails will be caught in spam filters. However, like with many other types of email fraud, scammers are evolving their techniques to bypass email security and land in users’ inboxes. These attacks are becoming personalized and get sent out in smaller numbers to avoid detection. Attackers will use reputable email services like Gmail, vary and personalize the content of each message, and avoid including links or attachments — all in an effort to slip through security.
Extortion makes up about 7% of spear-phishing attacks, the same percentage as business email compromise. Employees are just as likely to be targeted in a blackmail scam as a business email compromise attack.
According to the FBI, the cost of extortion attacks was more than $107 million in 2019. On average, attackers ask for a few hundred or a few thousand dollars, an amount that an individual would likely be able to pay. Due to the large volume of attacks, the small payments add up substantially for attackers.
Extortion scams are under-reported due to the intentionally embarrassing and sensitive nature of the threats. IT teams are often unaware of these attacks because employees don’t report the emails, regardless of whether they pay the ransom.
There are a number of steps you can take to protect your users against extortion:
AI-based protection — Attackers are adapting extortion emails to bypass email gateways and spam filters, so a good spear-phishing solution that protects against extortion is a must. Artificial intelligence-based protection can identify attacks based on what normal communication looks like, including the tone of voice used by individuals. This allows it to recognize the unusual and threatening tone of extortion attacks, in combination with other signals, to flag it as malicious email.
Account-takeover protection — Some extortion attacks originate from compromised accounts. Be sure scammers aren’t using your organization as a launchpad for these attacks. Deploy technology that uses artificial intelligence to recognize when accounts have been compromised and used in fraudulent activities.
Multi-factor authentication — With multi-factor authentication (MFA) apps and hardware-based tokens, hackers will need more than just a password to access your accounts. While non-hardware-based MFA solutions remain susceptible to phishing, they can help limit and curtail an attacker’s access to compromised accounts.
Proactive investigations — Conduct regular searches on delivered mail to detect emails related to extortion. Search for terms like ‘Bitcoin’ to identify potential attacks. Many extortion emails originate from outside North America or Western Europe, so evaluate where your delivered mail is coming from, review any of suspicious origin, and remediate. Deploy technology that will automate threat hunting and remediation to stay ahead of hackers. (A minimal scripted sketch of this kind of keyword sweep appears after this list.)
Security awareness training — Educate users about extortion fraud. Make it part of your security awareness training program. Ensure your staff can recognize these attacks, understand their fraudulent nature, and feel comfortable reporting them. Use phishing simulation technology to test the effectiveness of your training and evaluate the users most vulnerable to extortion attacks.
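Picking up the proactive-investigations point above: as a rough illustration of what a keyword sweep over delivered mail could look like, here is a minimal sketch. The mbox path and keyword list are placeholder assumptions, and this is generic Python rather than a feature of any particular security product:

import mailbox

KEYWORDS = ("bitcoin", "btc", "wallet")  # hypothetical indicator terms

# Scan a hypothetical export of delivered mail in mbox format and
# flag messages whose body mentions any of the indicator terms.
for message in mailbox.mbox("delivered-mail.mbox"):
    payload = message.get_payload(decode=True) or b""
    text = payload.decode("utf-8", errors="ignore").lower()
    if any(keyword in text for keyword in KEYWORDS):
        print(message["From"], "-", message["Subject"])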
How Barracuda Can Help
Barracuda Email Protection is a comprehensive, easy-to-use solution that delivers gateway defense, API-based impersonation and phishing protection, incident response, data protection, compliance and user awareness training. Its capabilities can prevent extortion attacks:
Barracuda Impersonation Protection is an API-based inbox defense solution that protects against business email compromise, account takeover, spear phishing, and other cyber fraud. It combines artificial intelligence and deep integration with Microsoft Office 365 into a comprehensive cloud-based solution.
It’s unique API-based architecture lets the AI engine study historical email and learn users’ unique communication patterns. It blocks phishing attacks that harvest credentials and lead to account takeover, and it provides remediation in real time.
Barracuda Security Awareness Training is an email security awareness and phishing simulation solution designed to protect your organization against targeted phishing attacks. Security Awareness Training trains employees to understand the latest social-engineering phishing techniques, recognize subtle phishing clues, and prevent email fraud, data loss, and brand damage. Security Awareness Training transforms employees from a potential email security risk to a powerful line of defense against damaging phishing attacks.
Barracuda Incident Response automates incident response and provides remediation options to address issues faster and more efficiently. Admins can send alerts to impacted users and quarantine malicious email directly from their inboxes with a couple of clicks. Discovery and threat insights provided by the Incident Response platform help to identify anomalies in delivered email, providing more proactive ways to detect email threats.
How can decentralised storage models prevent massive data breaches?
Is India's biometric database a massive achievement or a dystopian nightmare? Can blockchain technology transform the security industry?
How can decentralised storage models prevent massive data breaches? The Equifax hack has been cited as very possibly the worst leak of personal info ever due to the "breath-taking amount of highly sensitive data it handed over to criminals". As a consequence, "more than half of all US residents who rely the most on bank loans and credit cards" will be at a "significantly higher risk of fraud" for years to come. 😱
Writing for The Guardian, Jathan Sadowski provides an EXCELLENT overview of the "fundamental problem of the data economy as a whole: databanks like Equifax are too big":
As epic as Equifax’s hack was, things can get a lot worse. The credit reporting agencies Experian and TransUnion are data giants on par with Equifax and there are thousands of other data brokers that also possess large databanks. Data breaches like this one are not bugs, but rather features of a system that centralises immense amounts of valuable personal data in one place.
The vaults of these databanks are impossible to secure, in large part, because the wealth of information they hold is a beacon for hackers. Even the most impenetrable cybersecurity will eventually fail under the pressure of dogged hackers probing for weaknesses to exploit. Better cybersecurity is important, but it is not a solution. It only postpones catastrophic failure.
KEY TAKEAWAY 🔮 - The future of data storage is decentralised models and the US needs to introduce stronger data protection laws - Equifax would have be fined 4% of global annual turnover under the GDPR. 💸 This is a breach no one should get away with.
Is India's biometric database a massive achievement or a dystopian nightmare? ⚖ Writing for VICE News, David Gilbert explores India’s controversial Aadhaar database, which sought to give an identity to the 400 million people in India who "did not exist in the eyes of the government". 👀 However:
What is emerging is that [Aadhaar] is being used to create a panopticon, a centralised database that’s linked to every aspect of our lives — finances, travel, birth, deaths, marriage, education, employment, health, etc.
Recently, the Indian government have also been granting private companies access to the system:
Microsoft, for example, already taps into the database to confirm the identity of people using a version of Skype designed specifically for the Indian market. And Airbnb confirmed to VICE News that it is looking into Aadhaar as a potential option for verifying hosts.
This move has been heavily criticised for increasing the government's spying abilities and for letting "private companies profit off valuable personal information". 🙀
CRUCIALLY, Gilbert emphasises how Aadhaar will be hacked - it's just a matter of time. ⏰ The government insists the data centre is "robust and uncompromised", but, similarly to Equifax, by "putting an entire country's information in one place, they've made one massive target for hackers". 🙈
Side note - intriguing article by Ron Miller examining the "promise of managing identity on the blockchain". ✅
Can blockchain technology transform the security industry? ⚔ Charlie Osborne discusses how, if implemented correctly, decentralised technology can improve security solutions - as there is "no central holding system":
In an age when trust in systems is critical, we may yet see the blockchain integrated into systems which handle sensitive data and financial transactions, or control IoT and mobile devices. The technology may also provide a trustworthy infrastructure for vendors to better retain control of enterprise networks, who does what on them, and as a means to tackle weak spots in security protocols.
🙌 Cognitive Logic has rebranded to InfoSum - check out our new site.
💰 Inside the store that only accepts personal data as currency.
📊 How to spot visualisation lies.
🚊 London underground wifi tracking - neat report.
🤖 Facebook AI learns human reactions after watching hours of Skype.
🏥 DeepMind: Working towards a Verifiable Data Audit to build trust.
✨ Visualising halt times of passenger trains at stations in Mumbai:
by Manasi Mankad
When creating a presentation using Google Slides, you may want to add quite a few elements beyond blocks of text to make your point. Graphics and photographs are important to creating a successful presentation.
A newer type of element that has become extremely popular for presentations is video. Adding video clips to explain a tricky point or to demonstrate the steps required to complete a task can be a fun way to break up the presentation while adding extremely important information.
Adding a Google Slides video is not an overly daunting task, but it can be a little tricky for the newcomer. You’ll need to have the video stored in the correct place before you insert it, for example. We’ll provide some tips to help you add a Google Slides video to your presentation.
How to Add a Video From YouTube
You will add a Google Slides video from YouTube by embedding it into the presentation. Start by opening your Google Slides presentation and open the slide where you want to add the video.
Select the Insert menu and click the Video command. You’ll see an Insert Video window that has three tabs across the top. We’ll discuss the two tabs that deal with YouTube videos here.
- Search: The left tab allows you to search through YouTube’s library of videos for a clip that you can add to your presentation. YouTube will return a list of videos related to your search criteria. When you find the video clip in the list that you want to use, click on it to highlight it and then click on the Select button in the lower left corner.
- By URL: If you already know the URL address for the YouTube video that you want to add to the slides, you can click on the By URL tab. Then paste the URL for the video into the text box. Google Slides will create a video preview window, so you can be certain that you have the exact video you want. Then click Select to insert it into the slide.
Adjusting the Size and Position
Once you have the video box on the slide in the presentation, you can click on and drag it to move it to any position on the slide. You also can drag the edges of the video box to make it larger or smaller on your Google Slides video page.
You will not have the ability to crop the video box’s dimensions inside the presentation, though, so you may want to insert the video box first, resize it, and place it on the page. Then, because the dimensions of the video box are locked in place, add your other elements to the slide around the video box.
How to Add a Video to Google Slides From Google Drive
If you want to add a video to the presentation that is not available on YouTube, you will need to have a copy of it stored in your Google Drive account. If the video is only stored on your computer’s hard drive currently, you will need to add a copy of it to your Google Drive account in the cloud.
To upload a video to Google Drive, click the New menu, followed by File Upload. Find the video stored on your computer’s hard drive. Highlight the file name and click Open to start the upload process. (Make sure you have enough free space in your Google Drive account to store the video, as video files can require quite a bit of storage space.)
Using Google Drive to Insert a Video
With your video file in your Google Drive account, you can take the steps to insert it into your presentation. Start by opening the slide in your Google Slides presentation where you want to add the video.
Click the Insert menu and the Video command. In the Insert Video window, click on the Google Drive tab on the right.
Once you click Google Drive, you’ll see some new tabs along the top of the Insert Video window. You can use the search box at the top of the window to look through your Google Drive account for the exact video or use the tabs to find the video you want to use. Once you find the video file, highlight it. Then click the Select button in the lower left corner of the window.
After adding the video box to your slide, you then can resize it by dragging the corner of the video box, or you can drag the entire box to a new location on the slide.
How to Play a Video in Google Slides
Google Slides allows you to select from a few different options for playing the video as you are running your presentation. To see your potential choices, click on and highlight the video that you’ve added to the slide. Along the right side of the screen, you’ll see a series of control options appear underneath Format Options.
Options for Video Playback
Underneath the video preview box in the Video Playback section of Format Options, you’ll see a drop-down menu from which you can pick one of three options for how you want to play the video when you are running the actual presentation.
- Play (on click): When you select to play the video on click, it means that the video will begin playing as soon as you click anywhere on the slide during the presentation. You don’t have to click directly on the video box to start playing the video.
- Play (automatically): To play the video as soon as the slide appears on the screen, use this option. This is the most convenient option, as it keeps your presentation moving along smoothly. However, you may want to have a second or two of silence or music at the start of the video so those watching the presentation aren’t caught off-guard by the sudden playing of the video as soon as the new slide appears.
- Play (manual): Selecting the manual option means you must click on the Play button on the video inside the presentation before it will begin playing.
Editing the Video Start and End Times
With the video highlighted in your slide, you also have an option to choose to start and end the video at a certain time, rather than playing the entire video. (This option will appear just below the playback drop-down menu that we just discussed.)
Google Slides will automatically set the time in the Start At box at 0:00 and in the End At box at the time for the full length of the video. If your video is 1 minute long, the End At box will be set at 1:00, for example.
You then can change the times in these boxes to alter how much of the video plays during your presentation. This can be helpful if you only want to play a short segment of a video that you have embedded from YouTube. For example, if you only want to play the middle 10 seconds of your 1 minute video, you could set the Start At box at 0:25 and the End At box at 0:35.
Another option is to click on and play the video in the preview window under Video Playback. Pause it at the position where you want the video to start playing in your presentation. Then click Use Current Time underneath the Start At box, and Google Slides will automatically enter the current time into the box for you. Find the ending position you want to use in the video and click on the Use Current Time text underneath the End At box.
Muting the Audio
There may be times where you only want the video to play without the accompanying audio. You then can talk over the video during your presentation, disseminating the information that you want, rather than playing the audio recording associated with the video. (This can be especially helpful if you are using an embedded YouTube video, where the audio recording may have little to do with your specific presentation.)
Under the Video Playback section of Format Options, click the Mute Audio checkbox to highlight it, and Google Slides will not play the audio associated with the Google Slides video.
The Best Way to Record a Video for Google Slides
If you would like to create your own video for your Google Slides presentation, this is a relatively easy process. You can record the video all by yourself, or you can use others to help you record a video.
Using a Webcam
When you want to record yourself in the video, a webcam makes this an easy process. You can attach the webcam to your computer. As it records the video, you will control the webcam through the software that’s on your computer screen.
Some webcams have an automatic upload feature that will record the video directly into your YouTube account. You then can perform editing functions within the YouTube software before embedding the YouTube video into your Google Slides presentation (as we described earlier).
You also could save the webcam video to your computer’s hard drive and upload it to YouTube or Google Drive later before embedding it into your Google Slides presentation.
Using a Digital Camera
If you already own a digital camera or camcorder, you can record your video onto a memory card that’s inside the camera. Then copy the video to your computer’s hard drive before choosing whether you want to upload it to YouTube or Google Drive. From there, insert it into Google Slides.
Almost any still image digital camera released in the past several years can record videos of excellent quality, so this is a great option. Consider placing the camera onto a tripod, so you can set up the perfect angle from the camera to your position. Set up good lighting, and you can use this camera to make a professional looking video all on your own.
A digital camera also works well if you are recording a video of other people, rather than of yourself, as you can hold the camera to do the recording. Most digital cameras allow you to connect a microphone to the camera, so you can record high quality audio as well.
Using a Mobile Device
One of the easiest ways to record a Google Slides video is to use the device you may have in your pocket or in your hand right now: A smartphone or tablet. You may already be familiar with how the video recording works with the smartphone or tablet, so you can make a quick video in almost no time.
YouTube has a mobile app that you can use to record and upload videos directly from your smartphone or tablet into YouTube as you’re recording them. You can then edit the videos inside the YouTube software before adding them to your presentation.
The video quality may not end up being as good as with a digital camera or webcam when you use a mobile device, but this may not be important for your presentation.
Hiring a Professional
Finally, if you have the budget for hiring a videographer to help with recording the video for your presentation, you will be able to make an extremely high quality video that will have the appropriate lighting and audio quality.
You can be certain the professional will have the equipment on hand to create an excellent video. You just have to decide whether undertaking the cost of this option is worth it for your presentation.
One of the major steps to getting a new computer is transferring existing files to it. For Apple products, this is simpler as Macs offer the opportunity to do so from the startup. This isn’t the case with Windows computers however.
Instead, transferring data from PC to PC has to do be done through some extra steps. These extra steps aren’t difficult to perform thankfully, though some are better than others. From the quick-and-easy solutions to the best ones, here are some ways to transfer files.
1. Using An External Hard Drive
The easiest and most obvious option is through external hard drives. They’re able to contain large amounts of data and can even be used to back up a computer. For those using it for that purpose, the work is easier. Plug the hard drive into the other computer and copy the necessary files from there to the computer.
Alternatively, there is USB flash drives which can be helpful for those only needing to transfer a few files.
One other method that’s even faster is through the use of eSATA. First, check to see if the computer has an eSATA port or an available SATA slot. If that’s the case, when connecting the USB or hard drive into the new computer, there will be another drive appearing on the new PC.
Transferring files over SATA is faster than using a USB drive.
2. Sharing Over LAN or Wi-Fi
When computers are close to one another, there are some alternative methods for sharing files and folders beyond hard drives and USBs. These alternatives being through local area networks (LAN) or to use software to transfer files through Wi-Fi.
All major operating systems have built-in options for users to set up a home network. What this does is let devices on the same router – whether connected via Ethernet or Wi-Fi – permanently recognize one another.
In cases of transferring files between computers, this means that once there is a connection between two computers, that connection will always be there, allowing people to transfer files between both at any time.
This connection works between Windows PCs and Macs, in either direction, and it also works between computers with the same operating system. For those on Linux, the menu system will differ by distribution. However, once at the network settings, it’ll have a similar setup to setting up a home network on MacOS.
Similar to LAN, using Wi-Fi to transfer files requires the computers to be using the same Wi-Fi network. From there, software is needed to move files between computers. This method is ideal for temporary networks. An ideal software for this task is Send Anywhere.
Send Anywhere is an app that’s compatible with Windows, Mac, and Linux. There is even a web app and a Chrome extension too. The setup is also clean and simple and it’s free to use. Transfer files between computers or even send files to phones and tablets.
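For a quick, one-off transfer over the same network with nothing extra to install, one lightweight option is Python’s built-in web server, assuming both machines are on the same Wi-Fi or LAN and the sharing machine has Python 3. This is a generic sketch rather than a vendor solution:

# Run this on the machine that has the files, from inside the folder
# you want to share, then browse to http://<that-machine's-IP>:8000
# from the other computer to download them.
import http.server
import socketserver

PORT = 8000  # any free port works

handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"Serving on port {PORT}; press Ctrl+C to stop")
    httpd.serve_forever()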
3. Transfer Cable
Similar to transferring files via USB or hard drives, this method involves a USB bridging cable or a USB networking cable to work. This method is significantly faster in transferring files because it removes the need for users to copy and paste specific files between computers. This transfer process happens automatically.
It’s also better than external hard drives as the method requires users to use three drives to transfer files. Transfer cables reduce that to two drives.
Between Windows Computers
When transferring between two Windows computers, plug the USB cable on both computers. Wait a bit until both computers recognize the cable. Once they do, they automatically install drivers.
Once the USB cable’s driver is installed, download and install data transfer software for both computers. Launch the transfer app on both computers and the transferring process will start.
Between Mac Computers
Macs have their own unique cable called the Thunderbolt cable. Once that is connected, both computers will detect each other. Transferring files from there is as simple as dragging and dropping them between both systems.
Between Different Computers
Whether it’s Windows, Mac, or Linux, the transferring between cables can still be done but requires more setup. This setup involves getting an Ethernet cable and building a LAN without a router. The Ethernet cable should also be a crossover Ethernet cable – the cable will have different colour patterns on both ends. After that is plugged in, set up network sharing on both computers and the transfer process will start.
4. Connecting HDD or SSD Manually
In some cases, users may be transferring files from an older computer to a more up to date one. This happens when the old computer is no longer functional, or there is a need to install a new hard drive to replace the old one.
When dealing with Hard disk drives (HDD) and Solid State Drives (SSD), transferring files between them can be done through standard SATA cables to connect to the motherboard. Users will need a spare SATA or eSATA port to connect the old hard drive to as well.
Once that’s done, the operating system will recognize it as a new drive and then start transferring. This is the fastest method of all options on this list.
5. Cloud Storage Or Web Transfers
The last option is simply using the internet. More and more users are moving to cloud storage for protection and to save files in general. As a result, this is the easiest way to sync files between computers.
The issue with that is it’ll take more time than the other methods. We’re talking anywhere from several minutes to several days. Where a transfer falls in that range depends on the quality of the internet connection and how many files are being transferred.
Cloud storage providers are in no short supply either. Dropbox, Google Drive, or OneDrive are the first few that come to mind.
Cloud drives are a good option as well as size of the files are virtually unlimited. The only limitation would be the storage space of the computer files are being transferred to. What can also be nice is if both computers are syncing folders locally, then as soon as one uploads files, the other will download those files at the same time.
Any Method Will Work To Transfer Files
Any of the methods mentioned above can allow people to transfer files across any kind of computer without any issue. The only thing to keep in mind is what method is appropriate. When moving a lot of data, it’s better to use a wired connection between computers for security and speed. If it’s only a few gigabytes of data – or megabytes – then a wireless option will work or a traditional USB drive.
Breakthrough in quantum sensing by Northeastern University researchers provides new material to make qubits
(Phys.org) Atomic defects in certain solid crystals may be key to unleashing the potential of the quantum revolution, according to new discoveries by researchers at Northeastern University. The defects are essentially irregularities in the way that atoms are arranged to form crystalline structures. Those irregularities could provide the physical conditions to host something called a quantum bit, or qubit for short—a foundational building block for quantum technologies, says Arun Bansil, university distinguished professor in the Department of Physics at Northeastern.
Bansil and colleagues found that defects in a certain class of materials, specifically two-dimensional transition metal dichalcogenides, contained the atomic properties conducive to make qubits. Bansil says the findings, which are described in a study published in Nature Communications, amount to something of a breakthrough, particularly in quantum sensing, and may help accelerate the pace of technological change.
“If we can learn how to create qubits in this two-dimensional matrix, that is a big, big deal,” Bansil says.
Using advanced computations, Bansil and his colleagues sifted through hundreds of different material combinations to find those capable of hosting a qubit.
The key finding of the study is that the so-called “antisite” defect in films of the two-dimensional transition metal dichalcogenides carries something called “spin” with it. Spin, a form of intrinsic angular momentum, describes a fundamental property of electrons defined in one of two potential states: up or down, Bansil says.
The challenge for researchers has been how to find qubits that are stable enough to use, given the difficulties in finding the precise atomic conditions under which they can be materially realized.
“The current qubits available—especially those involved in quantum computing—all operate at very low temperatures, making them incredibly fragile,” Bansil says. That’s why the discovery of transition metal dichalcogenides’ defects holds such promise, he adds.
14 years ago
made its major public debut at the New York World's Fair in 1964,
located in Flushing Meadows Park in Queens. The Bell System provided
about 7,000 pay phones for the fair grounds of new Touch Tone
telephones in futuristic swirl phone booths. (The booths were
spacious and had thick dividers for good acoustics. However, they
were open at the top. A simple canopy gave some rain protection, but
not as much as the traditional phone booth did.)
I believe direct distance dialing (via TSP) was also available from
fair coin telephones.
Unlike many other enterprises involved with the fair, New York
Telephone charged its normal rates. Local calls from the fair were a
dime, like everywhere else.
Part of the publicity was providing the fair president, Robert Moses,
with a Touch Tone phone for his desk*.
This debut followed successful trials and offerings of service in the
towns of Carnegie and Greensburg, Pennsylvania in November of 1963.
In April 1964, residents of Queens, New York, where the fair was held,
could order Touch Tone for their homes. The rate was $1.90 per
residential line per month. (Pennsylvania subscribers could get it
for $1.50 per month).
Everything didn't go smoothly. In April 1964, before the fair opened,
there were problems with pilferage from exhibits. Bell Telephone
claimed 3,500 of its 6,500 phones were vandalized even though "the
phones won't work anywhere else". The newspaper report on this wasn't
too clear, but I find it hard to believe that 3,500 pay phones were
stolen or even vandalized. But behind the scenes, things weren't
going well at the Fair; a great deal of dissension.
I wonder how much extra switching capacity and trunking had to be
installed in the Corona-Flushing Central Office to accomodate all the
World's Fair traffic from both exhibitors and patrons. Obviously TT
converters were necessary, but they'd need more capacity overall, plus
long distance trunking and operator services for all the toll calls,
even with TSP.
If anyone attended the 1964-65 fair and visited the Bell Telephone
pavilion, perhaps you could share your experiences. One popular
demonstration at the fair was Picturephone, which is one prediction of
the future that never came to be.
Western Union had an exhibit where one could send a telegram home for
Ref: "The End of Innocence, The 1964-1965 New York World's Fair" by
Lawrence R. Samuel, 2007.
As an aside, do they still have World's Fairs? I never hear of them.
* Moses was a very busy and powerful man, holding numerous
governmental positions, building parks, highways, dams and power
stations, and housing projects. His personal office telephone system
was very simple. He also refused to have a mobile telephone in his
car, preferring to use that time for uninterrupted work.
Have you ever thought about the process that it takes in order to obtain a passport? Get health insurance? Register to vote? In order to gain access all these government benefits (and more) you need to be identifiable to the government. This happens when you register for a national ID card or number, or any mandated identity verification method. These methods vary by country in approach and level of security. Biometrics can help ensure a more secure method to confirm that you are who you say you are in relation to civil identification.
Civil Identification (Civil ID) is the establishment of a unique identity of an individual through the verification, registration, management and conservation of personal data.
This could be a unique identity number, signature, photograph or biometric data—in addition to existing biographic data. By gathering this information, the goal is to establish a unique civil identity in the public sector space for each individual. The purpose of it is for verification and authentication of citizen identities and permission to access to a benefit or service.
Biometric identifiers have been implemented in citizen identification methods in many countries. This helps to combat anonymity and allows citizens to gain access to government services.
Biometrics & Civil ID Around the World
As mentioned above, many countries have mandatory identification systems—some of which have biometric identifiers that authenticate or verify an identity. These may include fingerprints, iris scans, photos of an individual’s face, etc. Biometrics-based Civil ID can be used for more than identification or verification of the identity of individuals when interacting with governments. Some examples of how else these systems are used include for banking services, payment of taxes and voter registration.
Let’s take a look at a few countries that include biometrics in their Civil ID process:
United States of America
In the US, citizens have a Social Security number (SSN) that is administered by the Social Security Administration (SSA). An SSN is a nine-digit number issued to American citizens, and they are also given a card. It is a unique identifier that tracks citizens’ income and determines social security benefits. It is also used to obtain government benefits, when filing taxes, when applying for a passport or driver’s license, and more.
The German parliament has implemented a National ID system that requires fingerprints and biometric images on National IDs.
In 2009, India launched its national ID system, the Aadhaar. This program issues citizens a unique, 12-digit identification number that is linked to their biometric and demographic data (fingerprints and iris scans). Citizens can also use these IDs to conduct various types of government business, like filing taxes.
In Ghana, the national ID card is called the GhanaCard and is issued by the National Identification Authority (NIA) to Ghanaians. The GhanaCard holds personal information about a citizen’s identity, that can be always verified. The NIA uses three types of biometric technology: digitized fingerprints, facial templates and iris scans.
The digital ID for the Philippines is the Phil-ID, and it collects personal information and biometric data. The biometric data collected includes iris scans, fingerprints, and facial images. It allows citizens to access government services such as opening a bank account. The goal of this system is to enroll all citizens by 2022.
Many countries around the world today have implemented biometrics with their national ID cards to improve civil ID practices. Verifying and authenticating an identity with physical characteristics helps the government ensure safety and allow them to access government–type business. This could be filing taxes, applying for a drivers’ license, or registering to vote. Using biometrics not only helps maintain safety but makes government processes more convenient and streamlined — for a better experience of services for its citizens.
Many countries are already tying biometrics into their civil ID process. As governments become able to identify citizens throughout the country (including what types of government business they are doing), the bottom line is that they are trying to ensure citizens’ safety while eliminating fraud and dangerous activity.
If you are considering biometrics-enabled citizen identification, Aware has a full breadth of offerings that can be the solution. Our innovative biometric offerings provide critical support to help ensure national security and protect citizen freedoms and liberties around the world. Contact us to learn more.
The terms information security, cybersecurity and network security are nothing new. You might at least have a slight idea about what they mean, but don’t necessarily have a full grasp on the differences between them. One thing is for sure, though: they all secure business data. But does this mean that they are the same? Not quite. While all these security umbrellas protect information, each one serves a specific purpose. Not only are they instilled in security protocols and cultures, but they are all a part of overarching security practices and cloud computing. Above all, they are all crucial to the safety of your organization.
Below we define what each one is, how they differ and how all three are important to your business protection.
Information Security (InfoSec), secures both physical and digital data. It works off the CIA triad, which stands for maintaining the confidentiality, integrity and availability of IT systems and business information via the following:
- Confidentiality – enforced through encryption and ensures that unauthorized parties cannot access sensitive information.
- Integrity – upholds that data is accurate and trustworthy and ensures that sensitive information cannot be modified by unauthorized parties.
- Availability – ensures that authorized parties can access sensitive information at their leisure and update it when needed.
Information Security handles physical protection as well. It secures tangible items like paperwork from physical theft, unauthorized access, and natural disasters like fires and floods with locks, power supplies and shredders. As a result, Information Security is the responsibility of every employee to uphold. It is important to educate personnel on best practices for information security and have well-known protocols in place for the handling of sensitive information.
Why it’s Important to Your Business:
Although we live in a digitized world, businesses will always need to secure tangible items like employee paperwork, social security numbers and company credit cards. Whether this looks like locking up files with confidential information or shredding credit cards, physical items need protection just as much as digital data. Information Security secures your business information, both physically and digitally.
Cybersecurity is essentially a method of reinforcing Information Security. Cybersecurity works to ensure the proactive protection of your digital information, your network and your company’s endpoints via various Security Solutions. Cybersecurity often utilizes cloud computing, or Security as a Service (SECaaS), to prevent malicious activity from entering your system. Through a cloud service provider, like Cyber Sainik, cybersecurity can protect business networks from all types of cyberattacks, like phishing, smishing, malware, malvertising, ransomware, deepfakes and more.
Why it’s Important to Your Business:
According to Cybersecurity Ventures, a new ransomware attack occurs every 14 seconds, while a single cyberattack can cost an organization as much as $1.6 million, which makes cybersecurity solutions a must-have investment. 97% of cyberattacks could have been prevented and Cloud Security solutions guarantee 99.99% business protection. Not only is cybersecurity affordable, but Security as a Service solutions are completely customizable for your business model.
Finally, Network Security is a branch of cybersecurity. The two solutions are very similar and work simultaneously; but while cybersecurity is the overarching concept of protecting your business as a whole from cyber threats, Network Security specifically focuses on the network and ensuring it is not compromised. Internal security practices like secure Wi-Fi, software updates, password protocols and multi-factor authentication also fall under network security and are essential to a strong cybersecurity culture. Contact us for more information.
|
<urn:uuid:635c51aa-514f-47a1-a3eb-bebfcd67298b>
|
CC-MAIN-2022-40
|
https://cybersainik.com/information-security-vs-cybersecurity-vs-network-security-are-they-the-same/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00437.warc.gz
|
en
| 0.923636 | 727 | 2.90625 | 3 |
What is Ethernet and how does it work? Which type of Ethernet cable should I use? All these questions and more, answered below. Watch the videos or read the information below. Learn more about what Ethernet is and what it can do for you or your network, at home or at your business.
What Is Ethernet?
In the below video from Sunny Sun find out what Ethernet is (also known as the IEEE 802.3 standard). Sunny is an Associate Professor of Cyber Security at the University of Saint Mary in Leavenworth, Kansas. He teaches courses such as networking, cyber security, computer forensics and programming. In the below video, Sunny explains 7 key things about Ethernet, right down to where the name ‘Ethernet’ comes from.
Video transcript: Hello and this is Sunny. Welcome back. Today my topic is Ethernet. What is ether? Ether was once thought to be the medium carrying light throughout the universe.
Ethernet, standardized as IEEE 802.3, was named for the idea that, much like the ether, its physical media could carry data everywhere throughout the network. Ethernet refers to a family of local area network technologies, or LAN technologies, that share some main features. The implementation of the network might be different, but the basic topology, frame type and network access method remain the same.
7 things you should know about Ethernet:
- Ethernet is a local area network (LAN) technology and is the most widely-installed LAN technology. Ethernet has largely replaced other competing LAN technologies such as Token Ring, FDDI and ARCNET.
- Ethernet technology operates at both the physical and the data link layers of the OSI model.
- Star-bus topology is a standard Ethernet topology.
- In terms of cabling, an Ethernet LAN typically uses UTP/STP twisted-pair cable, fiber optics or coaxial cable.
- Ethernet's media access method is CSMA/CD, Carrier Sense Multiple Access with Collision Detection.
- There are many versions of Ethernet speeds: 10 megabit, 100 megabit, 1 gigabit, 10 gigabit, 100 gigabit and even more.
- Last but not least, Ethernet is a baseband system.
Please check out my playlist “Ethernet Basics” if you want to know more about Ethernet and some concepts mentioned in this video.
What Ethernet Cable To Use – Cat 5 Cat 6 Cat 7?
With so many options and category variants available, what Ethernet cable should you use? Below is a cable explainer from ThioJoe. Joe hosts one of the most popular technology explainer channels on YouTube, with more than 2.4 million subscribers. He takes a look through the options for Ethernet cables, from cat 5 through to cat 6 and 7.
Video transcript: If you’ve ever gone to buy an Ethernet cable for any reason, you may have noticed that there are several different types to choose from. Some of them may say Cat 5, Cat 5e, Cat 6, Cat 6a. But what do all of these mean and does it really make a difference which one you get?
Well, that’s what we’re going to talk about today. So you can know what’s worth buying and potentially save some money and also I’m going to go over a real world test to see how much of a difference it makes in your own home internet.
So first of all, what the heck do the cat ratings mean anyway? Well, for Ethernet cables, that stands for category and the different numbers represent different standards and specifications for each type of cable. So you can think of them like different versions. Now the good news is that all of these cables will typically “work” since the new versions are all backwards-compatible. They all use the same RJ-45 connector, often just called the Ethernet port. But the difference in the different ones are the rated performance of each.
The Different Types Of Ethernet Cable
So let’s go over all the different types of Ethernet cables you may come across from Cat 5 all the way through Cat 7 and beyond. The first type is really common which you probably already heard of. It’s called Cat 5. However these days, when someone says “Cat 5,” they’re probably referring to the newer version of Cat 5e, but we’re getting a little bit ahead of ourselves with that.
Now a true Cat 5 cable is actually obsolete and you probably can’t even buy them anymore. A Cat 5 cable is only rated for up to 100 megabits per second at 100-meter maximum length and that’s with a 100 megahertz bandwidth.
So obviously only being rated for 100 megabits, you’re almost never going to see these anymore because usually one gigabit is kind of the minimum and if you’re still using one, you should definitely replace it. Because in addition to having a slower speed, it also might be less reliable than the new types we’re going to talk about in a second and this brings us to Cat 5e which I just mentioned and the Cat 5e stands for category five “enhanced”.
So Cat 5e is very common these days and it’s rated for one gigabit speeds at 100 meters as opposed to the original one. It’s just 100 megabit and again this has a bandwidth of 100 megahertz and this is due to the improved specs regarding twisting of the wire pairs inside, shielding and other improvements which reduce “cross talk” or the interference of the different signals, which would reduce the speed. Also a regular Cat 5 cable only required two twisted pairs of wires inside while Cat 5e uses four. So obviously it can transfer more data. A Cat 5 cable may have had four but it only required two.
So an important thing to note is that the ratings certifications are for the bare minimum specs. So it’s very possible that a cable will be capable of much more than what it’s rated for. So for example a Cat 5 cables might actually be capable of close to gigabit speeds if it’s a really high quality premium cable even though it’s older and the same will go for all of these other types. It’s just the rating is basically a guarantee. After Cat 5e came Category 6, which bumped the spec from one gigabit to ten gigabit at 55-meter length and with a bandwidth of 250 megahertz up from 100.
By the way, the bandwidth refers to the range of frequencies that the cable is able to reliably use, which explains why it would improve the speed. It has got more “space” to fit the data in a way and the Cat 6 further reduces cross talk. That’s kind of the main way to improve the speed in addition to the bandwidth using tighter wound wire pairs and may also use things like a plastic core through the middle of the cable to better separate the internal wires and things like that.
I would say Cat 6 is a good choice if you’re really not sure what type of cable you’re going to need since it probably won’t be that much more expensive depending on where you buy it and it will future-proof your cable for a while. You will probably be able to use it for the near future.
But this is especially important if the wire can’t easily be replaced. Like if you’re wiring a house for example where it would just be in the walls forever, I would definitely get at least Cat 6, probably even one of the higher-rated ones we’re about to talk about.
But if you’re just buying a general purpose Ethernet cable for your laptop or something, Cat 5e would definitely be fine as well since I doubt any of your devices right now are going to be capable of 10 gig anyway. So Cat 5e, Cat 5, 6, whatever you want. So by now you might be thinking, “OK. Surely Cat 6 is pretty much the best. I mean why would you need anything more than 10 gigabit, right?”
Well, you might be right but we’re not going to stop there. What fun would that be? Because there’s also a Cat 6a and this is one is also capable of 10 gigabit but at a longer maximum distance of 100 meters instead of 55 and it has a larger 500 megahertz bandwidth. So if you are actually creating a 10-gig network, Cat 6a will be more reliable at getting your full speed since again it has got further improved specs for reducing that cross talk. It’s just going to be more reliable.
Now finally the big daddy of the Ethernet cables for now at least is Category 7. As far as I could tell, this is the fastest type you can buy at the moment. There are other cables that like claim to be Category 8 but I don’t think they truly are. Cat 7 is also ready for 10 gigabit speeds but with a higher bandwidth of 600 megahertz, even larger than the 500, and it has got the strictest specifications for reducing cross talk such as requiring shielding between individual wire pairs in the cable as well as for the whole cable itself.
This seems to be all about improving reliability. Not necessarily the speed since it doesn’t actually improve anything about 10 gigabit, even though it probably is capable of higher speeds if you had a switch that was capable of faster than 10 gigabit on that side.
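For reference, the ratings quoted so far can be boiled down to a few lines of code. The figures below simply restate the transcript's numbers; the selection helper is an illustrative sketch, not part of the video.

```python
# Minimum rated spec per category, as quoted above:
# (max rated speed in Mbps, max run in meters at that speed, bandwidth in MHz)
CATEGORIES = {
    "Cat 5":  (100,    100, 100),
    "Cat 5e": (1_000,  100, 100),
    "Cat 6":  (10_000,  55, 250),
    "Cat 6a": (10_000, 100, 500),
    "Cat 7":  (10_000, 100, 600),
}

def minimum_category(speed_mbps: int, run_m: int) -> str:
    """Return the lowest category rated for the given speed and cable run."""
    for name, (speed, run, _bandwidth) in CATEGORIES.items():
        if speed >= speed_mbps and run >= run_m:
            return name
    raise ValueError("no listed category is rated for that combination")

print(minimum_category(1_000, 50))    # Cat 5e
print(minimum_category(10_000, 90))   # Cat 6a -- Cat 6 is only rated to 55 m
```

Remember these are guaranteed minimums: as the transcript notes, a high-quality cable will often do better than its rating.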
So I think Cat 7 might be best suited for extreme future-proofing, permanent wire installations for people who are not just satisfied with the best but rather want the completely unnecessary. So if you’re wiring a house and you just want to go all out, get Cat 7. All right. So we’ve learned that there are tons of different Ethernet cables you can use. But does it even really matter? I wanted to find out. So I decided to do a quick real life test.
I got three different cables, a Cat 5e, a Cat 6 and even a Cat 7, all the same five-foot length and I wanted to find out if it would make a difference when I used it with a gigabit internet connection since that’s really the fastest internet you’re going to get right now anyway. And yeah, I know I could have done a logo gigabit test but I wanted to do it this way. It’s a little bit more practical I think.
So for this, I’m simply going to connect my laptop directly to the router using each cable and to make sure there’s no limiting factors, I have the router connected to the fiber optic intake with the Cat 7 cable, so there’s no limiting agent there.
So Which Ethernet Cable Is Best?
Just looking at the three cables I used here for the test, this is the Cat 5e. It’s definitely the thinnest. Although it’s not flimsy or anything. Then the black one is the Cat 6, definitely a little bit thicker and then of course the Cat 7, there are some noticeable differences. It’s very rigid. You can tell there’s a lot more shielding in here and it has got a metal connector. So definitely way higher construction quality.
So I went and did all that. And what was the difference? Well, none at all. As I pretty much expected, with such a short distance at only five feet, all the cables were more than capable of handling the gigabit connection. I had also tested the upload speed but it was very inconsistent even between tests of the same type of cable. So I just didn’t consider that in this one.
Then out of curiosity, I did a speed test on my desktop which is plugged into the port in the wall. So in that test, I would guess it had maybe an extra 50 feet of Cat 6 cable to deal with, as opposed to the other control test, and the speed only dropped less than 10 megabits. So even with about 50 feet of Cat 6, the loss was less than one percent of the speed. So really it’s not that big a deal.
So the takeaway here is that unless you need to worry about future-proofing your connection, it really doesn’t matter what type of cable you buy. Perhaps if you have tons and tons of cables right next to each other, it’s like really electronically noisy or something and you need that shielding. The improved shielding on the better categories might help you. But in all other cases, it really shouldn’t matter at all.
Now after looking at all this, you may be wondering, “What’s the point of all these other cables if you can’t even really use them, if it doesn’t make that big of a difference in most situations?” Well, part of it is marketing since it’s easy to say you need the better cable with the higher number, which of course costs more. But there is networking hardware out there that is capable of 10 gig internet. It’s usually commercial equipment though.
However, we are starting to see some 10-gigabit consumer-grade switches out there. For example, there’s the new Asus XG-U2008 switch, which has two 10-gigabit ports along with regular gigabit ports. It’s only about $250, much less than what you would spend on an enterprise switch.
So you could hook up your computer and maybe a network storage device to the 10-gig port. Then everything else would go into the regular gigabit ones. That way, even if none of the other devices on the network are capable of 10 gigabit, it would allow multiple one-gigabit data transfers to multiple devices simultaneously.
So the 10-gigabit NAS or storage server will be able to provide out that 10 gigabit and then it could kind of be leeched off by as many devices as you want or of course you could do a full 10-gigabit transfer between the two devices plugged in. So between the server and your computer, if they’re both plugged into that other port.
In that sort of situation though where you do have 10-gigabit capability, you would need Cat 6 or higher at least for those two 10-gigabit ports and if it’s over any kind of distance, Cat 6a would be ideal because you’re going to get that better reliability. Even if it’s not that big of a difference, you still may as well. But something tells me that not too many people are going to be using 10 gigabit for a while. So I guess from all this, my takeaway is that even the old Ethernet standards have held up surprisingly well. I mean believe it or not, the RJ-45 connector used in all these Ethernet cables was first standardized in 1987. At that time, the minimum spec was only three kilohertz bandwidth and now it’s getting into the gigahertz.
So I think it’s safe to say that the connector will probably be here for a while. It’s not going anywhere anytime soon since it seems like there’s still a lot of room for expansion. We might even see 100 gigabit. Who knows?
So I think that is it. Hopefully guys, you thought this video was pretty cool and interesting. I would love to hear what you think down the comments section. Are you still using old Cat 5 cables you didn’t really know about? It usually says it printed on the side if you’re not sure. Or do you need that full 10 gig speed? I don’t know.
I myself kind of went crazy recently. I bought a bunch of Cat 6 and Cat 7 cables since I could never seem to find any Ethernet cables when I needed them. So I’m like, “May as well get the best,” and I’m actually using the Cat 7 cables to connect all the most important stuff in my network like the router and the switches hooked up to it for a maximum performance just in case. You know, even if it doesn’t make that big of a difference, I want to have the best and remove all doubt where it might matter.
But anyway, if you guys did enjoy this video, be sure to give it a thumbs-up. I would appreciate it and if you want to keep watching, I will put some other videos right here. You can click on these even if you’re on a phone and if you want to subscribe, I make new videos every Tuesday, Thursday, Saturday and also consider clicking the bell next to the subscribe button for notifications or else YouTube might not even show you the new videos at all.
So thanks again for watching guys. I’m looking forward to hearing from you and as usual, I will see you next time. Have a good one.
Why Is WiFi Slower Than Ethernet?
High speed WiFi services such as WiFi 6 are now possible today, with compatible hardware now rolling out. But wireless is never going to beat wired for speeds. Linus over at the Tech Quickie channel created a video to explain why an Ethernet connection via a cable, will almost always be faster than a WiFi connection.
Video transcript: Thanks for watching Techquickie. Click the “Subscribe” button. Then enable notifications with the bell icon, so you won’t miss any future videos. So picture this. You just wired your desktop PC up to some uber fast internet connection which is like super exciting because surely this will be no more lagging out of your favorite game or thrilling Skype dates. Then eager to experience this kind of speed on your laptop or mobile device, you buy a fancy-looking WiFi router. You key in your password and – wait, what? Your speeds aren’t even half of what you’re getting with the wired connection. What gives?
Well, unfortunately, wireless is pretty much always going to be slower than wired. It’s a near universal truth that becomes more and more obvious the faster you try to go, even if you spend tons of money on high-end wireless gear. But then – OK, now bear with me here, because EM waves do move faster through the air than electrons do through a wire. So what is it? Well, let’s start with the most obvious, signal range.
If you’re using an Ethernet cable and you want gigabit speeds, you can have a cable run of up to 100 meters. That’s roughly as long as a football field. This is because the signal inside the cable doesn’t deteriorate appreciably until you have a longer cable run.
But radio signals flying through the air such as WiFi are much more prone to signal degradation. Unlike a physical cable which has a copper wire inside that only carries network traffic and is wrapped up in materials to shield the signal from interference, WiFi signals are just blasted everywhere, meaning they have to compete with walls, your roommate’s microwave and other network traffic.
You see, unlike Ethernet where your device gets one dedicated pipe that runs to your modem or your router, there’s only so much spectrum available for your WiFi enabled laptop and your phone and anything else. What that means is that your device will often be broadcasting on the same frequency or channel as others, which can lead to more interference that can further degrade the signal and give your router more work to do to sort it all out.
But OK, hold on a second Linus. You can hook up lots of wired devices to a router as well. So doesn’t your router have to figure out where all those different signals are supposed to go? Yes. But WiFi and Ethernet have different strategies to combat packet loss, which is exactly what it sounds like, when a chunk or a packet of data doesn’t reach its destination.
Oftentimes this can occur due to a collision, when two devices try to transmit it at precisely the same time. If this happens, the packets have to be resent. So the way that an Ethernet connection avoids collisions is that once the sender determines that its path to the destination is clear, it sends the packet immediately. If the path is busy, the sender will send the data as soon as the path is clear again. WiFi on the other hand introduces a small delay once the path becomes clear.
The idea is that since a wireless router can’t magically detect a collision in midair, this delay reduces the risk of collisions. But as it does so, it also adds more latency. And although many leaps in WiFi technology have been made over the years, it still resembles much older school communications protocols in one important way. It is half-duplex, meaning that a WiFi gadget’s antenna can only be sending or receiving at any given moment, not both.
Now full duplex wireless is in the works but it’s still experimental and suffers from its own special kind of interference that results from the antenna trying to deal with both inbound and outbound signals at the same time. By contrast, Ethernet has been full duplex for quite some time now as it’s not difficult to simply put one wire in for transmitting data and another one for receiving it on the same cable.
So all other things being equal, don’t be surprised if your Wi-Fi connection always seems just a bit slower even if you do walk around with your smartphone neurotically duct-taped directly to a router.
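To make that trade-off concrete, here is a toy Python model of the two medium-access strategies described earlier: Ethernet sends the moment the path is clear, while WiFi adds a random backoff. The slot time and contention window below are illustrative values, not exact 802.11 parameters.

```python
# Toy model of the added latency from WiFi's collision-avoidance backoff.
import random

SLOT_US = 9  # illustrative slot time in microseconds

def ethernet_wait_us() -> int:
    return 0  # path clear -> transmit immediately

def wifi_wait_us(contention_window: int = 15) -> int:
    # wait a random number of slots after the path becomes clear
    return random.randint(0, contention_window) * SLOT_US

samples = [wifi_wait_us() for _ in range(10_000)]
print("Ethernet added delay: 0 us")
print(f"WiFi mean added delay: {sum(samples) / len(samples):.1f} us")
```

Microseconds per packet sound tiny, but over millions of packets, and with retransmissions after real collisions, the gap with a wired link adds up.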
Just please, please go faster. Speaking of going faster, if you’re a freelancer or a small business owner and you want to get your work done faster, check out FreshBooks. FreshBooks is the cloud accounting software that’s designed for the way you want to work and it’s the simplest, easiest way to be more productive, more organized and perhaps most importantly, get paid faster.
You can create and send professional-looking invoices in less than 30 seconds. You can set up online payments with just a couple of clicks to get paid up to four days faster and you can see when your client has seen your invoice to put an end to the guessing games. So don’t take my word for it. Try out FreshBooks for free. They’ve got a 30-day free trial available to our viewers down below and then guys, when you do sign up, that’s www.FreshBooks.com/Techquickie. Make sure you enter “Techquickie” in the “How did you hear about us?” section.
So thanks for watching, guys. Like, dislike. Check out our other videos. Don’t forget to leave a comment if you have suggestions for future As-Fast-As-Possibles. We do read those things, you know, and subscribe because if you don’t subscribe, bees, bees will eat your hair. They do that, you know. It’s not a misinformation. We’re a tech channel. People aren’t expecting biology factual accuracy.
|
<urn:uuid:7841cccd-9d36-4fcd-b8e8-3e9c5f382cbf>
|
CC-MAIN-2022-40
|
https://www.fastmetrics.com/blog/wifi/what-is-ethernet/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00437.warc.gz
|
en
| 0.955799 | 4,853 | 3.4375 | 3 |
Spyware is a subset of malware, or malicious software, intentionally deployed against users and downloaded onto their devices. Once installed, spyware logs information from the user’s device and web browsing habits and relays it to an outside party where the information is leveraged for fraud.
Spyware may also take the form of nuisance or unwanted software that collects information on browsing habits, relaying it to a third party for commercial purposes.
Among the more nefarious uses of spyware is the collection of credentials or bankcard information. Some spyware downloads additional bothersome or dangerous software after the initial download. Still others alter device settings, affecting security, privacy, and performance.
Intrusion detection systems for enterprise networks and device layer protections for users are ways to prevent the installation of mischievous or risky programs. The best prevention, however, is to view all external email attachments and advertising banners as an access point through which spyware can infect one’s device.
"I was wondering why my life seems to be impeccably synced with relevant advertising. That is, until I discovered that there's a ton of spyware on my device needing to be uninstalled. I suppose I should be thankful that it wasn't a keylogger."
|
<urn:uuid:dea06c07-6609-4cc5-ad32-f6f50a170343>
|
CC-MAIN-2022-40
|
https://www.hypr.com/security-encyclopedia/spyware
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00437.warc.gz
|
en
| 0.9111 | 262 | 2.703125 | 3 |
AI is the buzzword in technology these days and everyone seems to be using it (or misusing it, as a recent edtech ad showed). Despite the hype, AI is here to stay and, more importantly, grow. And the growth is not going to be small by any means or measure. While AI is the 1,000-foot view, there are multiple technologies growing quickly under that umbrella – Machine Learning, Deep Learning, GANs, etc.
Multiple industries and domains have multiple use cases for AI implementation. But in this article, we shall be focusing on one domain – Health care.
In healthcare we are going to witness wide adoption of AI, with a wide variety of applications in multiple areas like Prognosis, Diagnosis and Treatment. Let's look at each of these three areas at a high level first, and then we shall discuss some use cases in the diagnosis of conditions.
Prognosis – Prognosis involves the prediction of certain conditions based on statistical data. Given the complexity of the human body, it is not easy to predict future conditions. Medical practitioners and data custodians can come up with some rules, taking a few parameters into consideration, to produce predictions and outcomes. But that will be quite limited, and that's where Deep Learning can help. If the AI model can be trained on a large amount of data with multiple parameters, it can produce outcomes for a specific case. But this requires data collected over multiple years, under varied conditions, for the model to be able to predict outcomes.
Treatment – While medical practitioners currently use some data to understand the efficacy and effects of medicines, there is a limit to that; importantly, it's not possible to use past data at a detailed level to produce a custom prescription for each patient. But with AI, it is possible to track the progress of a patient's treatment based on the existing conditions and the medication prescribed, and to use that for future patients. This also requires data tracked over multiple years before being put to use.
One such use case is the treatment of cancer tumors. When a biopsy is done, a small portion of the tissue is analyzed and these samples are stored. The images of these tissues can be given to an AI algorithm along with multiple other parameters like patient demographics, diagnosis, treatment given and response. Using all this data, the AI model will be able to predict what kind of treatment would be suitable given an image of a cancerous sample. But, as mentioned, this takes time and, importantly, a large amount of data is required.
Diagnosis – This is one area where AI can have an immediate and perceptible impact. In this post we shall look at a few use cases where AI can significantly help in diagnosing conditions.
Detection of TB based on Chest X Rays
Tuberculosis (TB) is still a fairly fatal disease in developing regions like India and Africa. More than 80% of TB cases are pulmonary TB cases, and pulmonary TB can be easily detected in chest X-rays. The primary reason for TB fatality is not that there is no treatment, but the lack of timely diagnosis. There are TB hotspot areas in various states of the country where TB is quite prevalent. These being predominantly rural areas, there is no proper access to X-ray/diagnostic centers. Importantly, access to a radiologist is much restricted: India has one radiologist for every 100,000 (1 lakh) people, whereas the US has one radiologist for every 10,000.
Given the above situation, if a mobile van can be constituted with a X Ray machine connected to a laptop, which can run the inference software, then this van can be taken to the TB hotspot areas. Suspected patients can be run through the X Ray and within seconds, the software will be able to say whether the patient is TB +ve or -ve. Those suspected patients can then go to a PHC or a hospital in a nearby town to get the treatment. Obviously, the software should be properly calibrated to ensure false negatives are close to 0. This way AI can save lives.
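To make the idea concrete, here is a minimal sketch of what such a TB screening model might look like in Keras. This is purely illustrative: a deployable system would need a large labeled X-ray dataset, rigorous validation, and a decision threshold tuned so false negatives stay close to zero, as noted above.

```python
# Illustrative sketch of a chest X-ray TB classifier (not the actual
# screening software described above). Assumes TensorFlow is installed.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of TB-positive
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall()])  # recall tracks false negatives
# model.fit(xray_images, labels, ...)  # hypothetical labeled dataset
```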
Remote monitoring for chronic patients
It’s very expensive for chronic patients to stay in a hospital for long periods, especially just for monitoring, and for older patients this has to be a regular process. There are, however, devices that can monitor multiple critical parameters and vitals of a patient and can be used at home. The data from these devices can be sent continuously to a central server (at a hospital), where it can be analyzed for abnormalities so that the relevant stakeholders (doctors, medical practitioners, etc.) are immediately notified and prompt action can be taken. This can definitely save lives. Such a system can also be useful for post-surgery monitoring, instead of patients visiting the hospital periodically for routine checkups.
There are multiple use cases that AI can solve in an ER. But let's discuss one use case – brain stroke. When a patient enters an ER with a brain stroke, it's very important to determine what kind of stroke it is – hemorrhagic or ischemic. This can be determined through a CT scan, and the inference can be made by an expert radiologist. Detecting the type of stroke is important, as the treatments for the two types are exactly opposite: for hemorrhagic strokes you don't give blood thinners, whereas for ischemic strokes you do. Given the limited access to expert radiologists, AI software can immediately detect the type of stroke from the CT films so that appropriate medication can be administered right away. Apparently (per an American Heart Association study), for every minute of delayed treatment, around 2 million neurons are lost in the brain. So, time saved is brain saved. The future is extremely bright for AI being an integral part of healthcare, and importantly, without any bias.
|
<urn:uuid:4b618975-ddb0-4ea4-8bec-8092b1482134>
|
CC-MAIN-2022-40
|
https://industryoutreachmagazine.com/ai-use-cases-in-health-care/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00437.warc.gz
|
en
| 0.93566 | 1,209 | 2.984375 | 3 |
Ever heard of IoT, the Internet of Things? Well, this is the new bling, and it is going to help you extend the use of the internet from just computers and laptops to everyday household stuff.
The Internet of Things is the idea that the use of the internet can be expanded beyond the usual standard devices such as laptops, smartphones and computers.
These IoT devices are integrated with sophisticated technology that enables them to interact and communicate smoothly via the internet. Since these devices are operated over the internet, it is easy for the user to control and manage them remotely when needed.
What are IoT devices?
As a matter of fact, the number of IoT devices available today has easily surpassed the number of humans living on this planet.
With the increase in demand for 5G, there will be around 20 billion smart IoT devices running in homes by the end of 2021, whereas the number of humans on this planet is 7.62 billion. Doesn't that astonish you?
If we average out this ratio, within just three years almost every person in the US would have more than 10 IoT devices of their own. If you want to know more about these statistics, we suggest you take a look at this post for more information.
We all know that technology is developing day by day. It is no surprise at all that in just a few years, our homes will be known as smart homes, with an influx of smart IoT devices. The doors and windows of our homes are already being integrated with home automation systems. Soon, our homes will have numerous IoT devices installed in them.
Let’s take a look at some of the IoT revolutions that will be available in almost every home in 2 years’ time,
1. Google Home Voice Controller
We are sure that you must have heard about Google Home but if you haven’t, let us tell you all about it.
Google Home Voice Controller is a smart IoT device that lets the user control alarms, media, thermostats, lights, volume and much more, just by voice.
It is an amazing device that is now being used by people all over the world.
2. Amazon Echo Plus Voice Over
This is a powerful and reliable IoT device that made a name for itself within just a few months of its launch.
This device allows the user to play songs, set timers and alarms, make phone calls, get information, ask questions, check the weather, manage to-do lists and much more. It also lets you manage household instruments and several other things.
3. Amazon Dash Button
Now this is an important and interesting piece of technology. The Amazon Dash Button is basically a device that connects through the internet and keeps you from running out of household items such as medical and personal care products, groceries, soft drinks and any pet or kid items.
However, if the user wants to thoroughly enjoy the perks of Amazon Dash Button, then he must be a member of Amazon Prime.
4. August Doorbell Cam
The August Doorbell Cam is a very useful device for people who do not want to open their gates or doors at someone’s arrival every now and then.
With the help of the August Doorbell Cam, the user can answer the door from any remote location. The device captures any motion changes at your doorstep and also monitors your door constantly.
5. August Smart Lock
This is a reliable security IoT device that is being promoted by many manufacturers of windows and doors Calgary.
It allows the user to manage the door from any location, keep thieves away and keep the family safe.
6. Kuri Mobile Robot
This is the first-ever home robot, there to capture all your favorite memories and entertain you to the best of its capacity.
Kuri moves around the house, interacts with the house members and captures memories. This is an amazing new house member whom you can definitely invite in.
7. Belkin Wemo Smart Light Switch
The Wemo Smart Light Switch, as the name suggests, is an IoT device that allows the user to control all the home lights from the wall, by voice or over a smartphone.
This smart light switch can be connected to your existing home Wi-Fi network to provide wireless control of your entire lighting system. It doesn't require any hub or subscription.
8. Foobot Air Quality Monitor
Improving the air quality of the house is the dream of almost every household member. Well, that dream can now be achieved with the help of the Foobot Air Quality Monitor.
This air quality monitor lets you know how much pollution there is inside the house so that you can improve its air quality. It provides accurate readings and is highly efficient.
9. Nest T3021US Learning Thermostat Easy Temperature Control
With the help of Nest Thermostat, the user will be able to control the home temperature and cooling environment with ease.
This IoT device comes with smart intelligence and is capable of managing the room temperature automatically, based entirely on your routine.
10. Philips Hue Bulbs and Lighting System
This is a very famous smart IoT device that will enable you to control the lighting system of your home and enable just the right ambience you need.
It brings the smart home to life with some of the most connected lights in the world.
So, these are the smart devices that will be available in almost every smart home within 10 years. We hope this article was helpful to you.
Let us know if you have any other smart IoT devices that we should add to this list.
|
<urn:uuid:f1c4be9a-f803-436c-a8e5-c5a85d4b7c5f>
|
CC-MAIN-2022-40
|
https://www.m2sys.com/blog/guest-blog-posts/10-iot-revolutions-that-will-be-available-in-every-smart-home-after-10-years/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00637.warc.gz
|
en
| 0.94862 | 1,195 | 2.53125 | 3 |
The problem with erecting wind turbines in big cities and other urban areas comes down to several factors: size and noise, zoning laws, wind variability and intermittency, bird safety… But thanks to some new innovations, urban communities and a couple of “small wind” startups are poised to bring renewable energy to town squares and apartment complexes.
New Scientist brings us news of some exciting advances in small wind, including work inspired by fish tails from Ephrahim Garcia, a mechanical engineer at Cornell University in Ithaca, New York. Instead of a mechanical turbine, Garcia “attached a flexible aerofoil to the end of a pole made out of a piezoelectric material. When air passes over the aerofoil it flutters, causing the pole to flex and generate a small alternating current.” Neat!
Not only could such a method remove a lot of the stigma of having big noisy blades slicing the air above rooftops, it could potentially solve the problem of drawing power from cities where buildings and streetscapes interrupt the smooth flow of air. According to the researchers at Cornell, cities have plenty of wind to harvest, they just require new ways of thinking. Garcia’s method looks like it fits the bill.
In Alex Salkever’s Yale 360 report on small wind’s ascent, he mentions Hawaii’s Humdinger Wind Energy, which is also betting on piezoelectrics as a means to fuel the market for compact wind energy systems. However, a startup from Michigan called Windtronics isn’t giving up on blades to capture wind energy. Instead of merely down-sizing turbines and perching them on buildings, the company’s windmills generate power at the blade tips, or the outer circumference rather than a central rotor and generator assembly. What results is a compact, bird-friendly and semi-enclosed unit that produces no noise and no vibration, according to the company. In other words, perfect for adding some renewable energy sources to your mid-rise office tower.
One great thing about these systems is that they should breeze past zoning and safety requirements in all but the most stringent areas. That they are designed to be immune to NIMBYism is also a big plus.
Image: Terry Wha – Flickr – CC
|
<urn:uuid:7abeaac7-0694-41d2-90b6-1e091e6bf6d8>
|
CC-MAIN-2022-40
|
https://www.ecoinsite.com/2010/08/urban-wind-gathers-momentum.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00637.warc.gz
|
en
| 0.930417 | 476 | 2.953125 | 3 |
When most people think about data, they might imagine a mad dash of numbers swarming across the screen in a frenzy. Unfortunately, they don’t always think about what those ones and zeroes might really mean.
If you’re talking about protecting student data privacy, they mean a whole lot. But not all personal data is created equal – some types are more sensitive than others. That’s why classifying data by risk is a pivotal part of data loss prevention. For example, consider the amount of data a single student creates on a daily basis, compounded by every other student in each individual school. Your district is collecting the whole spectrum of sensitive data, from test scores to addresses to Social Security numbers.
Data classification and data loss prevention (DLP) go hand-in-hand. To understand why you can’t effectively do the latter without the former, here’s a guide through the ins and outs of data classification.
What is data classification?
Imagine you’re putting away your groceries. Some fruits and vegetables might need to be protected in the refrigerator, others might need to be frozen, and a few are safe and sound on the kitchen counter. You wouldn’t put all your groceries in one place, just like you wouldn’t classify all your data the same way.
Data classification is essentially the same procedure. TechTarget defines data classification as the process of separating and organizing personal data into categories based on their shared characteristics. While certain types of data are low-risk, others might contain more sensitive information that needs to be tightly secured.
Why is classification important to data protection? Simply put, it’s what helps you allocate your data security resources most effectively. Think about it: If you don’t know your data, where it resides, or what it contains, how will you know where to focus your attention? That’s the type of insight data loss prevention depends on to easily locate, monitor, and apply the best protection.
How does classification work?
Data classification is all about sorting structured and unstructured data. But what’s the difference?
In short, structured data is quantitative. For your district, that likely means test scores, birth dates, Social Security numbers, credit card numbers, and other sensitive information that might be represented numerically. On the other hand, unstructured data is qualitative, such as personally identifiable information found in text and image content.
In either case, data discovery methods will locate the created data and classify it in three ways (a rule-based sketch follows below):
- High sensitivity: Confidential data such as financial records, intellectual property, personally identifiable information, and medical histories.
- Moderate sensitivity: Data that isn’t public but is used internally, such as academic records, class rosters, or emails and documents without confidential data.
- Low sensitivity: Public information that is easily accessible, such as web pages and blogs.
Bottom line: The greater the sensitivity, the bigger the risk to data security.
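As a rough illustration, the sketch below shows how rule-based classification into these three tiers might begin. The patterns and keywords are invented for illustration; real DLP tools combine far richer detection, including OCR and machine learning for unstructured content.

```python
# Illustrative rule-based classifier for the three sensitivity tiers.
import re

HIGH_RISK_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
MODERATE_KEYWORDS = ("grade", "roster", "attendance")

def classify(text: str) -> str:
    if any(p.search(text) for p in HIGH_RISK_PATTERNS.values()):
        return "high sensitivity"
    if any(k in text.lower() for k in MODERATE_KEYWORDS):
        return "moderate sensitivity"
    return "low sensitivity"

print(classify("Student SSN on file: 078-05-1120"))  # high sensitivity
print(classify("Q3 class roster attached"))          # moderate sensitivity
```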
Data classification barriers, challenges, and consequences
Data classification and data loss prevention efforts are no simple task for many school districts. What should be a fast, simple, and automated process is often made more complicated than it needs to be.
There’s likely a number of barriers that challenge your ability to classify student data effectively and efficiently.
Let’s take a look at some challenges with which you might be familiar:
- Data classification can be a costly and difficult process
This is especially true for school districts still using manual classification methods. Classifying data by hand is not only time-consuming, cumbersome, and inefficient, but it’s also prone to human error. In addition, mishandling sensitive information could leave confidential data unprotected if sorted incorrectly.
- K-12 IT teams are stretched and understaffed
According to Edweek Research, K-12 districts need to allocate their cybersecurity resources better. In fact, only 20% of budgets are spent on cloud application security. Simply put, there’s not enough time to classify data by hand.
- Data silos can’t be easily monitored
Inadequate data classification can lead to information becoming siloed off in repositories where it isn’t meant to be. In other words, disparate information makes data protection exceedingly difficult for IT teams if they cannot locate data and, in turn, keep it secure.
- Policies are difficult to enforce
A data policy is what dictates how your confidential data can be classified, accessed, or used. Without proper enforcement, highly sensitive data could mistakenly be shared outside the district – opening a Pandora’s box of privacy law and compliance implications.
All told, any one of these challenges could spell trouble for your district’s data security. If left unclassified and therefore unprotected, sensitive data could fall into the hands of unauthorized third parties or cybercriminals. At that point, there’s no telling where that information might go or how it’ll be used.
The benefits of classifying data
When you consider the importance of organized data – or the dangers of unorganized data – it becomes clear that improving data classification should be one of your district’s top priorities. After all, an investment in data classification is an investment in your district’s safety.
And the good news? There’s plenty of benefits that make your investment worthwhile:
- Improved data security efforts
Most importantly, a more accurately classified database is a major advantage for protecting your confidential data. Locating and identifying your most sensitive data swiftly allows you to quickly mitigate risks as they arise – all made possible by classification.
- Simplified compliance
Meeting strict industry regulations is a major obstacle for any district, not to mention the rising standards of teachers and parents. Highly classified data eases this burden and accelerates data retrieval when it’s time to perform an audit.
- Enhanced data monitoring ability
Improving your ability to classify data will naturally break down data silos accurately. The result? More visibility into data use throughout the district allows you to identify DLP policy violations as quickly as possible.
- Greater access control and accountability
Likewise, data classification can help you ensure all users – students, staff members, and administrators – are managing their data appropriately. Accurately organized data enables better insight and access control over how personal information is shared throughout the district.
Data classification best practices
By now, you might be wondering: How can I start improving data classification today? To answer that question and help you realize the intended benefits, here are a few best practices:
- Perform a data risk assessment: Gain an understanding of your data environment by auditing and assessing your district’s regulated and unregulated data circulating.
- Create a data classification policy: A formal policy provides the governance your district needs to uphold accountability and keep data secure.
- Deploy an automated solution: Let’s face it: Protecting your district’s personal data is too big of a job for just a handful of staff members. Time is limited, but data is growing exponentially every day. Cloud DLP uses optical character recognition to automate data classification for both structured and unstructured data while providing an additional layer of security for your district.
Protecting your school district’s data might begin with data classification, but it doesn’t end there. ManagedMethods understands that effective data loss prevention is all about securing the entire data lifecycle. Our cloud security solution automates 24/7 data protection to help your team keep sensitive data out of harm’s way.
|
<urn:uuid:61575dce-a21a-4fb1-b67c-4c3d88bc592b>
|
CC-MAIN-2022-40
|
https://managedmethods.com/blog/a-guide-to-data-classification-and-data-loss-prevention/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00637.warc.gz
|
en
| 0.912286 | 1,563 | 3.46875 | 3 |
Technology continues to spark creativity.
Students around the country are creating news broadcasts and award-winning short documentaries that are making a difference in their community. Last year, ten students at Rancho Minerva Middle School crafted a four-minute documentary on the importance of digital equity. The young filmmakers explained how students can find free Wi-Fi hotspots in their neighborhoods and interviewed the school principal, who described his efforts to increase students’ broadband access at home.
The video went on to win multiple awards for its creativity and presentation. In fact, the school board purchased 200 mobile wireless hotspots that the school’s students could borrow and take home at no cost so that their family members could enjoy fast internet access.
Thanks to the popularity of YouTube, social media, and mobile devices, online video has become ever more prevalent in today’s society, especially among children. Schools have taken this on board and are engaging students with modern technology. School districts are beginning to embrace digital storytelling and are teaching students to progress their multimedia skills, such as video production.
Video production allows students to learn about potential careers in broadcasting and filmmaking, all the while honing important skills, such as creativity, critical thinking, communication, collaboration, and meeting deadlines.
“We’re asking them to become independent learners and problem solvers,” says Melissa Julian, technology director at Pittsford Central School District in New York. “It’s not a simple book report, where they are taking information and regurgitating it in a similar format. It’s taking artistic license and the concept of storytelling and putting their own spin on things.”
As schools rely on more technology in the classroom, ensure that your security needs are met. Contact D&D Security by calling 800-453-4195 or by clicking here.
|
<urn:uuid:383524b9-9935-4879-9da2-c44e0b255433>
|
CC-MAIN-2022-40
|
https://ddsecurity.com/2018/03/08/students-adopt-software-to-create-digital-stories/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00637.warc.gz
|
en
| 0.961351 | 375 | 3.171875 | 3 |
No company is immune to the risk of cyber attacks and the resulting loss of customer information. Network security solutions can reduce the risk of attack, but these solutions face an unexpected adversary: SSL encryption.
While SSL encryption improves privacy and integrity, it also creates a blind spot in corporate defences. Today, roughly half of all Internet traffic is encrypted, and this figure is expected to reach 67 per cent by 2016.
Attacks hiding in SSL traffic are on the rise. Since Edward Snowden’s revelation in 2013, SSL encryption has become all the rage for both application owners and hackers. For good reason, given encryption improves security by providing data confidentiality and integrity. It’s also led to the rise of movements like “Let’s Encrypt,” the free, automated and open certificate authority (CA) provided by the Internet Security Research Group (ISRG).
Unfortunately, encryption also allows hackers to conceal their exploits to sneak past security devices like firewalls, intrusion prevention systems and data loss prevention platforms. Some of these products cannot decrypt SSL traffic without degrading performance, while others simply cannot decrypt SSL traffic at all because of their location in the network. As a result, hackers are taking advantage of movements like Let’s Encrypt to generate SSL certificates to sign malicious code or to host malicious HTTPS sites.
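As one small visibility measure, the standard-library sketch below retrieves the certificate a given HTTPS host presents and reports its issuer. Issuer alone proves nothing, since free CAs sign legitimate and malicious sites alike, but it is a cheap first signal when triaging encrypted traffic, and no substitute for the dedicated inspection platforms discussed below.

```python
# Sketch: fetch and report the issuer of the certificate an HTTPS host
# presents, using only Python's standard library.
import socket
import ssl

def cert_issuer(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # flatten the issuer RDN tuples into a plain dict
    return dict(field for rdn in cert["issuer"] for field in rdn)

print(cert_issuer("letsencrypt.org"))
```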
One way to counter the threat is for organisations to decrypt and inspect inbound and outbound traffic. A dedicated SSL inspection platform will enable third-party security devices to eliminate the blind spot in corporate defences. But first, we need to understand the three common ways that malware developers use encryption to escape detection.
Escape the encryption labyrinth
- Zeus Trojan: Since its discovery in 2007, the Zeus Trojan continues to be one of the most prevalent and dangerous pieces of financial malware around. The Zeus attack toolkit is widely used by criminal groups to develop variants that are even more sophisticated. Between 2012 and 2014, the number of infections from Zeus and its variants grew tenfold. One of the deadliest varieties is the peer-to-peer botnet Gameover Zeus, which leverages encryption for both malware distribution and command and control (C&C) communications.
- Command and control updates from social media sites: Our growing obsession with social sharing has no doubt attracted the attention of hackers too. New malware strains use social networks, such as Twitter and Facebook, and web-based email for command and control communications. For instance, malware can receive C&C commands from malicious Twitter accounts or comments on Pinterest, which encrypt all communications. To detect these botnet threats, organisations need to decrypt and inspect SSL traffic, otherwise security analysts might unwittingly view access to client machines through social media sites as harmless.
- Remote Access Trojan (RAT): Online email accounts such as Gmail and Yahoo! Mail have lately become incubators for a remote access Trojan (RAT) that receives C&C commands. The malware works by attempting to evade detection by not quite sending emails. With both Gmail and Yahoo! Mail encrypting traffic, malware developers use this to evade detection. The onus therefore is on organisations to decrypt and inspect their own traffic to these email sites, or malware will pass them by.
Stay ahead of the game
Encryption today accounts for roughly one-third of all Internet traffic, and it’s expected to reach two-thirds of all traffic next year when Internet powerhouses like Netflix transition to SSL. As a result, encrypted traffic will become the “go-to” way of distributing malware and executing cyber attacks.
Whether sharing a malicious file on a social networking site or attaching malware to an email or instant message, cyber criminals are hiding their attacks using SSL traffic to circumvent existing security controls. It is imperative that CIOs and IT managers familiarise themselves with solutions that uncover hidden threats in encrypted traffic, and invest in data protection that decrypts and inspects all SSL traffic.
[su_box title=”About A10 Networks” style=”noise” box_color=”#336588″]A10 Networks is a leader in application delivery networking, providing a range of high-performance application networking solutions that help organisations ensure that their data centre applications and networks remain highly available, accelerated and secure. Founded in 2004, A10 Networks is based in San Jose, Calif., and serves customers globally with offices worldwide.[/su_box]
|
<urn:uuid:2b6bea29-bd0a-41b5-86ff-bea2f19d5cde>
|
CC-MAIN-2022-40
|
https://informationsecuritybuzz.com/articles/what-lies-beneath-advanced-cyber-attacks-that-hide-in-ssl-traffic/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00037.warc.gz
|
en
| 0.918527 | 894 | 2.609375 | 3 |
Kapley - stock.adobe.com
For any organisation that operates in a high-risk environment, critical alarms are a fundamental component to assure the safety of staff and continuity of operations.
The manufacturing industry, in particular, is known to be a high-risk environment as workers typically have to deal with heavy-duty machinery and potentially hazardous materials. This environment can be especially dangerous for lone workers who perform their roles without supervision from other colleagues.
Organisations not only have a moral duty to ensure the safety of staff, but also a legal responsibility. By law, employers must control any potential risks to injury or health that could arise in the workplace by undertaking risk assessments, provide information about potential hazards and how staff are protected, as well as train employees on how to deal with the risks.
With a critical alarm system in place, staff can feel confident in the knowledge that their welfare is being monitored, so that in the event of an incident on-site, it will be acknowledged and rectified as quickly as possible.
To make this a reality, businesses must therefore ensure that the workforce fully understands the critical alarm response processes that are in place so that when an alarm is activated, they are fully prepared to act swiftly and appropriately.
But what are the risks if a critical alert is not responded to in good time? While alerts are useful, unless critical alarms are managed and responded to efficiently, they are of little use to an organisation in terms of keeping employees safe and ensuring machinery continues to function at an optimum level.
Inadequate alarm management and poor processes can lead to a number of consequences for organisations. If an employee is injured on-site and requires emergency assistance, the time it takes to respond could be the difference between life and death, and could mean the individual is left with life-changing injuries.
By failing to respond appropriately to critical alarms or failing to put measures in place to keep staff as safe as possible, companies are risking the safety of their workforce, and as a result, significant fines could be imposed by the Health and Safety Executive (HSE). Not only can these fines be financially damaging, the subsequent reputational damage can also be severe. Furthermore, staff may be opposed to working in the same roles until the right safety procedures are put into place, putting additional strain on the organisation.
There is also a likelihood that organisations will be faced with wasted time, product and money if machine alerts are not dealt with in a timely manner.
For example, an alarm on a food production line could be alerting to faulty temperature regulation. If the alarm is responded to and resolved efficiently, the quantity of spoiled produce could be minimal. However, if there is a delay in acknowledging the alert and performing a subsequent fix, the fault could escalate and the amount of wasted food could rapidly increase. In a worst-case scenario, if the entire production line has to be stopped or the building has to be evacuated due to safety concerns, the knock-on effect in terms of time and money could be significant.
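To make the process concrete, here is a minimal sketch of acknowledge-or-escalate logic for a critical alarm. The response window, polling interval and escalation path are invented for illustration; a real system would be event-driven and tied into the site's alarm management platform.

```python
# Illustrative acknowledge-or-escalate logic for a critical alarm.
import time

RESPONSE_WINDOW_S = 120  # e.g. a temperature fault on a food production line

def raise_alarm(alarm_id: str, is_acknowledged) -> None:
    deadline = time.monotonic() + RESPONSE_WINDOW_S
    while time.monotonic() < deadline:
        if is_acknowledged(alarm_id):
            print(f"{alarm_id}: acknowledged, fix in progress")
            return
        time.sleep(5)  # poll the acknowledgement channel
    print(f"{alarm_id}: unacknowledged after {RESPONSE_WINDOW_S}s -> "
          "escalating to the shift supervisor")

# raise_alarm("LINE3-TEMP-HIGH", is_acknowledged=lambda _id: False)
```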
In any high-risk environment, incidents will occur but by deploying a combination of appropriate technology and processes, companies can effectively safeguard their operations and staff and mitigate risks as much as possible.
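As a rough illustration of what such a process can look like, the sketch below models a tiered escalation policy in Python. The tier names, timeout values, and notification mechanism are hypothetical placeholders; a real deployment would integrate with the site's alarm server or paging system.

```python
import time

# Hypothetical escalation tiers: (role to notify, seconds allowed to acknowledge)
ESCALATION_TIERS = [
    ("line operator", 60),
    ("shift supervisor", 120),
    ("site manager", 300),
]

def notify(role: str, alarm_id: str) -> None:
    # Placeholder: a real system would page, text, or call the on-duty contact
    print(f"Alerting {role} about alarm {alarm_id}")

def handle_alarm(alarm_id: str, acknowledged) -> bool:
    """Walk the escalation chain until someone acknowledges the alarm."""
    for role, timeout in ESCALATION_TIERS:
        notify(role, alarm_id)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if acknowledged():  # poll whatever records acknowledgements
                return True
            time.sleep(1)
    return False  # unacknowledged: trigger shutdown/evacuation procedures
```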
|
<urn:uuid:eaf3928c-bcca-4cc5-9033-205f6f78f905>
|
CC-MAIN-2022-40
|
https://www.computerweekly.com/microscope/opinion/What-are-the-risks-of-not-responding-to-a-critical-alarm-in-time
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00037.warc.gz
|
en
| 0.967164 | 651 | 2.9375 | 3 |
Collaborative robots (cobots) don't get sick, take their time getting tools, crash after lunch, or have bad days. Cobots meet daily production standards efficiently through laser-focused performance, which makes understanding and defining performance metrics all the more important.
Cobots working on the industrial floor can be deployed for a wide range of duties. Although the highly automated car manufacturing sector remains the most utilized industry for electro-mechanical machines, cobots and other forms of automation are now entering more sectors.
Currently, about 15% of businesses use AI, but 31% plan to add support for it over the next 12 months. In addition, the industrial robotics market is predicted to grow by 175% over the next decade.
At one time, traditional workers were afraid robotic applications would take over the workplace, and, perhaps, the world. However, cobots are working alongside their human counterparts in every sector of manufacturing — doing jobs that, in the past, resulted in permanent injuries or even death.
Robots used in manufacturing are subject to two ISO standards: ISO 10218-1 (Robots and Robotic Devices: Safety Requirements for Industrial Robots) and ISO 10218-2 (Robot Systems and Integration).
Cobots are installed with force limitations, rounded edges, and non-pinching joints. They are also lightweight, portable, and ideal for various tasks within a factory.
Service cobots can be used for information in public spaces, transporting goods, or providing security. Industrial cobots have several applications, including pick and place, packaging and palletizing, assembly, machine tending, surface finishing, quality testing, and inspection.
Cobots can be used all over the factory floor in a variety of production functions, including those listed below.
- Manual pick and place, one of the most repetitive tasks performed by human workers today (a minimal control-loop sketch follows this list).
- Machine tending, which demands workers stand for long hours in front of CNC machines, injection-molding machines, or other similar devices.
- Packaging and palletizing, a derivative of pick and place.
- Process tasks that require a tool to interact with a workpiece, including gluing, dispensing, or welding.
- Quality inspection, which involves full inspection of finished parts, high-resolution imaging of precision-machined parts, and part verification against CAD models.
- Finishing tasks, which, when performed by human operators, require a manual tool and large amounts of force that can cause injury.
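To make the pick-and-place item concrete, here is a minimal sketch of the control loop involved. The `cobot` object, its method names, and the coordinates are hypothetical; real cobots expose vendor-specific APIs, and force limits would be configured per the ISO standards above.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # metres
    y: float
    z: float

PICK = Pose(0.40, 0.10, 0.05)   # assumed tray location
PLACE = Pose(0.10, 0.35, 0.05)  # assumed bin location

def pick_and_place_cycle(cobot, parts: int) -> None:
    """Repeat the pick-and-place motion, stopping if a force limit trips."""
    for _ in range(parts):
        cobot.move_to(PICK.x, PICK.y, PICK.z + 0.10)  # approach from above
        cobot.move_to(PICK.x, PICK.y, PICK.z)
        cobot.grip()
        cobot.move_to(PLACE.x, PLACE.y, PLACE.z + 0.10)
        cobot.move_to(PLACE.x, PLACE.y, PLACE.z)
        cobot.release()
        if cobot.force_exceeded():  # collaborative force limit tripped
            cobot.stop()            # stop safely rather than push through
            break
```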
|
<urn:uuid:ca67105c-f2de-4b96-915a-9a34d1507819>
|
CC-MAIN-2022-40
|
https://www.missioncriticalmagazine.com/articles/93994-cobots-are-changing-manufacturing-forever
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00037.warc.gz
|
en
| 0.9439 | 505 | 3.171875 | 3 |
People who engage in technology-facilitated violence, abuse, and harassment sometimes install spyware on a targeted person's mobile phone. Known as 'stalkerware', this technology allows an operator to reach inside a target's phone to terrorize, control, and manipulate current and former partners. Two Citizen Lab reports pull back the curtain on this shadowy and underreported area of spyware.
“In theory, we found that there are laws in place to protect the victims of stalkerware,” research fellow Cynthia Khoo told the Toronto Star.
Empowering targets to know that they have legal protections is paramount given the increasing scale of the problem: an NPR survey of 72 domestic violence shelters in the United States found that 85% of domestic violence workers assisted victims whose abuser tracked them using GPS. While the technical and legal remedies outlined in these reports might provide important relief in the context of consumer spyware, the ongoing struggle to transcend patriarchal gender inequalities, misogyny, and corrosive societal norms around controlling, abusive, and violent behaviour directed at women, girls, non-binary persons, and children is an undertaking that requires critical and supportive communities at its core.
|
<urn:uuid:567682bc-30d1-476b-8b6c-f9c3dc43c0cb>
|
CC-MAIN-2022-40
|
https://citizenlab.ca/2019/06/toronto-star-legal-gaps-allow-cellphone-stalkerware-to-thrive-researchers-say/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00037.warc.gz
|
en
| 0.925464 | 309 | 2.53125 | 3 |
Cookie consent is a hot topic in light of recent data privacy legislation. However, there seems to be a lot of confusion around cookie consent and consent preferences. But before we dive into the difference, let's talk about the Cookie Law.
The Cookie Law, or the ePrivacy Directive, is a separate piece of legislation that works together with the GDPR but generally takes precedence regarding cookies. The Cookie Law was enacted in 2002 and covers electronic privacy regarding cookies and cookie banners.
What is cookie consent?
Under the Cookie Law and the GDPR, valid cookie consent must meet several requirements (a minimal code sketch of consent-gated cookies follows this list):
- Cookie consent must be based on a definitive affirmative action that could include clicking, scrolling, or another manual action by the user.
- You must disclose how and why you’re using cookies.
- You must gain consent before storing cookies on a user’s device.
- You must make it easy for visitors to refuse or withdraw consent.
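As a minimal sketch of what "consent before storing cookies" means in practice, the Flask-style handlers below only set a non-essential analytics cookie after an affirmative opt-in. Flask is a real library, but the cookie names, consent category, and overall flow are illustrative assumptions, not a complete compliance solution.

```python
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<html>...page with a consent banner...</html>")
    # Only set non-essential cookies once the visitor has opted in.
    if request.cookies.get("consent_analytics") == "granted":
        resp.set_cookie("analytics_id", "abc123", max_age=30 * 24 * 3600)
    return resp

@app.route("/consent", methods=["POST"])
def record_consent():
    # Called when the user clicks "Accept" in the banner (an affirmative action).
    choice = "granted" if request.form.get("analytics") == "on" else "denied"
    resp = make_response("", 204)
    resp.set_cookie("consent_analytics", choice, max_age=180 * 24 * 3600)
    return resp
```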
Preferences are not consent
A common misconception is that it's enough just to gain cookie consent. That's not the case. It's about more than just gaining consent: it's about what type of consent you're gaining and what you can use it for.
The following five questions, centered on the 'what', 'why', 'who', 'when' and 'where' of data privacy, are essential to ensuring your business doesn't fall foul of the many privacy regulations.
1. What data are you collecting?
The questions around personal data are more than just "What kind of info can we send?" or "Can we call them?" Both businesses and visitors need to know precisely what data has been collected at every touchpoint. And this info needs to be easily accessible.
2. Why are you collecting it?
Next, businesses need to show why the data was collected in the first place. Organizations need to be clear about why they’re collecting personal data and have justifiable lawful reasons for collecting and processing this data. This is an integral part of the new regulations because it pertains to the security of personal data.
3. Who is using the data?
The next component is being clear on exactly who is using the data. From the moment you’ve collected a visitor’s personal information, you need to know exactly who will access the data internally and externally with third parties. It’s good to remember that third parties will also be liable for penalties under the GDPR.
4. When does the consent expire?
Businesses will also need to record exactly when permissions were granted to use personal data. With GDPR, companies need to know how long they need to keep data for and prove this duration has been documented.
5. Where does the data come from?
Finally, in the context of where the data is processed, you should be able to trace each permission down to its source. This goes beyond blanket, channel-level permissions: it's about knowing exactly where the data came from and having proof you can use it. The sketch below shows how these five answers can be captured in a single consent record.
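This is a minimal sketch of such a record in Python; the field names and the default validity period are illustrative assumptions, not requirements from any specific regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    data_collected: list[str]   # what: e.g., ["email", "phone"]
    purpose: str                # why: the lawful basis for processing
    processors: list[str]       # who: internal teams and third parties
    granted_at: datetime        # when: the moment consent was captured
    source: str                 # where: the form, page, or channel of capture
    validity: timedelta = field(default=timedelta(days=365))  # assumed policy

    def expired(self, now: datetime) -> bool:
        return now > self.granted_at + self.validity
```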
An opportunity to deepen engagement with Consent preference management
This is an excellent opportunity to build and deepen your customers’ trust. A chance to improve how you capture consent, audit the quality of your data, and update any outdated policies.
Additionally, you can use this opportunity to connect with your visitors at a deeper level and create a value exchange from which both you and the customer benefit. Preferences versus consent: let's get data privacy right from the beginning.
With Adzapier's Consent Management Platform, you have access to three powerful tools with which you can:
- Increase opt-in rates while increasing transparency and respecting consumers’ privacy choices with Cookie Consent Management
- Collect and manage your online consumers’ consent and preferences with Consent Management Platform
- Quickly discover, disclose, and respond to DSAR requests with DSAR Management
|
<urn:uuid:b5c35604-3b76-4f3b-9599-91501e5394fb>
|
CC-MAIN-2022-40
|
https://adzapier.com/2022/03/08/cookie-consent-vs-consent-preferences/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00238.warc.gz
|
en
| 0.929387 | 849 | 2.890625 | 3 |
The recent explosion of industrial data, the emergence of advanced big data technologies, and inexpensive hardware and bandwidth all fuel a renewed interest in the field of Data Science. Now, more than ever, highly skilled data scientists are in demand by governments, businesses, and educational institutions to leverage superior technology to extract the last drop of insight from the vast piles of data generated every day.
Data Scientist Study Infographic
The infographic below examines a data scientist from various angles:
- The supply-demand ratio in the data scientist market over the next five years: Survey results illustrate market perception of this supply-demand ratio.
- The key sources of data science talent pool: The common debate between computer science vs. non-computer science vs. practical experience remains.
- Some obstacles to data science work in organizations: Common roadblocks are cited, including budgetary constraints, lack of technological infrastructure, and lack of skilled manpower.
- The probable academic backgrounds of data scientists: Here, data scientists are compared with their BI peers in an evaluative study of the probable academic degrees, years of study, and so on for each of these roles.
- Technology-aided role models of data scientists: The importance of cutting-edge technology in the life of a data scientist is emphasized here.
- Personality traits of successful data scientists: Some traits have been identified as more likely to be found in successful data scientists.
- The organizational role of data scientist: The typical position, roles, and responsibilities of data scientists in an organization are graphically discussed here.
|
<urn:uuid:85938ce1-999f-416a-83f7-a746fea963aa>
|
CC-MAIN-2022-40
|
https://resources.experfy.com/bigdata-cloud/data-scientist-study-infographic-emc/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00238.warc.gz
|
en
| 0.910548 | 314 | 3 | 3 |
Stronger security is always necessary, and sometimes the classic password is not enough. But fear not, other methods will help you a lot.
Access to a company's devices and user accounts should ideally involve the highest possible level of security, at the very least stronger security than access to a public forum on the web requires: long, random passwords protected with the most up-to-date and efficient procedures of the moment. But you should not rely only on passwords, no matter how strong they are, because a single authentication factor is tempting fate. The optimum is to add several authentication layers, increasing security through more factors.
Let’s see what this is all about.
What is authentication?
As you may already know, authentication is the act of giving credence or certainty to an assertion. Authentication proves that something is true, most commonly the identity of something or someone. As far as we are concerned, it is to prove that we are who we say we are to a given computer system since authentication is the verification process that follows identification.
Of course, authentication goes beyond IT and is used at all levels of society. Identity documents confirm this, which prove to the relevant entities that we are who we say we are. But in this case, we are referring specifically to electronic authentication.
Electronic authentication is the procedure of establishing confidence in the user identities presented to an information system. This step is essential and follows the identification step itself. It is the step that prevents unauthorized access to the system and avoids online identity theft, since it verifies whether the person is who he or she claims to be, thereby guaranteeing stronger security.
Authentication mechanisms work by requiring the user of a device or program to present one, two, or more pieces of evidence that give credence to the identity claimed. Once you provide the proof satisfactorily, you are granted access to the system.
Authentication evidence can be of various types. In the IT world, they are generally known as factors. Passwords, for example, are one of them and fall under the classification of knowledge factors, as you will see below.
Types of authentication factors for stronger security
Authentication factors are classified according to the type of evidence presented by the user attempting to access the system, which can be:
- Factors that are something the user knows.
- Factors that are something the user possesses.
- Factors that are something the user is.
- Factors that are somewhere the user is.
Let’s take a closer look at this classification.
Knowledge factors are the most widespread form of authentication. This form requires the user to demonstrate secret knowledge, such as a password or PIN, to achieve authentication.
Possession factors imply that the user has something that authenticates his identity and grants access to the system. Humanity has used this factor for a considerable part of its history, as evidenced by the historical use of a key to open a lock. In this case, the user possesses a "key", physical or otherwise, that opens the "lock" protecting access to the system.
Inherence factors involve something associated with the user, specifically a physical characteristic.
Location factors involve the physical location of the user, allowing access to the system only from a specific place, such as the office.
There are many electronic authentication methods, each belonging to one of the types presented above. Let us show you the most commonly used ones:
Security tokens
A token is a peripheral device with which the claimant proves his identity to a restricted electronic network, gaining access in the process. Tokens are, in effect, electronic keys.
Tokens can be physical or software hosted on a device, such as a computer or a cell phone. The most common examples are USB devices containing authentication information, which are connected to the computer to verify your identity.
PIN-based authentication and passwords
This is the knowledge authentication method par excellence. They are a secret combination of characters or numbers used to prove the user’s identity. There are also passphrases, sequences of words used for the same purpose. Of a similar nature are security or secret questions. The most common ones, such as “Where were you born?” are not the most reliable, as they are information that can be easily discovered with dedicated research.
In recent times, the vast majority of cyber attacks are directed at password-based authentication systems, so they are at greater risk than the rest.
Stronger security: Biometric authentication
It is the use of the user’s inherent physical characteristics for identity authentication. This method includes facial recognition, voice recognition, fingerprint recognition, and iris scanning. New technologies provide other biometric authentication methods, including behavioral characteristics, using, for example, the user’s typing speed or rhythm as a method of authentication.
It is currently considered the most secure authentication method, as these characteristics are unique to each individual.
One-time passwords
Through an SMS, email, or similar channel, the user receives a single-use password that authenticates his or her identity when entered into the system. It is not the most secure authentication method when delivered over SMS, as text messages can be intercepted.
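For illustration, here is a minimal time-based one-time password (TOTP) generator following RFC 6238, the scheme most authenticator apps use. The secret shown is a made-up example; a real deployment would provision a per-user secret and verify submitted codes with a small time-window tolerance.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared Base32 secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential
```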
The key to strong authentication and stronger security
You can see, then, that authentication goes beyond passwords: there are multiple ways to protect access to your company's devices and networks, and all of these authentication methods exist to provide stronger security. Still, it is not enough to use single-factor authentication mechanisms alone, especially if that factor only involves passwords.
The key to strong e-authentication is to go beyond single-factor authentication, which is the weakest type and therefore does not offer much protection. You must combine at least two factors, using multi-factor authentication, or MFA.
The chances of gaining unauthorized access to company devices or servers are reduced to almost zero when the system requires the agent to connect a token, enter a password, and pass facial recognition to gain access. MFA will give you stronger security, for sure.
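A minimal sketch of how an MFA gate composes independent factor checks is below. The password check is a real PBKDF2 comparison, while the possession and inherence checks are stubs standing in for token validation and biometric matching; all names and demo values are hypothetical.

```python
import hashlib
import secrets
from typing import Callable

def mfa_gate(checks: list[Callable[[], bool]]) -> bool:
    """Grant access only when every configured factor verifies."""
    return all(check() for check in checks)

# Knowledge factor: verify a password against a stored salted hash.
SALT = b"demo-salt"  # illustrative; real systems use a random per-user salt
STORED = hashlib.pbkdf2_hmac("sha256", b"correct horse", SALT, 100_000)

def password_ok(attempt: bytes = b"correct horse") -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, SALT, 100_000)
    return secrets.compare_digest(candidate, STORED)

# Possession and inherence factors: stubs for token and biometric checks.
def token_present() -> bool:
    return True  # e.g., a driver reporting the USB token is connected

def face_matches() -> bool:
    return True  # e.g., a biometric SDK returning a match above threshold

print(mfa_gate([password_ok, token_present, face_matches]))  # True
```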
|
<urn:uuid:f755d7b7-0041-4b8e-8d0b-df0ab13f76b1>
|
CC-MAIN-2022-40
|
https://demyo.com/stronger-security-authentication-beyond-passwords/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00238.warc.gz
|
en
| 0.934237 | 1,254 | 3.1875 | 3 |
The ALPR matcher uses the "Common and contiguous characters" technique to improve plate read accuracy rate (sometimes called “fuzzy matching”).
You can configure how the ALPR matcher handles common and contiguous characters by modifying the MatcherSettings.xml file. For more information, see MatcherSettings.xml file.
Related matcher techniques:
- OCR equivalence (see ALPR matcher technique: OCR equivalence)
- Number of character differences (see ALPR matcher technique: Number of character differences)
The following settings are available when configuring common and contiguous characters:
- Necessary common characters: The minimum number of characters that must be common to both the first and second plate read. The characters must also appear in the same order in the plate, but not necessarily in sequence.
- Necessary contiguous characters: The minimum length of the character sequence shared between the first and second plate read.
In overtime enforcement, there is an extra margin of error because the ALPR matcher is comparing a plate read against another plate read, not against a hotlist or permit list created by a person.
- OCR equivalence: The OCR equivalents B and 8 are considered the same character and count toward the common and contiguous character totals.
- Five common characters: Both reads have 5, A, B/8, C, and 3 in common, and they all appear in the same order. The "3" is not in sequence, but it respects the order.
- Four contiguous characters: Both reads have 5, A, B/8, and C in sequence.
Plate read 5ABC113 does not match with SA8CH3 (example 3) because there are two OCR equivalents in the second read (S/5 and B/8). You allowed for only one OCR equivalent.
Using common and contiguous characters helps reduce the margin of error involved when both first and second plate reads are coming from the Sharp.
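For intuition, here is a rough Python sketch of the matching logic. The OCR-equivalence table, thresholds, and sample reads are illustrative assumptions, and for simplicity it canonicalizes equivalent characters up front rather than enforcing a per-read limit on OCR equivalents as the product does.

```python
def canon(plate: str) -> str:
    """Map OCR-equivalent characters to one canonical form (assumed table)."""
    equiv = {"8": "B", "5": "S", "0": "O", "1": "I"}
    return "".join(equiv.get(c, c) for c in plate.upper())

def common_in_order(a: str, b: str) -> int:
    """Characters shared in the same order, not necessarily in sequence (LCS)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def longest_contiguous(a: str, b: str) -> int:
    """Longest run of characters shared in sequence (longest common substring)."""
    best = 0
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            if ca == cb:
                dp[i + 1][j + 1] = dp[i][j] + 1
                best = max(best, dp[i + 1][j + 1])
    return best

def reads_match(r1: str, r2: str, min_common: int = 5, min_contig: int = 4) -> bool:
    a, b = canon(r1), canon(r2)
    return common_in_order(a, b) >= min_common and longest_contiguous(a, b) >= min_contig

print(reads_match("5ABC13", "5A8C37"))  # True: 5 common in order, 4 contiguous
```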
|
<urn:uuid:8ade8676-6c29-4476-90f5-7d1629730d90>
|
CC-MAIN-2022-40
|
https://techdocs.genetec.com/r/en-US/Security-Center-Administrator-Guide-5.9/ALPR-matcher-technique-Common-and-contiguous-characters
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00238.warc.gz
|
en
| 0.899387 | 408 | 2.640625 | 3 |
Sometimes when people start throwing around the words software and firmware, things can get a little muddled and confusing. Software and firmware are two separate, but related things and some people aren’t sure how to differentiate the two. Today, we’re here to explain what software and firmware are while providing you with some information about the purpose of both.
What is Software?
Before we can dive into software, it is important to first understand the difference between software and hardware. Hardware encompasses items and components that are physical; prime examples include phones, computers, laptops, motherboards, USB sticks, memory cards and more. The easy way to identify hardware is to think of items you can physically touch. Software, on the other hand, is virtual information that can be installed and run on a system. Software works together with the hardware and can take the form of a program, an operating system, or a utility tool. A few examples illustrate this.
In the case of a software program, say there's a program you want to install on your computer (i.e., a photo editing program or a malware protection program). Your computer (the hardware) will go through a process to download the program information (the software) so that you can have that program on your computer and use it for its intended purpose. Operating systems aren't programs: they are, well, operating systems. However, you can download operating system software onto a system and run the OS on that system once the download is successful. Utility tools, likewise, can be run on your system but aren't programs; they include things like drivers, encryption, disk cleanup, file compression and screen savers.
In the past, it was common to download software programs from CDs; however, nowadays it is more common to download software via the Internet.
What is Firmware?
Moving on to firmware, it might get a little tricky. Like software, firmware is intangible: you cannot physically touch it because it's also virtual information. Firmware is software that has been programmed onto a hardware device (e.g., a BIOS) and stored within the flash ROM of that device. The primary role of firmware is to provide the instructions for communicating with the hardware to get it to work. Firmware is necessary to make hardware devices function properly because it acts as the liaison between the hardware and the operating system. An example of firmware in action, where we discussed the role of UEFI/BIOS firmware when you press the power button on a computer, can be found in a previous blog post. Firmware can be found in a multitude of things, even devices we interact with on a daily basis. Traffic lights have firmware in them. ATMs have firmware in them. TVs have firmware in them. Computers definitely have firmware in them (hello, UEFI/BIOS firmware? All computers have it!).
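As a small, concrete illustration, most Linux systems expose the platform firmware's identity through sysfs. The sketch below reads the standard DMI attributes; the paths are real, though not every machine populates all of them.

```python
from pathlib import Path

# Standard sysfs DMI attributes populated by the platform firmware on Linux.
DMI = Path("/sys/class/dmi/id")
for attr in ("bios_vendor", "bios_version", "bios_date"):
    f = DMI / attr
    if f.exists():
        print(f"{attr}: {f.read_text().strip()}")
```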
Overall, it’s safe to say that we would be lost without software and firmware. All these things work together and they definitely keep our technology running!
|
<urn:uuid:8988933e-2015-4fe5-957b-fb42186e7c66>
|
CC-MAIN-2022-40
|
https://www.ami.com/blog/2018/03/29/what-is-firmware-how-is-it-different-from-software/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00238.warc.gz
|
en
| 0.944995 | 651 | 3.390625 | 3 |