Although QR codes have been around for more than 25 years, their use in everyday life has increased dramatically since the pandemic began. But is it always safe to scan them? We hardly think twice about scanning a QR code in a restaurant to view a menu or pay for food. But scammers have begun to take advantage of our trust in QR codes. As a result, QR code phishing is a growing cybersecurity threat. In this article, we’ll tell you what it is, how it works, and how to protect yourself from QR code fraud.
What are QR codes?
A Quick Response code (QR code) is an image that can contain up to 7,089 numeric characters or 4,296 alphanumeric characters. Originally, QR codes were just tags for physical objects: in the 1990s, the Japanese car industry began using them to track vehicles and components in production. But because QR codes are machine-readable and can store information, they were later used to send data to smartphones.
Although the type of data contained in a QR code can vary, it is often just a link to a website. In iOS, for example, the Camera app automatically detects a QR code when you point the camera at it, and you will be prompted to open the linked URL in your default web browser. This is the first thing to remember about QR codes: they are usually nothing more than simple web links. And as we will see, this has profound cybersecurity implications.
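Because a QR code is typically just an encoded web link, it can be inspected programmatically before anyone opens it. Here is a minimal Python sketch using OpenCV’s built-in QR detector; the image file name and the “expected” domain are hypothetical placeholders, and real checks would be more thorough.

```python
# Decode a QR code from an image and sanity-check the embedded link.
# "qr_sample.png" and the expected domain are hypothetical placeholders.
import cv2
from urllib.parse import urlparse

img = cv2.imread("qr_sample.png")
data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)

if not data:
    print("No QR code found.")
else:
    print(f"QR code points to: {data}")
    url = urlparse(data)
    # Basic red flags: not HTTPS, or a domain other than the one expected.
    if url.scheme != "https":
        print("Warning: link is not HTTPS.")
    if not url.hostname or not url.hostname.endswith("example-restaurant.com"):
        print("Warning: domain does not match the expected organization.")
```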
What is QR code phishing?
Today, many tools can recognize and remove malicious links that lead to phishing sites or malware. However, most of them are not yet able to check QR codes for malicious content, so cybercriminals have started using QR codes more frequently in their schemes. QR code phishing is very similar to other forms of phishing: it is a social engineering attack designed to get people to hand over personal information, whether login credentials or financial details. In that sense, QR code phishing is nothing new; the difference is that it uses a QR code to lead the victim to a malicious website. Like any other phishing attack, its sole purpose is to get you to enter sensitive information, such as your social security number, bank login details, or email credentials.
Threats that QR codes can pose
So, it seems evident that we instinctively trust QR codes, but should we? To answer that question, we need to examine how QR codes can pose a threat. Research has revealed how hackers can manipulate QR codes to steal personal information, opening the way for organizations to be hacked. Organizations should therefore encourage their employees to be careful about whom they give personal information to, including double-checking that the web address a QR code sends them to matches what they expect.
Signing up for a job fair, entering a contest, or taking a survey might seem like legitimate reasons to share your personal information. However, double-checking the web address should help confirm its authenticity. If the web address doesn’t look like the organization’s website should, don’t trust it. A QR code can also send the user to a fake version of a mobile app store. Through this attack, it is possible to access the user’s phone, personal (or confidential corporate) messages, GPS location, and even camera. This can seriously threaten any business, risking company data and leaving it open to a devastating attack. Organizations should take an interest in their employees’ personal security by encouraging them to check the source of applications or downloads to prevent foul play.
The notorious QR code attacks
- In China, scammers were caught placing fake parking tickets, complete with QR codes for convenient payment by phone, on parked cars.
- In the Netherlands, fraudsters used a legitimate feature of a mobile banking app to scam bank customers with QR codes.
- In Texas, criminals pasted stickers with malicious QR codes onto city parking meters. This way, they tricked residents into entering credit card details on a phishing site.
With the rise of such attacks, there is a need to raise awareness and do more to keep people from falling for the attackers’ tricks.
How to protect yourself and your organization
It’s not too different from how we used to double-check emails and strange texts. However, we need to learn to be more discerning about QR codes.
- Don’t scan! Trust your instincts. If a code seems suspicious, don’t scan it. Underneath any legitimate QR code, there should be a printed URL to which the QR code refers. That way, you can type it in directly or find the site through a search engine. A missing URL should raise suspicion.
- Slow down. Take a second to consider the circumstances before you scan. Do you know who posted the QR code there? Can you be sure it hasn’t been tampered with? Is there even a need for a QR code in this situation?
- Check the URLs of the QR code carefully. As with any suspicious link, check the URL you’re being sent to before moving on. If it seems suspicious, misspelled, or doesn’t match the organization you’re trying to access, don’t open the link. For example, in the parking meter scam in Texas, part of the URL used was “passportlab.xyz”, which doesn’t look like an official government website.
- Look for signs of physical tampering. An easy way for attackers to gain your trust is to exploit a legitimate use of QR codes, such as in a parking lot. So look for signs of tampering, such as a sticker pasted over another code.
- Never download apps from QR codes. Attackers can easily clone and tamper with websites. Instead, always go to the official app store for your device to download the app.
- Do not make a payment using QR codes. Instead, use a (securely downloaded) proprietary app or search online for an official payment site.
- Enable multi-factor authentication (MFA). If an attack does reach one of your accounts (email or social media), MFA will prevent the attacker from getting in with a simple login and alert you to the suspicious attempt.
When it comes to QR codes, the best advice is always to use common sense. We have learned to think twice about the slightly odd emails, calls, and text messages we receive, realizing that they may have a hidden malicious purpose. Somehow, QR codes have escaped this extra scrutiny, and more and more people scan them without thinking twice. It’s time to change that. Scan safely.
In the wake of the pandemic, the education sector saw one of the most dramatic digital transformations as schools and universities worldwide were forced to move overnight to remote learning. This resulted in a growing cybersecurity footprint seized by attackers, especially targeting the Domain Name System (DNS) which plays a crucial role in routing internal and external traffic. While almost all organizations have been vulnerable, K-12 schools have been shown to be particularly at risk.
Entering 2021, schools began to adopt hybrid learning systems incorporating remote e-learning and in-school learning, making resiliency of DNS and Dynamic Host Configuration Protocol (DHCP) services vital for students and staff to connect to the network and access applications. Unless institutions prepare and work to strengthen their DNS security, remote learning environments will remain at risk from attackers, meaning private information and productivity will be seriously threatened.
According to the 2021 Global DNS Threat Report, published by International Data Corp., the education sector remains highly vulnerable to these attacks. Of all the organizations surveyed, 76% were victims of DNS attacks, and they reported suffering six attacks on average. The overall average cost per attack was $851,000.
DNS attacks threaten the education sector in several major ways, including the following.
- Financial, reputation, and productivity loss — A successful DNS attack can result in significant financial impacts for universities and permanently damage their reputation (41% experienced a compromised website). DNS attacks caused app downtime for 51% of organizations, and cloud service downtime for 35% of them.
- Data theft — Cybercriminals may attempt to access sensitive student and staff data, including names and addresses, in order to sell it to a third party. The report showed that one in four organizations were victims of data theft via DNS.
- IP theft and espionage — This is especially the case for research institutions developing new solutions in the fields of computer science as well as medical or natural sciences.
- Ransomware — Attackers may also try to disrupt or halt traffic on a university’s network in order to hurt productivity or to extort money from the university.
Phishing Attacks and Ransomware
The survey data demonstrates that organizations in the education sector were susceptible to a variety of DNS attacks. Phishing was the most reported attack type, with 34% of education institutions having experienced phishing. Similarly, distributed denial of service (DDoS) attacks, which may cause widespread disruption of an organization’s network, were a common occurrence as well (17%).
Education is particularly vulnerable to both DNS attacks and data theft. The size of possible data breaches can be seen in the attack on the Baltimore School District in late 2020. The Baltimore County school system was shut down by a ransomware attack that hit all of its network systems and closed schools for several days for its roughly 111,000 students. It wasn’t until weeks later that school officials could finally regain access to vital files they feared were lost, including student transcripts, recorded grades, and special education program records.
Unfortunately, many countermeasures being taken to mitigate the impact of DNS attacks are not suitable: 49% shut down the DNS server, 37% shut down part of the network infrastructure, and 37% disabled affected applications. These measures may stop an attack in process, but they are harsh and can have a serious effect on output as well as on the general learning experience — especially if students cannot access e-learning tools by logging into the network remotely. On average, it took educational institutions the longest time to mitigate an attack (7.6 hours). Therefore, universities and schools would benefit from a purpose-built DNS security solution offering adaptive countermeasures that keep services running while an attack is being mitigated.
Fortunately, DNS is ideally placed to be the first line of defense as it has unique early visibility over most traffic. Numerous effective steps strengthen security measures and help mitigate DNS attacks once they occur, as outlined below.
- IT Hygiene — IT departments in the education sector should implement internal threat intelligence to protect data and services. Using real-time DNS analytics helps detect and thwart even advanced attacks and is particularly necessary for catching data exfiltration via DNS, which traditional security components, such as firewalls, are unable to detect (a minimal sketch of one such detection heuristic follows this list). This is why 35% of organizations see monitoring and analysis of DNS traffic as their top priority for preventing data theft, compared to securing endpoints (22%) or adding more firewalls (23%).
- Automation — According to the survey, less than half of education institutions have implemented automation of network security policy management.
- “Zero-trust” strategies — Education organizations should also rely more on zero-trust strategies, strengthening verification before granting access to resources.
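As flagged in the IT hygiene point above, one simple heuristic used in real-time DNS analytics is to flag query names whose subdomain labels are unusually long or random-looking, since exfiltration tools often encode stolen data into DNS queries. The sketch below illustrates the idea; the thresholds and sample queries are illustrative only.

```python
# Flag DNS query names that look like encoded data rather than hostnames.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_exfiltration(query: str) -> bool:
    labels = query.rstrip(".").split(".")
    subdomain = labels[0] if len(labels) > 2 else ""
    too_long = len(subdomain) > 30                  # oversized label
    too_random = len(subdomain) > 10 and shannon_entropy(subdomain) > 3.5
    return too_long or too_random

queries = [
    "www.university.edu.",                                  # ordinary lookup
    "a3f9c2e8b1d4a7f0c6e2d9b8a1f4c7e0.badhost.example.",    # encoded payload
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_exfiltration(q) else "ok")
```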
On top of the huge uptake of bring your own device (BYOD) and cloud, COVID-19 has had a dramatic impact on education networks, and as organizations continue with hybrid systems a secure digital infrastructure is more important than ever. School districts and universities need to ensure their data and privacy are protected, so DNS security has become a critical component of their new digital education reality.
This article was contributed by EfficientIP.
This is the 2nd article of the series “Coding Deep Learning for Beginners”. You can view the 1st article here.
Getting into Machine Learning isn’t easy. I really want to take good care of the reader, which is why, from time to time, you can expect articles focused only on theory. Because long articles discourage learning, I will keep them at 5–8 minutes of reading time. I cannot put everything (code snippets, math, terminology) into a single article, because that would mean reducing the explanations of essential concepts. I believe that dividing the knowledge into smaller parts and spreading it across more articles will make the learning process smooth, with no need for stops and detours.
Machine Learning model
Let’s start with the definition of “model”, which will appear quite often from now on. Names like Linear Regression, Logistic Regression, Decision Trees, etc. are just the names of algorithms: theoretical concepts that describe what to do in order to achieve a specific effect. A model is a mathematical formula that results from implementing a Machine Learning algorithm (in the case of these articles, in code). It has measurable parameters that can be used for prediction, and it can be trained by modifying those parameters to achieve better results. In short, models are representations of what a Machine Learning system has learned from the training data.
Diagram visualising difference between Machine Learning Algorithm and Machine Learning Model.
Branches of Machine Learning
The three most regularly listed categories of Machine Learning are:
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
Supervised Learning
The group of algorithms that require a dataset consisting of example input-output pairs. Each pair consists of a data sample used to make a prediction and the expected outcome, called a label. The word “supervised” comes from the fact that labels need to be assigned to the data by a human supervisor.
In the training process, samples are iteratively fed to the model. For every sample, the model uses the current state of its parameters and returns a prediction. The prediction is compared to the label, and the difference is called the error. The error is feedback for the model about what went wrong and how to update itself in order to decrease the error in future predictions. This means that the model will change the values of its parameters according to the algorithm on which it is based.
Diagram demonstrating how Supervised Learning works.
Supervised Learning models try to find parameter values that allow them to perform well on historical data. They are then used for making predictions on unknown data that was not part of the training dataset.
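To make the training loop described above concrete, here is a minimal sketch of a supervised model: a one-variable linear model whose two parameters are repeatedly nudged to reduce the error between predictions and labels. The toy dataset is made up for illustration.

```python
# One-variable linear regression trained by gradient descent (toy data).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])   # input samples
y = np.array([2.1, 3.9, 6.2, 8.1])   # labels (expected outcomes)

w, b = 0.0, 0.0                      # measurable model parameters
lr = 0.01                            # learning rate

for epoch in range(2000):
    pred = w * X + b                 # model returns a prediction
    error = pred - y                 # prediction compared to label
    # Feedback step: update parameters to decrease future error.
    w -= lr * (2 * error * X).mean()
    b -= lr * (2 * error).mean()

print(f"learned model: y = {w:.2f} * x + {b:.2f}")
```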
There are two main problems that can be solved with Supervised Learning:
- Classification — process of assigning category to input data sample. Example usages: predicting whether a person is ill or not, detecting fraudulent transactions, face classifier.
- Regression – process of predicting a continuous, numerical value for input data sample. Example usages: assessing the house price, forecasting grocery store food demand, temperature forecasting.
Example of Classification and Regression models
Unsupervised Learning
The group of algorithms that try to draw inferences from non-labeled data (without reference to known or labeled outcomes). In Unsupervised Learning, there are no correct answers. Models based on these algorithms can be used for discovering unknown data patterns and the structure of the data itself.
Example of the Unsupervised Learning concept. All data is fed to the model, and it produces an output on its own, based on the similarity between samples and the algorithm used to create the model.
The most common applications of Unsupervised Learning are:
- Pattern recognition and data clustering – Process of dividing and grouping similar data samples together. Groups are usually called clusters. Example usages: segmentation of supermarkets, user base segmentation, signal denoising.
- Reducing data dimensionality – Data dimension is the number of features needed to describe a data sample. Dimensionality reduction is a process of compressing features into so-called principal values, which convey similar information concisely. By selecting only a few components, the number of features is reduced and a small part of the data is lost in the process. Example usages: speeding up other Machine Learning algorithms by reducing the number of calculations, finding a group of the most reliable features in data.
Dividing data from various countries around the world into three clusters representing Developed, Developing and Underdeveloped nations (source: Tableau blog).
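As a small illustration of clustering, the sketch below groups nine “countries” into three clusters from two made-up indicators, without ever being given labels. The numbers are invented for the example, and scikit-learn’s KMeans is just one convenient implementation.

```python
# Group unlabeled "countries" into three clusters from two invented
# indicators: GDP per capita (k$) and life expectancy (years).
import numpy as np
from sklearn.cluster import KMeans

data = np.array([
    [55, 81], [48, 83], [52, 82],   # high income, long lives
    [12, 72], [9, 70], [15, 74],    # middle income
    [2, 58], [3, 61], [1, 55],      # low income
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
print(labels)  # the model grouped the samples without ever seeing labels
```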
Reinforcement Learning
The branch of Machine Learning algorithms which produces so-called agents. An agent’s role is slightly different from that of a classic model: it receives information from the environment and reacts to it by performing an action. The information is fed to the agent in the form of numerical data, called a state, which is stored and then used for choosing the right action. As a result, the agent receives a reward that can be either positive or negative. The reward is feedback that the agent can use to update its parameters.
Training an agent is a process of trial and error. It needs to find itself in various situations and get punished every time it takes the wrong action in order to learn. The goal of optimisation can be set in many ways depending on the Reinforcement Learning approach, e.g. based on a Value Function, Policy Gradient or Environment Model.
Interaction between Agent and Environment.
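The agent/environment loop above can be illustrated with a minimal tabular Q-learning sketch: an agent in a toy five-state corridor learns, purely from rewards and punishments, that walking right reaches the goal. The environment and hyperparameters are made up for illustration.

```python
# Tabular Q-learning on a toy five-state corridor: the agent must learn,
# from rewards alone, that moving right reaches the goal state.
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # the agent's stored knowledge
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != n_states - 1:
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)   # explore
        else:
            action = int(np.argmax(Q[state]))       # exploit best known action
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else -0.01
        # Reward feedback updates the agent's parameters (the Q-table).
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))  # learned policy: should prefer action 1 (right)
```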
There is a broad group of Reinforcement Learning applications. The majority of them are inventions that are regularly mentioned among the most innovative accomplishments of AI.
Example of solutions where Reinforcement Learning is used. From self-driving cars through various games such as Go, Chess, Poker or computer ones — Dota or Starcraft, to manufacturing.
Simulating the movement of 3D models is a complicated task. Such models need to interact with other models in a given environment. Reinforcement Learning is becoming more actively used as a tool for solving this problem, as the results it produces look very convincing to the human eye, and the algorithms are capable of automatically adjusting to the rules describing the environment.
Main video accompanying the SIGGRAPH 2018 paper: “DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skill”. https://youtu.be/vppFvq2quQ0
And that’s it. In the next article, I will explain the basics and implementation of the Linear Regression algorithm, one of the basic Supervised Learning algorithms.
Malware Attacks: Definition and Best Practices
What is a Malware Attack?
A malware attack is when malicious software is used to infect a system in order to cause damage or steal data. Malware, also known as malicious software, has many variants created to damage, exploit, or disable computers, laptops, mobile devices, Internet of Things (IoT) devices, and networks.
Intent and approaches for malware attacks can vary. Typical malware objectives, common malware delivery methods, types of malware attacks, and more are covered below. Since the first malware was discovered, individuals and organizations have been under attack with hundreds of thousands of malware variants.
The First Malware Attack to Be Discovered
The Creeper virus (also referred to as the Creeper worm) was discovered on March 16, 1971.
It is not known who created the first self-replicating program in the world, but it is clear that the first worm, the Creeper, was created by BBN engineer Robert (Bob) H. Thomas in Cambridge, Massachusetts. The Creeper was an experimental self-replicating program that was not intended to cause damage, but to demonstrate a mobile application.
The Creeper was written in PDP-10 assembly and ran on the Tenex operating system (Tenex is the OS that saw the first email programs, SNDMSG and READMAIL, in addition to the use of the "@" symbol that appears in our email addresses even today), and used the ARPANET (predecessor of the current Internet) to infect DEC PDP-10 computers that were running Tenex. The Creeper caused infected systems to display the message "I'M THE CREEPER: CATCH ME IF YOU CAN.”
The Creeper would start to print a file, but then stop, find another Tenex system, open a connection, pick itself up and transfer to the other machine (along with its external state, files, etc.), and then start running on the new machine, displaying the message. The program rarely, if ever, actually replicated itself; instead, it jumped from one system to another, attempting to remove itself from previous systems as it propagated forward.
The Creeper did not install multiple instances of itself on several targets. It just moved around a network.
It is uncertain how much damage (if any) the Creeper caused. Most sources say the worm was little more than an annoyance. Some sources claim that the Creeper replicated so many times that it crowded out other programs, but the extent of the damage is unknown.
The Creeper revealed the key problem with such worm programs—it is difficult to control the worm.
Malware is created and propagated by a wide range of people. Malware attacks are unleashed by a variety of vandals, blackmailers, and other criminals who seek to cause disruption, get attention, make a statement, or steal money. Individuals, organizations, and even governments have been known to perpetrate malware attacks.
Typical Malware Objectives
Though malware objectives vary in scope and goal, propagators of a malware attack usually have one or more of the following motivations:
- Make or steal money
- Commit espionage, targeting secret or sensitive information
- Create disruption
- Bypass access controls
- Harm computers, devices, or networks
- Activism, also known as hacktivism
- Gray market “business,” such as:
- Distributing unrequested advertisements and promotions
- Offering fake software utilities
- Enticing users into accessing chargeable content online
Common Malware Delivery Methods
Since its inception in the early 1970s, a number of malware vectors have been identified. Common malware delivery methods include the following.
Email Attachments
Users are tricked into clicking a file attachment, such as a PDF, ZIP, document, spreadsheet, or other email attachment that includes the malware. Depending on the value of the payload, the people behind the attack may conduct extensive research on their targets to make the email look more legitimate by including specific details related to the users.
Once opened, the malware is on the system and ready for action. However, malware activation is not always the same. In some cases, the malware attack begins immediately. Other malware attacks may lie in wait for days, weeks, or even months.
Malicious URLs
Malware attacks often start with a malicious URL, which is a link created with the purpose of drawing users into a malware attack. Once the link is clicked, all types of malware could be downloaded and begin to compromise systems and networks.
Malicious URLs show up in emails, on websites, and even in social media posts and ads. In the case of emails, messages are often worded with a sense of urgency or intrigue to entice users to click.
Remote Desktop Tools
A number of remote desktop tools allow users to connect to another computer over a network connection. While these tools are helpful, remote desktop tools are a popular attack vector for malware, because of their open ports.
Malware attacks can include a port-scan phase where the Internet is trolled for computers with exposed ports. Once an exposed port is identified, the attacker attempts to get into the targeted system by exploiting security vulnerabilities or using brute force attacks.
Once inside, the malware attack ensues and often includes a step to disable antivirus software or leave a backdoor open for future malware attacks.
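From a defensive angle, the same port-scan step can be used to audit your own exposure. The sketch below checks a host you control for a few commonly targeted remote-access ports; the host and port list are illustrative, and you should only probe machines you are authorized to test.

```python
# Check a host you control for commonly targeted remote-access ports.
# Only probe machines you are authorized to test.
import socket

host = "127.0.0.1"   # your own machine
common_ports = {22: "SSH", 3389: "RDP", 5900: "VNC"}

for port, name in common_ports.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        is_open = s.connect_ex((host, port)) == 0
    status = "OPEN - review its exposure" if is_open else "closed"
    print(f"{name} (port {port}): {status}")
```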
Malicious Advertising (Malvertising)
Malvertising uses malicious online advertisements to spread malware and launch malware attacks. Perpetrators buy ad space on legitimate online advertising networks and post ads infected with malware.
Because so many ads are submitted to ad networks, it is challenging to identify malicious ads. In addition, most ads rotate, which makes it hard to pin down the malvertisement even when the page has been flagged.
Ads are creatively produced to look legitimate and entice users to click. Once the user clicks, the malware scans the system for information about its software, operating system, browser details, and more to identify a vulnerability that can be exploited.
Drive-by Download Attack
Viruses can be hidden in the HTML of compromised websites, so when a user visits the site, the malware is automatically downloaded when the page is loaded. The fact that a download happens without the user’s knowledge makes this a particularly menacing malware attack vector.
Drive-by downloads are made possible by hosting the malicious content on the attacker’s site or exploiting vulnerabilities to inject malware into legitimate websites.
Self-Propagating Malware
Self-propagating malware can move on its own from computer to computer or network to network. Older strains of malware were only able to infect the single system where they were installed. More advanced malware variants are able to spread across an entire organization.
Free and Pirated Software
Rarely does anyone get something for nothing, so it is not surprising that free or pirated software often comes with adware or hidden malware. Even if it does not contain malware, this software is vulnerable because unlicensed software does not receive patches to address vulnerabilities that can be exploited in malware attacks.
Removable Media
USB drives and external hard drives are common vectors for malware attacks. These devices are commonly infected with malware which loads onto users’ systems when they are connected.
Other Malware Delivery Methods
- Operating system and application vulnerabilities provide security loopholes and backdoors for malware attacks.
- Social engineering is used to launch malware attacks as users can be enticed into clicking malicious URLs or downloading infected files.
- Connected smart devices and IoT devices serve as vectors for malware attacks.
Types of Malware Attacks
There are many types of malware attacks with multiplying variants on each. Below is an overview of the most common types of malware attacks.
Adware
Adware is ostensibly benign, but is still malware and does have a dark side.
The basic functions of adware programs are displaying online advertisements, redirecting search requests to advertising websites, and collecting information about users. Adware can also change internet browser settings, default browser and search settings, and the home page.
Adware takes advantage of browser vulnerabilities to infect systems. One danger of adware is that a user profile can be created from the data collected and used for targeted malware attacks.
Bots and Botnets
A bot, or robot, is a program designed to automate tasks. A benign bot is used for legitimate purposes, such as indexing search engines and searching for information to provide real-time updates. In the context of malware attacks, malicious bots deliver spam for phishing attempts and turn computers into zombies.
The bot infiltrates a computer and then automatically receives and executes instructions given from a centralized command and control server. Bots have similar traits as other malware. Like worms, they can self-replicate and can replicate based on users’ actions, like viruses and Trojans.
Used in large numbers, bots can take over an entire network, turning it into a botnet that can be used to launch large-scale attacks. The most common botnet malware attack is a distributed denial of service (DDoS) attack, which can render systems, networks, or an entire domain unavailable.
Computer Viruses
One of the most common types of malware, the computer virus, is a piece of code that inserts itself into an application. The virus executes when the application is run, so it relies on an unsuspecting user or an automated process to start its work.
Once the virus executes, the malware attack begins with the virus spreading across computers and networks.
Viruses can also be used to steal information or money, create botnets, and launch other malware.
Fileless Malware
Fileless malware is injected into a running process, taking advantage of tools that are built into the operating system to carry out attacks. Because there is not an executable file, there is no signature for antivirus software to detect. Fileless malware attacks are also referred to as living-off-the-land attacks.
Keyloggers
A keystroke logger, or keylogger, is a type of spyware that monitors user activity by recording every keystroke entry made on a computer. Like bots, keyloggers have legitimate uses, such as monitoring employee activity and keeping track of children’s online behavior. Keyloggers are also used for malware attacks and capturing sensitive information, such as usernames, passwords, answers to security questions, and account numbers.
Metamorphic Malware
A more advanced type of software used for malware attacks is metamorphic malware. Metamorphic malware is capable of changing its code and signature patterns with each iteration, which means that each newly propagated version of itself no longer matches the previous one. In essence, the malware becomes a completely different piece of software.
While metamorphic malware is more difficult to develop, it has proven to be an effective way for cybercriminals to disguise their malicious code to avoid detection from antimalware and antivirus programs and hide identifiable signatures.
Mobile Malware
Malware attacks have been honed to take advantage of the unique vulnerabilities in mobile devices. Smartphones, tablets, and IoT devices are equally or more susceptible to the malware attacks that are perpetrated on computers and networks.
Polymorphic Malware
Unlike metamorphic malware, which completely changes itself, polymorphic software alters its appearance and does so rapidly, as frequently as every 15-20 seconds. By altering its appearance, polymorphic malware can evade detection by antivirus and antimalware programs. This attribute is associated with many types of malware attacks.
Ransomware
Also known as scareware, ransomware is used for malware attacks designed for extortion. Ransomware attacks use encryption to lock down networks and prevent access to data until a ransom is paid. There is no way to decrypt data without the encryption key, so ransomware has been a lucrative malware attack type for cybercriminals.
Remote Administration Tools (RATs)
Remote administration tools give attackers total control over systems. Legitimate tools have been repurposed for malware attacks; remote administration tools are difficult to detect. Remote administration tools usually do not appear in lists of running programs or tasks, and are often mistaken for legitimate programs.
Rootkits
A rootkit is a set of malicious attack software tools that gives an unauthorized user remote control of a victim’s computer with full administrative privileges. Rootkits are usually injected into applications, kernels, hypervisors, or firmware.
Once installed, rootkits can be used to execute files and change system configurations as well as conceal other malware. While a rootkit cannot self-propagate or replicate, it is difficult to detect and remove.
Spyware
Spyware is designed to monitor and capture information about online activities. Unlike adware, spyware can also capture sensitive information.
While not all spyware is malicious, it does invade privacy and captures data that can be used to support other malware attacks. Spyware spreads by exploiting software vulnerabilities, injecting itself into legitimate software, or via Trojans.
Trojans
A Trojan is a malware program that enters systems in disguise and tricks users into installing it. Once installed, the Trojan gives unauthorized access to the affected system, allowing cybercriminals to introduce additional malware and launch malware attacks.
Worms
Like viruses, worms are infectious and able to replicate themselves. An important difference is that, unlike a virus, worms do not require users’ actions to activate. Worms can self-propagate and move to other systems as soon as the breach occurs.
Usually taking advantage of operating system vulnerabilities, worms can quickly spread across networks. Worms can be programmed to delete files, encrypt data, steal information, install backdoors, and create botnets.
Worm vs. Virus vs. Trojan
| Worm | Virus | Trojan |
|---|---|---|
| Tunnels into systems, then moves through networks | Attaches to legitimate programs and runs when the affected program is executed | Disguises itself as a legitimate program, enticing users to download it |
| Self-replicates | Self-replicates | Cannot replicate itself |
| Can be controlled remotely | Cannot be controlled remotely | Can be controlled remotely |
| Fast spread rate compared to viruses and Trojans | Moderate spread rate compared to worms and Trojans | Slow spread rate compared to worms and viruses |
Malware Attack Techniques
There are a finite number of techniques employed by attackers when crafting malware attacks. And, though there are many different types of malware attacks, they follow a common pattern.
Malware attacks also include a number of tactics to evade detection:
- Wrappers to hide malware
- Variations, such as those created by polymorphic and metamorphic malware
- Packers to bypass prevention measures
- Limiting a malware attack to a specific machine or configuration
Five Malware Attack Techniques
- 1. Establish objectives and assess targets.
The end goal drives the decision about what type of malware attack to use.
- 2. Determine the best exploit to employ.
Vulnerabilities are identified to find the optimal point of entry.
- 3. Enter the system.
Malware is delivered onto the system.
- 4. Execute the malware attack.
With the malware in place, the attack starts.
- 5. Malware replication and propagation.
The malware replicates itself and moves laterally to exploit other systems.
Best Practices for Preventing Malware Attacks
- Keep frontline defenses up to date.
- Create application, system, and appliance security policies.
- Use strong usernames and passwords.
- Enable two-factor authentication.
- Regularly install patches and immediately patch critical vulnerabilities.
- Document backup / restore plans and policies.
- Educate users about cybersecurity and malware attack defensive protocols.
- Train users on who and what to trust.
- Teach users how not to fall for phishing or other social engineering tactics.
- Explain the remediation steps in the event of a malware attack.
- Partition networks.
- Remove unnecessary browser plugins.
- Use security tools.
- Employ security analytics.
- Monitor network traffic for anomalies.
- Use advanced analytics to see across networks.
- Develop and use prevention and remediation programs and processes.
- Deploy a zero-trust security framework to enforce least-privilege protocols.
- Be wary of email.
- Only open attachments from trusted senders.
- Look at the sender’s address to confirm that it is legitimate.
- If an attachment includes macros, do not open it.
- Use email filtering to block malicious attack software.
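As a small illustration of the email-filtering point above, the sketch below flags attachment names whose extensions are commonly abused to deliver malware, including macro-enabled Office formats. The extension list is illustrative; real filters also inspect content, not just file names.

```python
# Flag attachment names whose extensions are commonly abused by malware.
# The extension list is illustrative, not exhaustive.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm", ".pptm"}

def is_risky_attachment(filename: str) -> bool:
    return any(filename.lower().endswith(ext) for ext in RISKY_EXTENSIONS)

for name in ["invoice.pdf", "report.docm", "update.exe"]:
    print(name, "->", "quarantine" if is_risky_attachment(name) else "allow")
```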
Employees are a critical line of defense against malware attacks. Training raises awareness about how habits and behaviors can impact overall security. Employee training about malware attacks also provides guidance on best practices for being part of the overall security posture.
When preparing employee training around malware attacks, consider these important points to ensure the efficacy of training:
- Include malware-specific topics in cybersecurity training.
- Make cybersecurity training mandatory for all employees.
- Require regular cybersecurity refresher training and include the latest malware trends.
- Focus training on how employees can be part of the solution, as defenders against malware attacks.
Topics that should be part of employee training include:
- Email scams
What they are and how they work, with a focus on phishing and spear phishing
- Malicious URLs
What they do and how to identify them
- Malware
What types of malware exist, how to avoid it, and what to do if employees accidentally engage with malware
- Password security
What password protocols exist and instructions for how to adhere to them
- Removable media
The dangers of removable media and how to safely share sensitive files
- Social networking dangers
Explain what dangers exist and how to avoid scams
- Physical security
What to keep in mind, from what information is accessible on desks and laptops to policies regarding open doors and unknown visitors
How to Detect a Malware Attack
Malware attacks, while often stealthy, may generate signs of their presence. Clues that can indicate a malware attack include:
- Computers slowing down
- System resource usage appearing abnormally high
- A computer’s fan running at full speed
- A proliferation of annoying pop-up ads
- Unanticipated loss of disk space
- An increase in a system’s internet activity
- Changes to browser settings
- Antivirus / anti-malware that stops working
- Loss of access to files
- A higher volume of network activity
- Abnormal behavior across systems
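Two of these signs, abnormally high CPU usage and a spike in outbound traffic, can be watched for with a few lines of Python using the psutil library. The thresholds below are illustrative; real monitoring tools first establish a baseline of normal behavior.

```python
# Watch for sustained high CPU load and bursts of outbound traffic.
# Thresholds are illustrative; real tools baseline normal behavior first.
import psutil

baseline = psutil.net_io_counters().bytes_sent
for _ in range(12):                        # observe for about one minute
    cpu = psutil.cpu_percent(interval=5)   # average CPU over 5 seconds
    sent = psutil.net_io_counters().bytes_sent
    delta, baseline = sent - baseline, sent
    if cpu > 90:
        print(f"Alert: sustained high CPU usage ({cpu:.0f}%)")
    if delta > 50 * 1024 * 1024:           # more than 50 MB sent in 5 seconds
        print(f"Alert: unusual outbound traffic ({delta / 1e6:.1f} MB)")
```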
Examples of Malware Attacks
Adware Used for Malware Attacks
Usually bundled with free software that overwhelms browsers with ads.
Starts discreetly, but ultimately takes full control of the target browser.
Developed in the Netherlands, it was one of the first big adware programs to spread worldwide. It was instrumental in a number of large-scale botnet attacks.
Chinese malware that infects target browsers and turns them into zombies.
Removed ads from sites and replaced them with their ads from the network.
Botnet Used for Malware Attacks
- EarthLink Spammer
Created to send phishing emails.
Attacks IoT devices and exploits unpatched, legacy vulnerabilities.
Uses command and control servers around the world and sends hundreds of spam messages.
Infects digital smart devices and turns them into botnets.
Computer Viruses Used for Malware Attacks
Used an enticing email message to get users to open an attached infected document, which granted the virus access, then emailed the same message and attachment to all email contacts.
- Code Red
Replicated itself, taking over computer resources, then opened targeted machines for remote access.
Manipulated vulnerabilities to slow down systems, then caused crashes and made it difficult to power down.
Fileless Malware Used for Attacks
Referred to as an information stealer that is capable of running undetected, it takes sensitive information from an affected user (e.g., account credentials, keystrokes, data) and sends it to the attacker.
- Olympic Vision
Gains access to business email accounts, then steals information, including the computer name, Windows product keys, keystrokes, network information, clipboard text, and data saved in browsers, exfiltrating it and taking screenshots. It also gains access to messaging clients, FTP clients, and email clients.
- SQL Slammer
Exploited vulnerabilities in SQL servers.
Took advantage of the same vulnerability as WannaCry, but it was fileless malware.
Spyware Used for Malware Attacks
- CoolWebSearch (CWS)
Installs via malicious HTML applications, then hijacks Web searches, the home page, and other settings.
Targeted business and government leaders who used hotel Wi-Fi networks.
- Internet Optimizer
Hijacks error pages and redirects users to their webpage.
Monitors requested webpages and data entered into forms, then sends related pop-up ads.
Ransomware Used for Malware Attacks
For more on ransomware, see the Ultimate Guide to Ransomware.
Rootkit Software Used for Malware Attacks
Infects systems when users download a fake VPN app, then acts like a human to trick behavioral analysis software.
A kernel-mode rootkit that downloaded malware in the background and made affected systems part of its botnet.
Monitors network traffic, captures screenshots and audio from computers, and logs keyboard activity.
Trojans Used for Malware Attacks
Considered to be the world’s most dangerous malware until it was stopped in 2021, it acted as a door opener for computer systems around the globe, then sold that access to cybercriminals.
This is a slang term for malware that is used by governmental entities to surreptitiously extract information from targeted systems.
A rooting Trojan that gains access to sensitive areas in the operating system and installs spam apps.
Also known as Zbot, Zeus is malware software that targets devices’ operating systems and can grant access to third parties.
Worms Used for Malware Attacks
Reset account lockout settings and blocked antivirus software, then locked users out and used ransomware to extort payments.
With its “LOVE-LETTER-FOR-YOU” attachment and “ILOVEYOU” subject line arriving in an email from a friend, ILOVEYOU became the first global computer virus pandemic, and it remains one of the farthest-reaching worms ever. Once activated, ILOVEYOU generated millions of messages that crippled mail systems and overwrote millions of files on computers across the world.
Regarded as one of the fastest-spreading and most destructive computer viruses of all time. At one point, the MyDoom worm generated up to a quarter of all emails sent worldwide. MyDoom scraped email addresses from infected computers, spread to the contacts, then began sending a new version of itself as a malicious attachment.
Unlike any other virus or worm that came before, Stuxnet wreaked havoc on, and destroyed, the physical equipment controlled by the computers it infected, rather than confining itself to the computer or jumping across networks to other computers.
Take Precautions Against Malware Attacks
The many objectives, methods, techniques, and types of malware attacks generate fear in most organizations; the financial costs, lost productivity, and other problems that are created are quite staggering. Businesses should stay informed and be vigilant against malware attacks, with recovery plans ready to execute if an attack does occur.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, with millions of users worldwide.
Last Updated: 3rd August, 2021
When people think about IoT devices, many often think of those that fill their homes. Smart lights, ovens, TVs, etc. But there’s a whole other type of IoT device inside the home that parents may not be as cognizant of – children’s toys. In 2018, smartwatches, smart teddy bears, and more are all in kids’ hands. And though parents are happy to purchase the next hot item for their children, they sometimes aren’t fully aware of how these devices can impact their child’s personal security. IoT has expanded to children, and it’s parents who need to understand how these toys affect their family and what they can do to keep their children protected from an IoT-based cyberthreat.
Now, add IoT into the mix. The reason people are commonly adopting IoT devices is for one reason – convenience. And that’s the same reason these devices have gotten into children’s hands as well. They’re convenient, engaging, easy-to-use toys, some of which are even used to help educate kids.
But this adoption has changed children’s online security. Now, instead of just limiting their device usage and screen time, parents have to start thinking about the types of threats that can emerge from their child’s interaction with IoT devices. For example, smartwatches have been used to track and record kids’ physical location. And children’s data is often recorded with these devices, which means their data could be potentially leveraged for malicious reasons if a cybercriminal breaches the organization behind a specific connected product or app. The FBI has even previously cautioned that these smart toys can be compromised by hackers.
Keeping connected kids safe
Fortunately, there are many things parents can do to keep their connected kids safe. First off, do the homework. Before buying any connected toy or device for a kid, parents should look up the manufacturer first and see if they have security top of mind. If the device has had any issues with security in the past, it’s best to avoid purchasing it. Additionally, always read the fine print. Terms and conditions should outline how and when a company accesses a kid’s data. When buying a connected device or signing them up for an online service/app, always read the terms and conditions carefully in order to remain fully aware of the extent and impact of a kid’s online presence and use of connected devices.
Mind you, these IoT toys must connect to a home Wi-Fi network in order to run. If they’re vulnerable, they could expose a family’s home network as a result. Since it can be challenging to lock down all the IoT devices in a home, utilize a solution like McAfee Secure Home Platform to provide protection at the router-level. Also, parents can keep an eye on their kid’s online interactions by leveraging a parental control solution like McAfee Safe Family. They can know what their kids are up to, guard them from harm, and limit their screen time by setting rules and time limits for apps and websites.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Typing on a touchscreen can be a cumbersome task, being generally pretty slow and more often than not fraught with typos. But all this is soon about to change.
IBM has patented some new technology that can reposition the keyboard keys based on a user’s typing style and size of the fingers. The patent filing by the company shows a keyboard with some keys placed higher or fatter than others. This "adapts the keyboard to the user's unique typing motion paths", the patent reads.
Users would first need to configure the keyboard according to their typing style and finger size. The intelligent technology will then reposition and re-size the keys in a manner which is most suitable for the users.
Changing the size of the keys based on the anatomy of the user will allow users to type even faster and more accurately, making the touchscreen typing experience much more convenient than before.
The technology is somewhat similar to the keyboard algorithm used by Apple in the iPhone. That algorithm figures out which letter the user is likely to type next and increases its touch area. However, with Apple’s approach the visible size of the keys on the screen remains the same, whereas IBM’s technology changes the actual size of the keys.
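Neither IBM’s nor Apple’s actual implementation is public in code form, but the prediction idea can be sketched with a toy model: scale each key’s effective hit area by how likely that letter is to follow the previous one. The bigram table and weighting below are entirely hypothetical.

```python
# Toy model: scale a key's effective hit area by the (made-up) probability
# that the letter follows the previously typed one.
BIGRAM_NEXT = {
    "q": {"u": 0.95},
    "t": {"h": 0.35, "e": 0.15, "i": 0.10},
    "h": {"e": 0.40, "a": 0.15, "i": 0.10},
}

BASE_AREA = 1.0  # nominal key hit area

def key_area(prev_letter: str, key: str) -> float:
    """Enlarge the hit area of keys that are likely to be typed next."""
    prob = BIGRAM_NEXT.get(prev_letter, {}).get(key, 0.0)
    return BASE_AREA * (1.0 + prob)  # up to roughly 2x for very likely keys

print(key_area("q", "u"))  # 1.95 -> "u" becomes much easier to hit after "q"
print(key_area("q", "z"))  # 1.0  -> unlikely keys keep the base area
```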
Artworks stored on DNA data storage will be fired into space in 2022. The pieces, by three members of the all-female Beyond Earth collective, are responses to human population growth, consumption and degradation, and the preservation of Earth’s biodiversity.
The works are digitised and converted from binary data to the DNA bases represented by the letters A, T, G and C. The encoded DNA sequences are synthesised with California startup Twist Bioscience’s silicon-based platform and preserved in a specialised capsule.
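The binary-to-DNA conversion can be sketched in a few lines: each base carries two bits, so one byte maps to four bases. The particular 2-bit-per-base mapping below is a common textbook convention, not necessarily the scheme Twist Bioscience uses.

```python
# Map each pair of bits to one DNA base (2 bits per base, 4 bases per byte).
# This particular mapping is a textbook convention, not Twist's actual scheme.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"art")
print(strand)                    # "CGACCTAGCTCA"
assert decode(strand) == b"art"  # the mapping is lossless and reversible
```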
Beyond Earth says that DNA is nature’s oldest and most resilient data storage method. No energy or maintenance is required to preserve it, it is ultra-dense and compact, and it lasts hundreds of thousands of years, making it the ultimate time capsule for any digitised artwork.
Beyond Earth’s mission is to explore the frontiers of art, space, and biology through space-bound artworks. To Space, From Earth will endure the test of time, it says, and serve as an important record of human history and the biosphere.
Of course, once in space the artworks will likely constitute write-once-read-never storage – unless they are somehow retrieved and read by our alien overlords.
Multi-User MIMO (MU-MIMO) and Single-User MIMO (SU-MIMO) are two antenna technologies used in modern wireless networks, including 4G and 5G mobile networks. While both MU-MIMO and SU-MIMO are part of the overall Multiple Input Multiple Output (MIMO) technology, there are key differences between the two that we will cover in this post.
MU-MIMO uses radio communication layers from multiple cellular network antennas to support multiple mobile devices simultaneously; SU-MIMO uses radio communication layers from multiple antennas to support a single device at a time. Both MU-MIMO and SU-MIMO are used in 4G and 5G mobile networks.
There seems to be a general misconception around Multi-User MIMO as some people consider it just to be a characteristic of the Massive MIMO antenna technology, which is used by 5G New Radio networks. However, multi-User is just a type of MIMO and can exist in any MIMO variant, including Massive MIMO in 5G and the regular MIMO in 4G LTE. For context, multi-user MIMO is also used by other modern wireless networks, including WiFi6.
Different communication layers in MU-MIMO and SU-MIMO
Multiple Input Multiple Output (MIMO) technology in 4G LTE and 5G networks is based on the principles of spatial multiplexing, also known as space-division multiplexing (SDM). SDM uses a number of antenna elements built into an antenna panel in such a way that they are physically separated in space. With multiple antenna elements built into the antenna panels of the transmitter and receiver, the communication between the base station and the mobile phone takes place over multiple layers. The antenna elements can therefore communicate multiple data streams in parallel between the transmitter and the receiver while efficiently reusing the same time and frequency resources. For example, the original LTE uses a MIMO configuration of 4 x 4 for the downlink, which means there are four layers of communication in the downlink, i.e. from the base station to the mobile phone.

Since these layers can carry separate streams of data, the network can decide how to utilise the available data rate depending on what a particular user device requires. If the network decides to offer a higher data rate to a single user at a time (e.g. for watching a 4K video), it may use Single-User MIMO to allocate all the available layers to one user device at a given time. However, if the network decides to accommodate multiple simultaneous data sessions, it may use Multi-User MIMO to share the available data rate among multiple users.
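A rough way to see why layers matter is the idealized Shannon-capacity view, in which each spatial layer adds a full parallel channel. The sketch below compares giving four layers to one user (SU-MIMO) with giving one layer each to four users (MU-MIMO); the bandwidth and SNR figures are illustrative, and real links fall short of this ideal.

```python
# Idealized Shannon-capacity view of spatial multiplexing: each layer is
# treated as a full parallel channel. Bandwidth and SNR are illustrative.
import math

def capacity_mbps(layers: int, bandwidth_hz: float, snr_linear: float) -> float:
    return layers * bandwidth_hz * math.log2(1 + snr_linear) / 1e6

bw = 20e6               # a 20 MHz carrier
snr = 10 ** (20 / 10)   # 20 dB signal-to-noise ratio

# SU-MIMO: all four layers to one user; MU-MIMO: one layer each to four users.
print(f"SU-MIMO, 1 user x 4 layers: {capacity_mbps(4, bw, snr):.0f} Mbps")
print(f"MU-MIMO, 4 users x 1 layer: {capacity_mbps(1, bw, snr):.0f} Mbps each")
```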
MU-MIMO and SU-MIMO in 4G and 5G base stations
Like any MIMO technology, multi-user and single-user MIMO need to be built into the network and the user device for the system to use them. On the network side, as part of the mobile radio network, the Multi-User MIMO (MU-MIMO) and Single User MIMO (SU-MIMO) capabilities are built into the cellular base station (cell tower). In 4G LTE networks, the cellular base station is generally the eNodeB or eNB, whereas, in 5G NR networks, the base station is called the gNodeB (gNB). In a 5G network deployment where a 5G core network is used for accommodating both 5G and 4G devices, the base station is called next-generation evolved node B or ng-eNB. The antenna panels within the transmitter and the receiver of eNB, gNB and ng-eNB need to support MU-MIMO and SU-MIMO for the network to accommodate single or multiple simultaneous users. The same is required on the mobile phone side also.
Multi-User MIMO in 5G Massive MIMO
The multi-user aspect of MIMO often becomes a topic of discussion in 5G NR networks that support Massive MIMO. Multi-user support is one of the key characteristics of the Massive MIMO technology, which uses a vast number of antenna elements within a single antenna panel. Massive MIMO panels can consist of tens or even hundreds of antenna elements for the transmitter and receiver. The principle behind multi-user support in 5G Massive MIMO is to improve network capacity by efficiently using the antenna elements to accommodate a large number of simultaneous users. So instead of creating one overall wide signal beam to serve an individual user at any given time, the 5G base station antennas can create multiple narrower signal beams that are targeted at individual users.

It is also worth noting that, unlike 4G LTE networks, which have a maximum of eight transmission layers (in the LTE Advanced and LTE Advanced Pro MIMO configurations), 5G NR networks can potentially have many more layers of communication. However, that doesn’t mean that in a Single-User MIMO scenario in 5G NR networks all of the Massive MIMO antenna elements will be assigned to a single user device. SU-MIMO and MU-MIMO support are required on the user device too, and the device has limitations on the number of layers it can simultaneously support. For example, a 64 x 64 Massive MIMO configuration in 5G can theoretically support 64 layers of transmission between a base station and user devices. But in real life, due to practical limitations on a user device’s antenna panel, the 64 antenna elements in a base station transmitter are split across various user devices, with a current maximum of 16 communication layers for one device. This is aligned with the principle of multi-user support in 5G, which is to improve network capacity by supporting multiple simultaneous sessions.
In MU-MIMO (Multi-User MIMO), the multiple layers of communication that result from multiple antenna elements of a MIMO system can simultaneously support multiple user devices. On the other hand, in SU-MIMO (Single User MIMO), the multiple layers of communication that result due to multiple antenna elements in a MIMO system can only support one user device at a time. Both Multi-User MIMO (MU-MIMO) and Single User MIMO (SU-MIMO) are types of antenna technologies employed by 4G and 5G networks. MU-MIMO and SU-MIMO are based on the underlying MIMO technology that has been part of 4G networks from the beginning.
Here are some helpful downloads
Thank you for reading this post. I hope it helped you in developing a better understanding of cellular networks. Sometimes, we need extra support, especially when preparing for a new job, studying a new topic, or buying a new phone. Whatever you are trying to do, here are some downloads that can help you:
Students & fresh graduates: If you are just starting, the complexity of the cellular industry can be a bit overwhelming. But don’t worry, I have created this FREE ebook so you can familiarise yourself with the basics like 3G, 4G etc. As a next step, check out the latest edition of the same ebook with more details on 4G & 5G networks with diagrams. You can then read Mobile Networks Made Easy, which explains the network nodes, e.g., BTS, MSC, GGSN etc.
Professionals: If you are an experienced professional but new to mobile communications, it may seem hard to compete with someone who has a decade of experience in the cellular industry. But not everyone who works in this industry is always up to date on the bigger picture and the challenges considering how quickly the industry evolves. The bigger picture comes from experience, which is why I’ve carefully put together a few slides to get you started in no time. So if you work in sales, marketing, product, project or any other area of business where you need a high-level view, Introduction to Mobile Communications can give you a quick start. Also, here are some templates to help you prepare your own slides on the product overview and product roadmap.
All websites, computers, and connected devices communicate with each other using IP addresses. Since IP addresses are difficult to remember, each is assigned a domain name that's usually easy to remember and type into the browser search bar. For instance, if Google had the IPv4 address 18.104.22.168, it would be much easier to just type the domain name Google.com instead. A service that maps domain names to IP addresses and allows users to access a website or target server by its domain name is called a DNS service.
AWS Route 53 is a DNS service that connects Internet traffic to the appropriate servers hosting the requested Web application. Amazon takes its offering beyond traditional DNS management, which merely registers website domains and directs user requests to the hosting infrastructure. The subscription-based AWS service allows users to register domain names, apply routing policies, perform infrastructure health checks, and manage configurations without coding, using the AWS Management Console. Unlike traditional DNS management services, Amazon Route 53, together with a range of AWS services, enables scalable, flexible, secure, and manageable traffic routing.
AWS Route 53 takes its name from port 53, which handles DNS queries over both TCP and UDP; the term Route may signify routing, or perhaps the popular highway naming convention. Route 53 is an authoritative DNS service, meaning it holds the definitive mapping of domain names to IP addresses.
Here’s a brief description of how the AWS Route 53 service works for routing traffic between end-users and the hosted Web apps:
- The domain name is first registered with AWS Route 53, which is then configured to route Internet traffic to the servers hosting the domain name. The servers can sit in the AWS public cloud or in private cloud infrastructure.
- End-users enter the domain name or the complete URL into the browser search bar.
- The ISP routes the request to a DNS resolver, a tool that converts the domain name into its IP address.
- The DNS resolver then forwards the user request to a DNS root name server, which is then directed to its Top Level Domain (TLD) server and ultimately, to AWS Route 53.
- The Route 53 name server returns the IP address of the domain name to the DNS resolver.
- Now that the DNS resolver has the required IP address, it can forward the user request to the appropriate server hosting the content as per the configurations of the AWS Route 53 service.
- AWS Route 53 also checks the health of backend servers. The service feature called the DNS Failover checks the endpoints for availability. If the endpoint is deemed unhealthy, Route 53 will route traffic to another healthy endpoint. An alarm will be triggered using the AWS CloudWatch functionality to inform the specified recipient regarding the necessary actions.
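To make the configuration side of this flow concrete, here is a minimal sketch using the boto3 Python SDK that creates a hosted zone and an A record so queries for a domain resolve to a web server's IP. The domain name, caller reference, and IP address are hypothetical placeholders, not values from any real deployment:

```python
import boto3

route53 = boto3.client("route53")

# Create a public hosted zone for a (hypothetical) domain.
zone = route53.create_hosted_zone(
    Name="example.com",
    CallerReference="example-zone-0001",  # must be unique per request
)
zone_id = zone["HostedZone"]["Id"]

# Add an A record so www.example.com resolves to the hosting server's IP.
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={
        "Comment": "Point www.example.com at the hosting server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```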
Here’s a brief description of the current AWS Route 53 Features:
- Resolver: DNS resolution between local networks and VPC can be performed using the Route 53 Resolver. Users can forward DNS queries from the local network to a Route 53 Resolver and apply conditional configurations to forward DNS queries from AWS instances to a local network. AWS Route 53 supports both IPv4 and IPv6 formats.
- Traffic Flow: Intelligent traffic routing based on key parameters including proximity, health of endpoints and latency, among others.
- Geo DNS and Latency Based Routing: Reduce latency and improve end-user experience by routing traffic from servers closest to end-users.
- Private DNS for Amazon VPC: Configure Route 53 to respond to DNS queries within private hosted VPC zones. As a result, the DNS resolution data is not exposed to the public networks.
- Health Checks, Monitoring and Failover: Route 53 directs internet traffic to healthy target instances as per the specified configurations. In the event of an outage, the health-checking agents will route traffic to healthy endpoints. The health check feature generates CloudWatch metrics that can further trigger AWS Lambda functions to perform appropriate corrective actions.
- Domain Registration: The scalable DNS management service allows users to transfer management of existing domains or register new domain names to AWS Route 53. This feature consolidates management and billing associated with delivering Web hosted services.
- S3 and CloudFront Zone Apex Support: Create custom SSL certificates without proprietary code or complicated configurations. Zone apex support allows Route 53 to answer requests for a root domain such as example.com in the same way as requests for the full www.example.com form, without incurring a performance penalty, since no additional proxy server is required to access the backend servers.
- Amazon ELB Integration: AWS Elastic Load Balancing capability allows the traffic load to be distributed between multiple AWS target instances to maximize service availability and performance. AWS ELB allows users to increase the fault tolerance of their Web services to healthy target instances within AWS and on-premise infrastructure resources.
- Weighted Round Robin: A service for developers to configure how often a given DNS response is returned. This capability is useful for service testing purposes as well as for balancing traffic between target instances (a short sketch of a weighted record set follows this list).
- Management Console: A simple and intuitive management console allows users to view resources and perform operational tasks. The management console is also offered as a mobile app. Users can further manage Route 53 controls such as the DNS record modification permission using the AWS Identity and Access Management service.
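As a sketch of the Weighted Round Robin feature described above, the following change batch (reusing the `route53` client and `zone_id` from the earlier sketch, with the same hypothetical names and documentation IP addresses) splits traffic roughly 70/30 between two endpoints. Weights are relative values, not percentages:

```python
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "primary",  # distinguishes weighted records
                "Weight": 70,                # ~70% of responses
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Weight": 30,                # ~30% of responses
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.20"}],
            },
        },
    ]},
)
```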
Amazon Route 53 capabilities of policy-based routing, health check and monitoring, support for bi-directional query resolution for hybrid cloud environments, and integration with an exhaustive set of AWS services give it a leading edge over its competition. Routing policies such as Multi-Value Routing and Weighted Routing give users more control and management capability over their internet traffic. Route 53 is also designed to work with a range of AWS services necessary to run apps hosted on the AWS infrastructure. The close integration of services allows users to perform changes to their architecture and scale resources to accommodate increasing Internet traffic volume without significant DNS resolution, configuration and management requirements.
New Training: Planning for Data Operations on Amazon S3
In this 6-video skill, CBT Nuggets trainer Bart Castle covers the common Amazon S3 data operations of creating buckets and writing and reading objects. Gain an understanding of how to perform data operations using the Management console, AWS CLI, and SDKs. Watch this new AWS training.
Watch the full course: Working with Amazon S3
This training includes:
1.1 hours of training
You’ll learn these topics in this skill:
Getting Started with S3 Data Operations
Bucket and Object Naming Rules
Creating and Working with Buckets
Copying, Deleting, and Syncing Objects
What is an S3 Bucket?
An S3 bucket is a container for your S3 data objects. You can look at buckets and objects as being similar in concept to folders and files in a file system. Each bucket in S3 has a globally unique name. This means that all AWS accounts, regardless of Region, use the same namespace.
You create buckets within a specific AWS Region. Typically, you will create it in a Region that is geographically near to you or your end-users, to limit latency and minimize cost. In certain situations, you may also have to do this to fulfill regulatory requirements. Once you upload an object into a bucket, that object will remain in this Region unless you explicitly transfer it to another.
You can create buckets and upload objects into them using the Amazon S3 API or the S3 console. You grant public access to buckets and their objects through a combination of bucket policies and ACLs (access control lists).
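As a minimal sketch of these operations with the boto3 Python SDK (the bucket name and Region below are hypothetical, and bucket names must be globally unique):

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
bucket = "example-unique-bucket-name-12345"  # hypothetical, globally unique

# Create the bucket in a specific Region.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Write an object into the bucket...
s3.put_object(Bucket=bucket, Key="reports/hello.txt", Body=b"Hello, S3!")

# ...and read it back.
obj = s3.get_object(Bucket=bucket, Key="reports/hello.txt")
print(obj["Body"].read().decode())
```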
Every day the world becomes more technological, and with this rapid development of technology there is no doubt that the protection of information is a must. As everything moves online, we leave footprints in this tech world and share and exchange more and more information, including private and sensitive information.
Whether we share it unconsciously or give such information willingly for a service that we need, we are all concerned about where this personal data goes and how it is maintained. Since sensitive information may be involved, and knowing that cyber threats are always lurking and ready to strike, organizations need to convince their customers that they have taken all the necessary steps to protect their information.
There is no debate about whether an organization should consider implementing information security and privacy practices within its systems. The question is, which information security and privacy practices are the best for an organization to follow?
ISO standards provide a strong starting point for organizations that want to implement the best practices within their management systems and prove their excellence and commitment to offering secured services and products.
For an organization to operate efficiently and effectively, it should ensure that it follows industry trends and is in line with what the market is offering. Hence, as information technology is a core foundation of almost any organization nowadays, adopting the best practices of ISO/IEC 27001 and ISO/IEC 27701 helps ensure a secure infrastructure that protects information assets against the risk of loss, damage, or other threats.
ISO/IEC 27701 and its relation to ISO/IEC 27001
Organizations that implement an Information Security Management System (ISMS) based on an internationally recognized standard such as ISO/IEC 27001 ensure the confidentiality, integrity, and availability of information, and convince interested parties that the risks to their processes have been assessed and are adequately managed, among other benefits.
However, although ISO/IEC 27001 assures interested parties that their information is secure, the privacy of information has been a matter of discussion these past few years.
Thus, in August 2019, ISO published the first international standard that deals with privacy information management, ISO/IEC 27701, which provides requirements on how to implement a Privacy Information Management System (PIMS) and helps organizations that identify themselves as Personally Identifiable Information (PII) Controllers and/or PII Processors, regardless of their type, size, or the country in which they operate.
Organizations that implement a PIMS based on ISO/IEC 27701 requirements assure third parties that they take all the necessary steps to properly review, evaluate, and maintain the privacy of information.
“To explain it from another perspective, ISO/IEC 27001 relates to the way an organization keeps data accurate, available, and accessible only to approved persons, while ISO/IEC 27701 relates to the way an organization collects personal data and prevents unauthorized use or disclosure.”
– Oludare Ogunkoya, MSECB Auditor for both, the ISO/IEC 27001:2013 and ISO/IEC 27701:2019 standard.
Therefore, while ISO/IEC 27001 addressed the issue of information security and helps organizations to protect their information assets, ISO/IEC 27701 focuses specifically on the issues of privacy information.
6 reasons to integrate ISO/IEC 27701 to ISO/IEC 27001
In this article we discussed the relationship between these two standards, but why is it important for organizations that have an ISO/IEC 27001 certification to get certified with ISO/IEC 27701 as well?
Let us start by naming a few reasons:
- ISO/IEC 27701 – Privacy Information Management System (PIMS) is not a standalone standard but an extension of ISO/IEC 27001.
- ISO/IEC 27701 cannot be certified as a separate/standalone management system.
- ISO/IEC 27701 helps to continually improve the ISMS by giving more emphasis to the protection of Privacy Information.
- ISO/IEC 27701 provides more details on the term “information security”, mentioned in ISO/IEC 27001.
- The privacy and the protection of personal data mentioned in ISO/IEC 27001, have a further extended scope in ISO/IEC 27701 and include the protection of privacy as potentially affected by the controlling/processing of PII.
- ISO/IEC 27701 helps in ensuring that an organization has effectively designed and managed an ISMS.
How do ISO/IEC 27701 and ISO/IEC 27001 help organizations meet legislation and regulations?
The debate around the protection of PII has led many countries to create legislation and regulations that organizations should follow, for instance, GDPR, CCPA, the NY SHIELD Act, etc. All these regulatory and legal requirements help organizations ensure the protection of PII; moreover, holding an ISO/IEC 27701 certification helps organizations demonstrate that they operate in accordance with regulatory requirements as well.
As explained in our Q&A session: “The ISO/IEC 27701 has a very detailed and clear mapping of GDPR clauses, therefore, when the standard is implemented with GDPR as a primary focal point, it ensures that all the clauses of GDPR have been taken into consideration. Thus, organizations can demonstrate alignment and governance to the GDPR requirements, though they should not claim certification to GDPR.”
Integrating ISO/IEC 27701 into your existing ISO/IEC 27001 will help your organization become compliant with data privacy regimes while increasing transparency of the process and procedures. In this way, you will ensure that you maintain the integrity of information to your customers and other interested parties, as this will build more customer trust and increase customer satisfaction.
MSECB is here to help you
Receiving an internationally recognized certification from a globally renowned certification body, such as MSECB, has proved to have a multidimensional impact on previously certified organizations, and ultimately has increased the market share and recognition of those organizations.
Our certification process is separated into two stages:
- During the Stage 1 Audit, MSECB would conduct a review of the ISMS/PIMS to verify whether the client is ready for the Stage 2 Audit.
- After the Stage 1 is completed successfully, the Stage 2 Audit will be conducted. The Stage 2 Audit is a more in-depth audit to verify whether the client has met all the requirements of the standard.
Upon verifying that your organization is in conformity with the requirements of the ISO/IEC 27001 and ISO/IEC 27701, the certifications are granted by MSECB. The certifications are then maintained through scheduled annual surveillance audits conducted by MSECB, with the recertification audit performed on a triennial basis.
Furthermore, considering that ISO/IEC 27701 is an extension to the ISO/IEC 27001, the audit and certification process for PIMS can be initiated in any given cycle, whether it is initial, any of the surveillances, recertification or as a scope extension audit.
Start your ISO/IEC 27701 audit and certification today by getting a Free Quote.
According to vulnerability researchers at Cyble, over 9,000 VNC (virtual network computing) endpoints are exposed on the internet without authentication.
VNC (virtual network computing) is a platform-independent system that can help users connect to systems that require monitoring and adjustments.
VNC grants users control over a remote computer via the RFB (remote framebuffer) protocol over a network connection.
According to the researchers, the most exposed endpoints are in China and Sweden. Other countries that made it to the top 5 list are the United States, Spain and Brazil. Most attempts to access VNC servers came from the Netherlands, Russia and the United States.
These vulnerable endpoints could serve as entry points for unauthorized users, including threat actors with malicious targets.
Exposed VNC servers are widely targeted by attackers. Using its cyber intelligence tools to monitor port 5900, the default VNC port, Cyble researchers discovered over six million access requests within a month.
To protect VNC, administrators are advised never to expose servers directly to the internet. If they need to be remotely accessible, they must be placed behind a VPN.
Administrators should also set a password on VNC instances to restrict access to the VNC servers.
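As a hedged illustration of how a defender might audit their own hosts for exposed VNC, the sketch below connects to TCP port 5900 and reads the RFB version banner that VNC servers send on connect; any response indicates a listening VNC service that should be reviewed. The target address is a documentation placeholder; only scan hosts you own or are authorized to test:

```python
import socket
from typing import Optional

def check_vnc_banner(host: str, port: int = 5900,
                     timeout: float = 3.0) -> Optional[str]:
    """Return the RFB banner if a VNC server answers on the port, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            banner = sock.recv(12)  # e.g. b"RFB 003.008\n"
            if banner.startswith(b"RFB"):
                return banner.decode(errors="replace").strip()
    except OSError:
        pass
    return None

# Audit one of your own hosts (placeholder address).
result = check_vnc_banner("192.0.2.10")
print(result or "No VNC service detected")
```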
The sources for this piece include an article in BleepingComputer.
Access privileges are integral to presenting users with their views of data. Users can view only those elements and menu options for which they have a View permission.
In general, it is efficient to define access privileges for groups, then assign users to the groups, and finally customize the access privileges for individual users, if necessary.
When determining access permissions, the server first checks to see if the element has permissions set for a user or group. Three types of permission are possible:
Positive: Grants the user permissions for an object
Null: Permissions have no effect on user access for an object
Deny: Permissions override the inheritance of a granted permission
Individual user permissions always override the privileges of the groups to which the user belongs. If a group is denied access to an object, but a user who is a group member is granted access, that user can access the object. Conversely, if the group can access an object, but a group member is denied access, that user cannot access the object. If a user holds a null permission and is a member of two or more groups with conflicting permissions, deny permissions take precedence.
Access privileges can be granted for specific elements at any level of the element hierarchy. For example, a user can have access to view a server that is connected to the network, but can be denied access to any other network components. Access control can be assigned within each of the element hierarchies.
Perform the following steps to assign access privileges:
Create groups, but do not assign users to them yet.
Go through the element hierarchy and assign access privileges to different groups.
Assign users to groups.
Assign different access privileges for specific elements to individual users in groups.
By default, the privileges assigned to higher levels of a hierarchy are automatically inherited by the lower levels of the hierarchy. However, it is possible to set different permissions on a lower-level element and have those permissions flow down the hierarchy.
If there is no defined permission for a requested element, then security processing moves up the hierarchy until it locates a defined permission. In ascending the hierarchy, the first permission granting or denying permission takes precedence. However if a user is a member of two or more groups with conflicting permissions at the same level in the element hierarchy, the Deny permissions take precedence.
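The resolution rules above (explicit permissions first, individual overriding group, Deny winning among conflicting group permissions at the same level, and ascent of the hierarchy when nothing is defined) can be sketched as a small model. This is an illustrative approximation, not the product's actual implementation; the element and group structures below are hypothetical:

```python
from typing import Dict, List, Optional

# permissions[element][principal] -> "positive" or "deny"; absent means null.
def resolve(element: str, user: str, groups: List[str],
            permissions: Dict[str, Dict[str, str]],
            parent: Dict[str, Optional[str]]) -> bool:
    node: Optional[str] = element
    while node is not None:
        perms = permissions.get(node, {})
        # Individual user permissions always override group permissions.
        if user in perms:
            return perms[user] == "positive"
        # Among conflicting group permissions at one level, Deny wins.
        group_perms = {perms[g] for g in groups if g in perms}
        if "deny" in group_perms:
            return False
        if "positive" in group_perms:
            return True
        # No defined permission here: ascend to the parent element.
        node = parent.get(node)
    return False  # nothing defined anywhere in the hierarchy

# Example: the "ops" group can view server1, but member "alice" is denied.
permissions = {"server1": {"ops": "positive", "alice": "deny"}}
parent = {"server1": "network", "network": None}
print(resolve("server1", "alice", ["ops"], permissions, parent))  # False
print(resolve("server1", "bob", ["ops"], permissions, parent))    # True
```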
By the end of the year, the U.S. is going to have two record-breaking supercomputers. We learned of the first a few weeks back, with the announcement of IBM's speed demon, Summit, whose peak performance of 200 petaflops makes it the planet's fastest supercomputer, leaving the former number one, China's Sunway TaihuLight at 93 petaflops, in the dust. Then early this week Hewlett Packard Enterprise (HPE) and the Department of Energy (DOE) announced they'll also have a supercomputer called Astra up and running, possibly as early as the end of this summer but definitely by the end of the year.
This system will be used by the National Nuclear Security Administration to run advanced modeling and simulation workloads to address issues such as national security, energy and science.
At 2.3 theoretical peak petaflops, this one's not set to top the Top 500 list, but by one important metric it will be the world's fastest. When up and running, it will be the largest and fastest supercomputer running ARM silicon. And if 2.3 petaflops doesn't sound like much compared to Summit or Sunway TaihuLight, it would still land around the midpoint of the Top 100 list.
"It's not the largest supercomputer in the world, but it's by far the largest ARM-based computer," Mike Vildibill, VP of HPE's advanced technologies group told Data Center Knowledge. "It's still in the top 100, which is just a phenomenal milestone. To my knowledge there's no ARM-based systems on the Top 500 today, to kind of show you how aggressive the Department of Energy is being in taking this new architecture all the way into their production environments."
According to HPE, Astra represents something of a test bed on the path to developing an exascale-class system, meaning a system that can achieve 1,000 petaflops, five times the 200-petaflop peak of Summit.
The system is based on HPE's Apollo 70 System, a 2U enclosure (twice the height of a standard rack mounted server) with four servers utilizing two Cavium ThunderX2 systems-on-a-chip in each enclosure. In total, the system will deploy 2,592 servers with 5,184 CPUs, all tied together using InfiniBand, a high bandwidth interconnect.
The ThunderX2 processor is relatively new, announced only a couple of months ago, and was chosen partly because of its memory performance. HPE claims that the system will offer 33 percent better memory performance than traditional systems with greater system density. The memory performance is important, since it enhances the system's ability to perform supercomputer workloads.
"The idea with these HPC systems, is the customer plans to run a single application across the entire computer system at the same time," Vildibill said. "So all of the CPUs are running on the same application and they're sharing data back and forth at very high rates. So they need this very high bandwidth InfiniBand interconnect, but they also need the high memory bandwidth that the ThunderX2 provides. The ThunderX2 comes with eight memory controllers built into the CPU, which is more memory controllers that are in traditional x86 systems today."
Astra will utilize the Lustre file system, a parallel file system that provides high-performance access through simultaneous, coordinated input/output operations. For storage, Astra will deploy 20 all-flash HPE Apollo 4520 units connected to run as a single file system with a capacity of over 400 terabytes.
"The parallel file system allows for all 5,000 of these CPUs to read and write to the one common file all at the same time," he explained. "The parallel file system is like the traffic cop and manages all of the I/Os so that the complete supercomputer can run one application very fast, and it can do all of its I/O to one file or one file system, all simultaneous."
The 1.2 MW system will be liquid cooled using HPE's MCS 300, a liquid cooling solution that houses the Apollo 70 racks.
"We remove 99 percent of the heat from the racks with this cooling solution, which brings a lot of efficiency to the data center and saves the customer a lot of money over time," he said. "It's very expensive to try to blow hot air long distances, and if you can convert it to liquid to extract it you're much more efficient.
"Furthermore, with this solution the distance that the hot air has to travel is actually minimized compared to even other liquid cooling solutions. Quite interestingly, at the end of the day when you're calculating the efficiency of the cooling environment, calculating the distance that you're blowing with fans, all this hot air actually becomes an interesting bellwether to how your overall efficiency is going to look."
Vildibill said that the decision to use ARM processors was made by the Department of Energy before they began seeking a partner to design and build the system.
While there is not one exact industry-wide definition, threat modeling can be summarized as a practice to proactively analyze the cyber security posture of a system or system of systems. Threat modeling can be conducted both in the design/development phase and on live system environments.
It is often referred to as Designing for Security. In short, threat modeling answers questions as “Where am I most vulnerable to attacks?”, “What are the key risks?”, and “What should I do to reduce these risks?”.
More specifically, threat modeling identifies cybersecurity threats and vulnerabilities and provides insights into the security posture, and what controls or defenses should be in place given the nature of the system, the high-value assets to be protected, the potential attackers’ profiles, the potential attack vectors, and the potential attack paths to the high-value assets.
Threat modeling can consist of the following steps:
1. Create a representation of the environment to be analyzed
2. Identify the high value assets, the threat actors, and articulate risk tolerance
3. Analyze the system environment from potential attackers’ perspective:
- How can attackers reach and compromise my high value assets? I.e., what are the possible attack paths by which attackers can reach and compromise my high-value assets? (A small sketch of this path enumeration follows the list below.)
- What of these paths are easier and harder for attackers?
- What is my cyber posture — how hard is it for attackers to reach and compromise my high-value assets?
If the security is too weak/risks are too high:
4. Identify potential measures to improve security to acceptable/target levels
5. Identify the potential measures that should be implemented — the most efficient ways for your organization to reach acceptable/target risk levels.
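The attack-path question in step 3 is commonly answered by modeling the environment as a graph and enumerating paths from attacker entry points to high-value assets. Below is a toy sketch; the environment model is entirely hypothetical, and real tooling would also weight each edge by attacker effort:

```python
# Environment as a directed graph: node -> nodes reachable from it.
environment = {
    "internet": ["web_server", "vpn_gateway"],
    "web_server": ["app_server"],
    "vpn_gateway": ["app_server"],
    "app_server": ["database"],  # the high-value asset
    "database": [],
}

def attack_paths(graph, source, target, path=None):
    """Yield every simple path from an entry point to the target asset."""
    path = (path or []) + [source]
    if source == target:
        yield path
        return
    for nxt in graph.get(source, []):
        if nxt not in path:  # skip cycles
            yield from attack_paths(graph, nxt, target, path)

for p in attack_paths(environment, "internet", "database"):
    print(" -> ".join(p))
# internet -> web_server -> app_server -> database
# internet -> vpn_gateway -> app_server -> database
```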
Why threat model: The business values
Threat modeling is a very effective way to make informed decisions when managing and improving your cybersecurity posture. It can be argued that threat modeling, when done well, is the single most effective way of managing and improving your cyber risk posture, as it enables you to identify and quantify risks proactively and holistically and to steer your security measures to where they create the best value.
Identify and manage vulnerabilities and risks before they are implemented and exploited
- Before implementation: Threat modeling enables companies to "shift left" and identify and mitigate security risks already in the planning/design/development phases, which is often 10, 100, or even more times more cost-effective than fixing them in the production phase.
- Before exploited: As rational and effective cyber defenders we need both proactive and reactive cyber capabilities. Strengthening security proactively, before attacks happen, has clear advantages. However, it also comes with a cost. An effective threat modeling enables the user to make risk-based decisions on what measures to implement proactively.
Prioritize security resources to where they create the best value
- One of the very key challenges in managing cybersecurity is to determine how to prioritize and allocate scarce resources to manage risks with the best effect per dollar spent. The process for threat modeling, presented in the first section of this text, is a process for determining exactly this. When done effectively, it takes into consideration all the key parts guiding rational decision making.
There are several additional benefits to threat modeling. One is that all the analyses are conducted on a model representation of your environment, which creates significant advantages as the analyses are non-intrusive and that analyzers can test scenarios before implementations.
Another set of values are that threat models create a common ground for communication in your organization and increase cybersecurity awareness. To keep this text concise, we here primarily highlight the values above. We also want to state that there are several other excellent descriptions of the values of threat modeling, and we encourage you to explore them.
Who does threat modeling and when?
On the question “Who should threat model?” the Threat Modeling Manifesto says “You. Everyone. Anyone who is concerned about the privacy, safety, and security of their system.” While we do agree with this principle in the long term, we want to nuance the view and highlight the need for automation.
Threat modeling in development
This is the ”base case” for threat modeling. Threat modeling is typically conducted from the design phase and onward in the development process. It is rational and common to do it more thoroughly for high criticality systems and less thorough for low criticality systems. Threat modeling work is typically done by a combination of development/DevOps teams and the security organization.
More mature organizations typically have most of this work done by the development/DevOps teams themselves, while less mature organizations rely more on support from the security organization.
Threat modeling of live environments
Many organizations also do threat modeling on their live environments. Especially for high criticality systems. As with the threat modeling in development, organizations have organized the work in different ways. Here, the work is typically done by a combination of operations/DevOps teams and security organization.
Naturally, it is advantageous when threat models fit together and evolve over time from development through operations and DevOps cycles.
The increasing growth of IoT devices is causing IT architects to rethink how they upgrade their infrastructures. Data and analysis have clearly pushed further out to the edge, with a multitude of sensors and monitoring devices collecting information for nearly every purpose. In fact, Gartner projects that by 2025, 75% of enterprise data will be generated and handled outside of traditional data centers or clouds.
The unavoidable temptation for malicious actors to attack edge device vulnerabilities and compromise data is an unpleasant byproduct of this growth. New edge computing security concerns, such as lateral attacks, account theft, entitlement theft, and DDoS attacks, can interrupt services in more ways than one. They pose a significant challenge to security professionals' ability to deploy data at the edge while maintaining a secure and dependable flow of critical information across the organization.
The security risks of edge computing
IoT and edge devices are frequently installed outside of a centralized data infrastructure or datacenter, making them far more challenging to monitor in terms of both digital and physical security. There are a number of edge computing security concerns that IT architects should be aware of:
Storage and security of data
Data gathered and processed at the edge does not have the same level of physical protection as data stored in more centralized locations. Vital information can easily be jeopardized by removing a disk drive from an edge resource or copying data onto a simple memory stick. It can also be more challenging to provide reliable data backup due to restricted local resources.
Attacks and physical tampering
In an edge-computing architecture, physical tampering of devices is a likely possibility, based on their location and degree of physical protection from adversaries. By bringing processing resources nearer to data sources, edge computing, by its very nature, expands the attack surface.
Physical attackers seeking to breach entire edge networks have more parts to cover with an enlarged attack surface, but the fact that there are more devices in more locations makes physical attacks a lot easier to carry out.
Outside standard information security visibility
Processing and storage at the periphery generally sit outside of traditional information security visibility and control, posing new security issues. Security strategies must therefore extend beyond standard data center methods to encompass heterogeneous mobile and Internet of Things (IoT) computing.
Injections of malicious hardware or software
Cyber-attackers can use a variety of hardware and software-based techniques to corrupt, alter, steal, or erase data circulating within edge networks, especially when it comes to infecting and manipulating edge nodes, devices, or servers found at the edge.
Cyber-attackers can infiltrate illegitimate software and hardware components into the edge network, with a devastating impact on the efficacy of existing edge servers and devices. It even enables service-provider exploitation, in which the entities that provide the software and hardware solutions enabling edge computing unwittingly begin executing hacking processes on behalf of the attacker.
Ransomware is a type of malicious software designed to block access to and encrypt the data on your computer until you pay a ransom. The number of major cyberattacks on enterprises is rising month after month. But how have companies actually been affected by ransomware? What makes an enterprise vulnerable to ransomware? And why are ransomware attacks on the rise?
Pulse interviewed over 300 business decision-makers to find out. Below are the key findings:
57% of business leaders believe that their organizations are vulnerable to cyber-attacks.
71% of business leaders have already experienced ransomware attacks.
The biggest consequence of a successful ransomware attack is reputational damage, followed by the fear that it could inspire copycat cyberattacks (70%).
37% of leaders believe that most organizations are vulnerable to ransomware because they do not have a unified cybersecurity strategy.
Ransomware has been a growing problem for businesses.
The number of ransomware attacks is rising rapidly: in 2021 alone, 2,700 cases were reported. This trend is largely attributed to the pandemic, which saw many people working from home or outside their offices, and with hybrid work culture looking set to stay, ransomware attacks are likely to stay as well.
Hence, it is no surprise that most business leaders (71%) have experienced a ransomware attack.
Despite facing these cyber-attacks, organizations are seemingly not prepared for them, with most (57%) believing they will face another one in the future. This expectancy of attack is higher among those who have experienced a ransomware attack than among those who haven't (45%). This suggests that most business executives are aware of cyber-attacks and are preparing for the worst.
Paying a ransomware demand isn’t the best option for business leaders.
When businesses are hit by a ransomware attack, paying the ransom can feel like the only way to recover. Paying may seem like a quick way to restore data, but there is always the risk that businesses will not get all of their information back.
According to the report, more than half (54%) of organizations were able to fully recover their data after a ransomware attack. Among the organizations that paid the ransom, 52% were able to recover their data, while 65% of victims opted not to pay at all.
It is clear from the report that most organizations are not in favour of paying a ransom to hackers.
“People are paying, so there is a market. Frankly, I’d rather pay the penalty than pay the criminals.” – VP, large education company
Having a unified cybersecurity strategy is crucial for protecting an organization from ransomware attacks.
Research results show that the main reason organizations fall victim to ransomware attacks is that they lack an integrated cyber security strategy (37%), followed by technology vulnerabilities (21%) and poor risk management strategy (17%).
Respondents identified employees as the main vulnerability to ransomware attacks: the top two factors exposing organizations are employee negligence (78%) and social engineering of employees (78%). Notably, many business leaders (42%) believe that because organizations still operate legacy infrastructure, new ransomware vulnerabilities keep being identified, which is why high-profile ransomware incidents keep making the news.
Furthermore, the survey found that energy infrastructure (79%) and healthcare (72%) are the industries most likely to fall victim to ransomware attacks. The government sector is the third biggest target of ransomware hackers (71%).
Overall, the report shows that traditional cyber security strategies are failing because both technologies and humans make mistakes that create avoidable vulnerabilities in their systems. To prevent ransomware attacks, organizations need to step up their cyber security. A comprehensive solution that can effectively detect advanced ransomware attacks and provide timely remediation of incidents is recommended.
Download the report here.
A recent report published by the UK House of Commons Science and Technology Committee stated that approximately 12.6 million adults in the UK lack basic digital skills. This IT skills gap is affecting businesses across industries, from financial services and local authorities to retail and manufacturing.
The technological revolution of the late twentieth and early twenty-first century has brought with it significant changes. Not only has it fundamentally changed the way businesses operate, it has significantly increased the volume of data available to us. We can now monitor and track every process in detail, gaining valuable information and insights in the process.
Unfortunately, as the House of Commons found, digital skills have struggled to keep up with demand. As such, business management teams repeatedly encounter difficulties with aspects of operation and even recruitment. In fact, 72 per cent of employers have expressed unwillingness to consider potential candidates lacking these skills. This is understandable, but problematic in the midst of a skills crisis.
Tackling the gap
Interestingly, this latest report was commissioned as a result of a previous report — the big data dilemma report in February 2016 — that identified, “the risk of a growing data analytics skills gap as big data reaches further into the economy”. This is a pressing concern, because the ability to analyse data effectively directly influences the strategy of decision makers.
For example, most businesses can use data analytics to identify opportunities to improve operational processes and achieve time and cost savings. However, this can only be done if staff have the skills to interact with this data and pick out the actionable information.
While this can be done by specialist staff, recruitment increases costs and relying on IT departments can limit the amount of real-time practical insight.
So how can businesses tackle the digital skills gap? The most obvious approach is by investing in upskilling programmes to ensure staff are fully competent using business IT systems. However, this is a long-term objective that will do little to make an impact in the more immediate future.
Improving the upskilling process
Fortunately, businesses can make some small changes to improve the upskilling process. While some software companies are already pushing towards self-service data analytics, which sees analysis tools move out of the IT department and into the wider workforce, only 16 per cent of business executives can adequately use those tools.
This is where search-based analytics software, such as Connexica's CXAIR, can be used to bridge the skills gap. Using natural language search, the same format found in search engines, makes business intelligence accessible and actionable on a wider scale. Changing the way that users interact with the tools directly can remove the unnecessary technical barriers to business intelligence.
Of course, this doesn’t remove the importance of trained data analysts and scientists. Technically trained staff can provide complex analysis maintain systems and build data models. These are tasks that cannot be completed without advanced digital skill sets.
It is clear that the UK government must introduce a digital strategy to improve the IT skills held by future generations, while businesses need to invest in upskilling schemes to boost the competencies of existing staff. Search-based analytics can ensure that business strategy does not suffer, but it remains essential that staff develop the skills to keep businesses ahead of the technological curve.
Greg Richards, Sales and Marketing Director, Connexica
This is the second and final installment of our two-part series on automated teller machine (ATM) attacks and fraud.
In part 1, we identified the reasons why ATMs are vulnerable—from inherent weaknesses of its frame to its software—and delved deep into two of the four kinds of attacks against them: terminal tampering and physical attacks.
Terminal tampering has many types, but it involves either physically manipulating components of the ATM or introducing other devices to it as part of the fraudulent scheme. Physical attacks, on the other hand, cause destruction to the ATM and to the building or surrounding area where the machine is situated.
We have also supplied guidelines for users—before, during, and after—that will help keep them safe when using the ATM.
For part 2, we’re going to focus on the final two types of attacks: logical attacks and the use of social engineering.
Logical ATM attacks
As ATMs are essentially computers, fraudsters can and do use software as part of a coordinated effort to gain access to an ATM's computer, its components, or its financial institution's (FI's) network. They do this, firstly, to obtain cash; secondly, to retrieve sensitive data from the machine itself and from magnetic strip or chip cards; and lastly, to intercept data they can use to conduct fraudulent transactions.
Enter logical attacks—a term synonymous with jackpotting or ATM cash-out attacks. Logical attacks involve the exploitation and manipulation of the ATM’s system using malware or another electronic device called a black box. Once cybercriminals gain control of the system, they direct it to essentially spew cash until the safe empties as if it were a slot machine.
The concept of "jackpotting" became mainstream after the late renowned security researcher Barnaby Jack presented and demoed his research on the subject at the Black Hat security conference in 2010. Many expected ATM jackpotting to become a real-world problem since then. And, indeed, it has—in the form of logical attacks.
In order for a logical attack to be successful, access to the ATM is needed. A simple way to do this is to use a tool, such as a drill, to make an opening to the casing so criminals can introduce another piece of hardware (a USB stick, for example) to deliver the payload. Some tools can also be used to pinpoint vulnerable points within the ATM’s frame or casing, such as an endoscope, which is a medical device with a tiny camera that is used to probe inside the human body.
If you think that logical attacks are too complex for the average cybercriminal, think again. For a substantial price, anyone with cash to spare can visit Dark Web forums and purchase ATM malware complete with easy how-to instructions. Because the less competent ATM fraudsters can use malware created and used by the professionals, the distinction between the two blurs.
Logical attack types
To date, there are two sub-categories of logical attacks fraudsters can carry out: malware-based attacks and black box attacks.
Malware-based attacks. As the name suggests, this kind of attack can use several different types of malware, including Ploutus, Anunak/Carbanak, Cutlet Maker, and SUCEFUL, which we'll profile below. How they end up on the ATM’s computer or on its network is a matter we should all familiarize ourselves with.
Installed at the ATM’s PC:
- Via a USB stick. Criminals load up a USB thumb drive with malware and then insert it into a USB port of the ATM’s computer. The port is either exposed to the public or behind a panel that one can easily remove or punch a hole through. As these ATM frames are not sturdy nor secure enough to counter this type of physical tampering, infecting via USB and external hard drive will always be an effective attack vector. In a 2014 article, SecurityWeek covered an ATM fraud that successfully used a malware-laden USB drive.
- Via an external hard drive or CD/DVD drive. The tactic is similar to the USB stick but with an external hard drive or bootable optical disk.
- Via infecting the ATM computer’s own hard drive. The fraudsters either disconnect the ATM’s hard drive to replace it with an infected one or they remove the hard drive from its ATM, infect it with a Trojan, and then reinsert it.
Installed at the ATM’s network:
- Via an insider. Fraudsters can coerce or team up with a bank employee with ill-intent against their employer to let them do the dirty work for them. The insider gets a cut of the cashed-out money.
- Via social engineering. Fraudsters can use spear phishing to target certain employees in the bank to get them to open a malicious attachment. Once executed, the malware infects the entire financial institution’s network and its endpoints, which include ATMs. The ATM then becomes a slave machine. Attackers can send instructions directly to the slave machine for it to dispense money and have money mules collect.
Note that as criminals are already inside the FI’s network, a new opportunity to make money opens its doors: They can now break into sensitive data locations to steal information and/or proprietary data that they can further abuse or sell in the underground market.
Installed via Man-in-the-Middle (MiTM) tactics:
- Via fake updates. Malware could be introduced to ATM systems via a bogus software update, as explained by Benjamin Kunz-Mejri, CEO and founder of Vulnerability Lab, after he discovered (by accident) that ATMs in Germany publicly display sensitive system information during their software update process. In an interview, Kunz-Mejri said that fraudsters could potentially use this information to perform a MiTM attack to get inside the network of a local bank, run malware made to look like a legitimate software update, and then control the infected ATM.
Black box attacks. A black box is an electronic device—either another computer, mobile phone, tablet, or even a modified circuit board linked to a USB wire—that issues ATM commands at the fraudster’s bidding. The act of physically disconnecting the cash dispenser from the ATM computer to connect the black box bypasses the need for attackers to use a card or get authorization to confirm transactions. Off-premise retail ATMs are likely targets of this attack.
A black box attack could involve social engineering tactics, like dressing up as an ATM technician, to allay suspicion while the threat actor physically tampers with the ATM. At times, fraudsters use an endoscope, a medical tool used to probe the human body, to locate and disconnect the cash dispenser's wire from the ATM computer and connect it to their black box. This device then issues commands to the dispenser to push out money.
As this type of attack does not use malware, a black box attack usually leaves little to no evidence—unless the fraudsters left behind the hardware they used, of course.
Experts have observed that as reports of black box attacks have dropped, malware attacks on ATMs are increasing.
ATM malware families
As mentioned in part 1, there are over 20 strains of known ATM malware. We've profiled four of those strains to give readers an overview of the diversity of malware families developed for ATM attacks. We've also included links to external references you can read in case you want to learn more.
Ploutus. This is a malware family of ATM backdoors that was first detected in 2013. Ploutus is specifically designed to force the ATM to dispense cash, not steal card holder information. An earlier variant was introduced to the ATM computer via inserting an infected boot disk into its CD-ROM drive. An external keyboard was also used, as the malware responds to commands executed by pressing certain function keys (the F1 to F12 keys on the keyboard). Newer versions also use mobile phones, are persistent, target the most common ATM operating systems, and can be tweaked to make them vendor-agnostic.
Daniel Regalado, principal security researcher for Zingbox, noted in a blog post that a modified Ploutus variant called Piolin was used in the first ATM jackpotting crimes in the North America, and that the actors behind these attacks are not the same actors behind the jackpotting incidents in Latin America.
References on Ploutus:
- Criminals hit the ATM jackpot (Source: Symantec)
- New variant of Ploutus ATM malware observed in the wild in Latin America (Source: FireEye)
Anunak/Carbanak. This advanced persistent malware was first encountered in the wild affecting Ukrainian and Russian banks. It’s a backdoor based on Carberp, a known information-stealing Trojan. Carbanak, however, was designed to siphon off data, perform espionage, and remotely control systems.
It arrives on financial institution networks as attachment to a spear phishing email. Once in the network, it looks for endpoints of interest, such as those belonging to administrators and bank clerks. As the APT actors behind Carbanak campaigns don’t have prior knowledge of how their target’s system works, they surreptitiously video record how the admin or clerk uses it. Knowledge gained can be used to move money out of the bank and into criminal accounts.
References on Anunak/Carbanak:
- Anunak: APT against financial institutions [PDF] (Source: Group-IB and For-IT)
- The great bank robbery: the Carbanak APT (Source: Kaspersky)
Cutlet Maker. This is one of several ATM malware families being sold in underground hacking forums. It is actually a kit comprised of (1) the malware file itself, which is named Cutlet Maker; (2) c0decalc, which is a password-generating tool that criminals use to unlock Cutlet Maker; and (3) Stimulator, another benign tool designed to display information about the target ATM’s cash cassettes, such as the type of currency, the value of the notes, and the number of notes for each cassette.
References on Cutlet Maker:
- ATM malware sold is being sold on Darknet market (Source: Securelist)
SUCEFUL. Hailed as the first multi-vendor ATM malware, SUCEFUL was designed to capture bank cards in the infected ATM’s card slot, read the card’s magnetic strip and/or chip data, and disable ATM sensors to prevent immediate detection.
References on SUCEFUL:
- SUCEFUL: next generation ATM malware (Source: FireEye)
Directly targeting ATMs by compromising their weak points, whether they’re found on the surface or on the inside, isn’t the only effective way for fraudsters to score easy cash. They can also take advantage of the people using the ATMs. Here are the ways users can be social engineered into handing over hard-earned money to criminals, often without knowing.
Defrauding the elderly. This has become a trend in Japan. Fraudsters posing as relatives in need of emergency money or government officials collecting fees target elderly victims. They then “help” them by providing instructions on how to transfer money via the ATM.
Assistance fraud. Someone somewhere at some point in the past may have been approached by a kindly stranger in the same ATM queue, offering a helping hand. Scammers use this tactic so they can memorize their target's card number and PIN, which they then use to initiate unlawful money transactions.
The likely targets for this attack are also the elderly, as well as confused new users who are likely first-time ATM card owners.
Shoulder surfing. This is the act of being watched by someone while you punch in your PIN using the ATM’s keypad. Stolen PIN codes are particularly handy for a shoulder surfer, especially if their target absent-mindedly leaves the area after retrieving their cash but hasn't fully completed the session. Some ATM users walk away before they can even answer the machine when it asks if they have another transaction. And before the prompt disappears, the fraudster enters the stolen PIN to continue the session.
Eavesdropping. Like the previous point, the goal of eavesdropping is to steal the target’s PIN code. This is done by listening and memorizing the tones the ATM keys make when someone punches in their PIN during a transaction session.
Distraction fraud. This tactic swept through Britain a couple of years ago. The scenario goes like this: an unknowing ATM user is distracted by the sound of dropping coins behind them while taking out money. They turn around to help the person who dropped the coins, not realizing that an accomplice is either stealing the cash the ATM just dispensed or swapping a fake card for their real one. The ATM user looks back at the terminal, content that everything looks normal, then goes on their way. The person they helped, meanwhile, is either handed the stolen card by the accomplice or tells the accomplice the card’s PIN, which they memorized as the target punched it in, just before deliberately dropping the coins.
Continued vigilance for ATM users and manufacturers
Malware campaigns, black box attacks, and social engineering are problems that are actively being addressed by both ATM manufacturers and financial institutions. However, that doesn’t mean ATM users should let their guard down.
Keep in mind the social engineering tactics we outlined above when using an ATM, and don't forget to keep a lookout for anything "off" about the machine you're interacting with. While it's quite unlikely a user could tell if an information-stealer had compromised her ATM (until she saw the discrepancies in her transaction records later), some malware types can physically capture cards.
If this happens, do not leave the ATM premises. Instead, record every detail of what happened, such as the time the card was captured, the ATM location you used, and which transactions you made before realizing the card would not eject. Take pictures of the surroundings and the ATM itself, and try to discreetly photograph any people lingering nearby. Finally, call your bank and/or card issuer to report the incident and request card termination.
We would also like to point you back to part 1 of this series again, where we included a useful guideline for reference on what to look out for before dropping by an ATM outlet.
As always, stay safe!
In general I try to limit this blog to posts that focus on generally-applicable techniques in cryptography. That is, I don’t focus on the deeply wonky. But this post is going to be an exception. Today, I’m going to talk about a topic that most “typical” implementers don’t — and shouldn’t — think about.
Specifically: I’m going to talk about various techniques for making public key encryption schemes chosen ciphertext secure. I see this as the kind of post that would have saved me ages of reading when I was a grad student, so I figured it wouldn’t hurt to write it all down (even though this absolutely shouldn’t serve as a replacement for just reading the original papers!)
Background: CCA(1/2) security
Early (classical) ciphers used a relatively weak model of security, if they used one at all. That is, the typical security model for an encryption scheme was something like the following:
- I generate an encryption key (or keypair for public-key encryption)
- I give you the encryption of some message of my choice
- You “win” if you can decrypt it
This is obviously not a great model in the real world, for several reasons. First off, in some cases the attacker knows a lot about the message to be decrypted. For example: it may come from a small space (like a set of playing cards). For this reason we require a stronger definition like “semantic security” that assumes the attacker can choose the plaintext distribution, and can also obtain the encryption of messages of his/her own choice. I’ve written more about this here.
More relevant to this post, another limitation of the above game is that — in some real-world examples — the attacker has even more power. That is: in addition to obtaining the encryption of chosen plaintexts, they may be able to convince the secret keyholder to decrypt chosen ciphertexts of their choice.
The latter attack is called a chosen-ciphertext (CCA) attack.
At first blush this seems like a really stupid model. If you can ask the keyholder to decrypt chosen ciphertexts, then isn’t the scheme just obviously broken? Can’t you just decrypt anything you want?
The answer, it turns out, is that there are many real-life examples where the attacker has decryption capability, but the scheme isn’t obviously broken. For example:
- Sometimes an attacker can decrypt a limited set of ciphertexts (for example, because someone leaves the decryption machine unattended at lunchtime.) The question then is whether they can learn enough from this access to decrypt other ciphertexts that are generated after she loses access to the decryption machine — for example, messages that are encrypted after the operator comes back from lunch.
- Sometimes an attacker can submit any ciphertext she wants — but will only obtain a partial decryption of the ciphertext. For example, she might learn only a single bit of information such as “did this ciphertext decrypt correctly”. The question, then, is whether she can leverage this tiny amount of data to fully decrypt some ciphertext of her choosing.
The first example is generally called a “non-adaptive” chosen ciphertext attack, or a CCA1 attack (and sometimes, historically, a “lunchtime” attack). There are a few encryption schemes that totally fall apart under this attack — the most famous textbook example is Rabin’s public key encryption scheme, which allows you to recover the full secret key from just a single chosen-ciphertext decryption.
The more powerful second example is generally referred to as an “adaptive” chosen ciphertext attack, or a CCA2 attack. The term refers to the idea that the attacker can select the ciphertexts they try to decrypt based on seeing a specific ciphertext that they want to attack, and by seeing the answers to specific decryption queries.
In this article we’re going to use the more powerful “adaptive” (CCA2) definition, because that subsumes the CCA1 definition. We’re also going to focus primarily on public-key encryption.
With this in mind, here is the intuitive definition of the experiment we want a CCA2 public-key encryption scheme to be able to survive:
- I generate an encryption keypair for a public-key scheme and give you the public key.
- You can send me (sequentially and adaptively) many ciphertexts, which I will decrypt with my secret key. I’ll give you the result of each decryption.
- Eventually you’ll send me a pair of messages $M_0, M_1$ (of equal length) and I’ll pick a bit $b$ at random, and return to you the encryption of $M_b$, which I will denote as $C^*$.
- You’ll repeat step (2), sending me ciphertexts to decrypt. If you send me $C^*$ I’ll reject your attempt. But I’ll decrypt any other ciphertext you send me, even if it’s only slightly different from $C^*$.
- The attacker outputs their guess $b'$. They “win” the game if $b' = b$.
We say that our scheme is secure if the attacker wins only with a significantly greater probability than they would win with if they simply guessed at random. Since they can win this game with probability 1/2 just by guessing randomly, that means we want (Probability attacker wins the game) – 1/2 to be “very small” (typically a negligible function of the security parameter).
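To make the experiment concrete, here is a minimal Python sketch of the game. The `pke` and `adversary` objects and their method names are hypothetical stand-ins for illustration, not any real library API:

```python
import secrets

def cca2_game(pke, adversary):
    pk, sk = pke.keygen()
    decrypt = lambda c: pke.decrypt(sk, c)        # phase-1 decryption oracle
    m0, m1 = adversary.choose(pk, decrypt)        # equal-length messages M0, M1
    b = secrets.randbits(1)
    c_star = pke.encrypt(pk, (m0, m1)[b])         # challenge ciphertext C*

    def restricted_decrypt(c):
        if c == c_star:
            raise ValueError("refusing to decrypt C*")  # the one forbidden query
        return pke.decrypt(sk, c)

    b_guess = adversary.guess(c_star, restricted_decrypt)  # phase-2 queries
    return b_guess == b   # scheme is CCA2-secure if Pr[True] is only negligibly above 1/2
```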
You should notice two things about this definition. First, it gives the attacker the full decryption of any ciphertext they send me. This is obviously much more powerful than just giving the attacker a single bit of information, as we mentioned in the example further above. But note that powerful is good. If our scheme can remain secure in this powerful experiment, then clearly it will be secure in a setting where the attacker gets strictly less information from each decryption query.
The second thing you should notice is that we impose a single extra condition in step (4), namely that the attacker cannot ask us to decrypt $C^*$. We do this only to prevent the game from being “trivial” — if we did not impose this requirement, the attacker could always just hand us back $C^*$ to decrypt, and they would always learn the value of $b$.
(Notice as well that we do not give the attacker the ability to request encryptions of chosen plaintexts. We don’t need to do that in the public key encryption version of this game, because we’re focusing exclusively on public-key encryption here — since the attacker has the public key, she can encrypt anything she wants without my help.)
With definitions out of the way, let’s talk a bit about how we achieve CCA2 security in real schemes.
A quick detour: symmetric encryption
This post is mainly going to focus on public-key encryption, because that’s actually the problem that’s challenging and interesting to solve. It turns out that achieving CCA2 for symmetric-key encryption is really easy. Let me briefly explain why this is, and why the same ideas don’t work for public-key encryption.
(To explain this, we’ll need to slightly tweak the CCA2 definition above to make it work in the symmetric setting. The changes here are small: we won’t give the attacker a public key in step (1), and at steps (2) and (4) we will allow the attacker to request the encryption of chosen plaintexts as well as the decryption.)
The first observation is that many common encryption schemes — particularly, the widely-used cipher modes of operation like CBC and CTR — are semantically secure in a model where the attacker does not have the ability to decrypt chosen ciphertexts. However, these same schemes break completely in the CCA2 model.
The simple reason for this is ciphertext malleability. Take CTR mode, which is particularly easy to mess with. Let’s say we’ve obtained a ciphertext $C^*$ at step (4) (recall that $C^*$ is the encryption of $M_b$). It’s trivially easy to “maul” the ciphertext — simply by flipping, say, a bit of the message (i.e., XORing it with “1”). This gives us a new ciphertext $C' \neq C^*$ that we are now allowed (by the rules of the game) to submit for decryption. We obtain $M_b \oplus 1$, which we can use to figure out $b$.
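Here is a small demonstration of that malleability using the Python `cryptography` package. The key, nonce, and message are made up for illustration, and while this snippet holds the key for convenience, the bit-flip itself works identically on a ciphertext an attacker has merely intercepted:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def ctr(data: bytes) -> bytes:
    # CTR mode is a stream cipher: the same keystream XOR both encrypts and decrypts.
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

c_star = ctr(b"pay $100 to alice")
mauled = bytearray(c_star)
mauled[0] ^= ord("p") ^ ord("l")   # flip bits so the first byte decrypts to "l"
print(ctr(bytes(mauled)))          # b'lay $100 to alice' -- decrypts "cleanly"
```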
(A related, but “real world” variant of this attack is Vaudenay’s Padding Oracle Attack, which breaks actual implementations of symmetric-key cryptosystems. Here’s one we did against Apple iMessage. Here’s an older one on XML encryption.)
So how do we fix this problem? The straightforward observation is that we need to prevent the attacker from mauling the ciphertext . The generic approach to doing this is to modify the encryption scheme so that it includes a Message Authentication Code (MAC) tag computed over every CTR-mode ciphertext. The key for this MAC scheme is generated by the encrypting party (me) and kept with the encryption key. When asked to decrypt a ciphertext, the decryptor first checks whether the MAC is valid. If it’s not, the decryption routine will output “ERROR”. Assuming an appropriate MAC scheme, the attacker can’t modify the ciphertext (including the MAC) without causing the decryption to fail and produce a useless result.
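A minimal encrypt-then-MAC sketch of that construction in Python follows. The key sizes and message framing here are illustrative choices, not a vetted design — production code should reach for an authenticated mode such as AES-GCM instead:

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(16), os.urandom(32)   # independent keys

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag                          # MAC covers nonce and ciphertext

def open_sealed(blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):       # constant-time comparison
        raise ValueError("ERROR: invalid MAC")       # reject before decrypting
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()
```

With this in place, the bit-flipping trick above fails: any change to the ciphertext invalidates the tag, and the decryption oracle returns nothing but “ERROR”.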
So in short: in the symmetric encryption setting, the answer to CCA2 security is simply for the encrypting parties to authenticate each ciphertext using a secret authentication (MAC) key they generate. Since we’re talking about symmetric encryption, that extra (secret) authentication key can be generated and stored with the decryption key. (Some more efficient schemes make this all work with a single key, but that’s just an engineering convenience.) Everything works out fine.
So now we get to the big question.
CCA security is easy in symmetric encryption. Why can’t we just do the same thing for public-key encryption?
As we saw above, it turns out that strong authenticated encryption is sufficient to get CCA(2) security in the world of symmetric encryption. Sadly, when you try this same idea generically in public key encryption, it doesn’t always work. There’s a short reason for this, and a long one. The short version is: it matters who is doing the encryption.
Let’s focus on the critical difference. In the symmetric CCA2 game above, there is exactly one person who is able to (legitimately) encrypt ciphertexts. That person is me. To put it more clearly: the person who performs the legitimate encryption operations (and has the secret key) is also the same person who is performing decryption.
Even if the encryptor and decryptor aren’t literally the same person, the encryptor still has to be honest. (To see why this has to be the case, remember that the encryptor has the shared secret key! If that party were a bad guy, then the whole scheme would be broken, since they could just hand the secret key over to the bad guys.) And once you’ve made the stipulation that the encryptor is honest, then you’re almost all the way there. It suffices simply to add some kind of authentication (a MAC or a signature) to any ciphertext she encrypts. At that point the decryptor only needs to determine whether any given ciphertext actually came from the (honest) encryptor, and avoid decrypting the bad ones. You’re done.
Public key encryption (PKE) fundamentally breaks all these assumptions.
In a public-key encryption scheme, the main idea is that anyone can encrypt a message to you, once they get a copy of your public key. The encryption algorithm may sometimes be run by good, honest people. But it can also be run by malicious people. It can be run by parties who are adversarial. The decryptor has to be able to deal with all of those cases. One can’t simply assume that the “real” encryptor is honest.
Let me give a concrete example of how this can hurt you. A couple of years ago I wrote a post about flaws in Apple iMessage, which (at the time) used a simple authenticated (public key) encryption scheme. The basic iMessage encryption algorithm used public key encryption (actually a combination of RSA with some AES thrown in for efficiency) so that anyone could encrypt a message to my key. For authenticity, it required that every message be signed with an ECDSA signature by the sender.
When I received a message, I would look up the sender’s public key and first make sure the signature was valid. This would prevent bad guys from tampering with the message in flight — e.g., executing nasty stuff like adaptive chosen ciphertext attacks. If you squint a little, this is almost exactly a direct translation of the symmetric crypto approach we discussed above. We’re simply swapping the MAC for a digital signature.
The problems with this scheme start to become apparent when we consider that there might be multiple people sending me ciphertexts. Let’s say the adversary is on the communication path and intercepts a signed message from you to me. They want to change (i.e., maul) the message so that they can execute some kind of clever attack. Well, it turns out this is simple. They simply strip off the honest signature and replace it with one they make themselves.
The new message is identical, but now appears to come from a different person (the attacker). Since the attacker has their own signing key, they can maul the encrypted message as much as they want, and sign new versions of it. If you plug this attack into (a version of) the public-key CCA2 game up top, you’ll see they win quite easily. All they have to do is modify the challenge ciphertext at step (4) so it’s signed with their own signing key, maul it by munging the CTR-mode encryption, and request the decryption of the resulting ciphertext.
Of course, if I only accept messages signed by some original (guaranteed-to-be-honest) sender, this scheme might work out fine. But that’s not the point of public key encryption. In a real public-key scheme — like the one Apple iMessage was trying to build — I should be able to (safely) decrypt messages from anyone, and in that setting this naive scheme breaks down pretty badly.
Ok, this post has gotten a bit long, and so far I haven’t actually gotten to the various “tricks” for adding chosen ciphertext security to real public key encryption schemes. That will have to wait until the next post, to come shortly.
Click here for Part 2.
This morning, JP Morgan Chase and Samsung announced that they are partnering with IBM to build business apps on quantum computers. Some if not most of you may wonder, what is a quantum computer?
Defining Quantum Computers
In the 1930s, Alan Turing developed the Turing machine, a theoretical device consisting of a tape of unlimited length divided into little squares. Each square can hold a symbol, 1 or 0, or be left blank. A read-write device reads these symbols and blanks and gives the machine instructions to perform a certain program.
Today’s computers work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers, however, are not limited to two states. They encode information as quantum bits, also known as ‘qubits’. Qubits can exist in superposition and can be realized as atoms, ions, photons or electrons, which work together to act as computer memory and a processor. This means a quantum computer can hold multiple states simultaneously, giving it the potential to surpass classical computers while operating at efficient energy levels.
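In standard quantum-information notation (a general fact, not anything specific to IBM’s hardware), a single qubit’s state is a superposition of the two basis states:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
```

A register of $n$ qubits is described by $2^n$ such amplitudes at once, which is where the potential computational advantage over a classical $n$-bit register comes from.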
Energy conservation has been a hot topic of late: recent research shows that if the market trend that has held since computers were introduced continues, by 2040 we will not have the capability to power all of the machines around the globe. Hence the excitement at IBM, which sees a big opportunity in quantum computing.
In 1961, Rolf Landauer of IBM Research found that each single-bit operation must use an absolute minimum amount of energy, and he formulated a calculation of the lowest limit of energy required for a computer operation. In March of this year, researchers determined it could be possible to make a chip that operates at the lowest energy levels yet.
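Landauer’s result can be stated as a simple formula. As a rough worked example, assuming room temperature of about 300 K:

```latex
E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}
```

That is the theoretical minimum energy to erase a single bit — many orders of magnitude below what conventional chips dissipate per operation, which is why a chip approaching this limit would be notable.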
Here is what IBM has to say about the development of this energy efficient chip, “IBM’s goal is for its partners to develop applications that demonstrate a business advantage because they run on quantum instead of traditional computers using silicon-based chips. IBM hopes to see such success by 2020, says Gil, though he says IBM is “very honest” about the fact that the technology is still in its early days. ”
What is Data Governance? A Data Governance Definition
Data governance is the practice of identifying important data across an organization, ensuring it is of high quality, and improving its value to the business.
Data Government Policy
A data governance policy is a document that formally outlines how organizational data will be managed and controlled. A few common areas covered by data governance policies are:
- Data quality – ensuring data is correct, consistent and free of “noise” that might impede usage and analysis.
- Data availability – ensuring that data is available and easy to consume by the business functions that require it.
- Data usability – ensuring data is clearly structured, documented and labeled, enables easy search and retrieval, and is compatible with tools used by business users.
- Data integrity – ensuring data retains its essential qualities even as it is stored, converted, transferred and viewed across different platforms.
- Data security – ensuring data is classified according to its sensitivity, and defining processes for safeguarding information and preventing data loss and leakage.
A data steward is an organizational role responsible for enacting the data governance policy. Data stewards are typically subject matter experts who are familiar with the data used by a specific business function or department. They ensure the fitness of data elements, both content and metadata, administer the data and ensure compliance with regulations.
Data Governance vs Data Management
Data governance is the strategy, while data management comprises the practices used to protect the value of data. When creating a data governance strategy, you incorporate and define data management practices. Data governance policies direct how technologies and solutions are used, while management leverages these solutions to accomplish tasks.
Data Governance Frameworks
A data governance framework is a structure that helps an organization assign responsibilities, make decisions, and take action on enterprise data. Data governance frameworks can be classified into three types:
- Command and control – the framework designates a few employees as data stewards, and requires them to take on data governance responsibilities.
- Traditional – the framework designates a larger number of employees as data stewards, on a voluntary basis, with a few serving as “critical data stewards” with additional responsibilities.
- Non-invasive – the framework recognizes people as data stewards based on their existing work and relation to the data; everyone who creates and modifies data becomes a data steward for that data.
Essential elements of a data governance framework include:
- Funding and management support – a data governance framework is not meaningful unless it is backed by management as an official company policy.
- User engagement – ensuring those who consume the data understand and will cooperate with data governance rules.
- Data governance council – a formal body responsible for defining the data governance framework and helping to enact it in the organization.
While many companies create data governance frameworks independently, there are several standards which can help formulate a data governance framework, including COBIT, ISO/IEC 38500, and ISO/TC 215.
Goals of Information Governance Initiatives
Data and information governance helps organizations achieve goals such as:
- Complying with standards like SOX, Basel I/II, HIPAA, GDPR
- Maximizing the value of data and enabling its re-use
- Improving data-driven decision making
- Reducing the cost of data management
Data Governance Strategy
A data governance strategy informs the content of an organization’s data governance framework. It requires you to define, for each set of organizational data (one way to record these fields is sketched after the list):
- Where: Where it is physically stored
- Who: Who has or should have access to it
- What: Definition of important entities such as “customer”, “vendor”, “transaction”
- How: What the current structure of the data is
- Quality: Current and desired quality of the source data and consumable data sets
- Goals: What we want to do with this data
- Requirements: What needs to happen for the data to meet the goals
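One lightweight way to capture the checklist above is a per-dataset record. The following Python sketch is purely illustrative — the field names and types are assumptions, not any standard governance schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataAssetRecord:
    name: str               # e.g., "customer transactions"
    location: str           # where: physical/system storage
    access_roles: list      # who: roles that have or should have access
    definitions: dict       # what: "customer", "vendor", "transaction", ...
    structure: str          # how: current schema or format of the data
    source_quality: str     # quality of the source data today
    target_quality: str     # desired quality of consumable data sets
    goals: list = field(default_factory=list)         # what we want to do with the data
    requirements: list = field(default_factory=list)  # what must happen to meet the goals
```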
What is a Data Governance Policy and Why is it Important?
Data governance policies are guidelines that you can use to ensure your data and assets are used properly and managed consistently. These guidelines typically include policies related to privacy, security, access, and quality. Guidelines also cover the roles and responsibilities of those implementing policies and compliance measures.
The purpose of these policies is to ensure that organizations are able to maintain and secure high-quality data. Governance policies form the base of your larger governance strategy and enable you to clearly define how governance is carried out.
Data Governance Roles
Data governance operations are performed by a range of organizational members, including IT staff, data management professionals, business executives, and end users. There is no strict standard for who should fill data governance roles but there are standard roles that organizations implement.
Chief Data Officer
Chief data officers are typically senior executives who oversee your governance program. This role is responsible for acting as a program advocate, working to secure staffing, funding, and approval for the project, and monitoring program progress.
Data Governance Manager and Team
Data governance managers may be covered by the chief data officer role or may be separate staff. This role is responsible for managing your data governance team and having a more direct role in the distribution and management of tasks. This person helps coordinate governance processes, leads training sessions and meetings, evaluates performance metrics, and manages internal communications.
Data Governance Committee
The data governance committee is an oversight committee that approves and directs the actions of the governance team and manager. This committee is typically composed of data owners and business executives.
They take the recommendations of the data governance professionals and ensure that processes and strategies align with business goals. This committee is also responsible for resolving disputes between business units related to data or governance.
Data stewards are the individual team members responsible for overseeing data and implementing policies and processes. These roles are typically filled by IT or data professionals with expertise on data domains and assets. Data stewards may also play a role as engineers, quality analysts, data modelers, and data architects.
A 4-Step Data Governance Model
Managing data governance principles effectively requires creating a business function, similar to human resources or research and development. This function needs to be well defined and should include the following process steps:
- Discovery—processes dedicated to determining the current state of data, which processes are dependent on data, what technical and organizational capabilities support data, and the flow of the data lifecycle. These processes derive insights about data and data use for use in definition processes. Discovery processes run simultaneously with and are used iteratively with definition processes.
- Definition—processes dedicated to the documentation of data definitions, relationships, and taxonomies. In these processes, insights from discovery processes are used to define standards, measurements, policies, rules, and strategies to operationalize governance.
- Application—processes dedicated to operationalizing and ensuring compliance with governance strategies and policies. These processes include the implementation of roles and responsibilities for governance.
- Measurement—processes dedicated to monitoring and measuring the value and effectiveness of governance workflows. These processes provide visibility into governance practices and ensure auditability.
Data Governance Maturity Model
Evaluating the maturity of your governance strategies can help you identify areas of improvement. When evaluating your practices, consider the following levels.
Level 0: Unaware
Level 0 organizations have no awareness of what data governance means and no system or set of policies defined for data. This includes a lack of policies for creating, collecting, or sharing information. No data models are outlined and no standards are established for storing or transferring data.
Strategy planners and system architects need to inform IT and business leaders about the importance and benefits of data governance and enterprise information management (EIM).
Level 1: Aware
Level 1 organizations understand that they are lacking data governance solutions and processes but have few or no strategies in place. Typically IT and business leaders understand that EIM is important but have not taken action to enforce the creation of governance policies.
Planners and architects need to begin determining organization needs and developing a strategy to meet those needs.
Level 2: Reactive
Level 2 organizations understand the importance and value of data and have some policies in place to protect data. Typically, the practices used to protect data by these organizations are ineffective, incomplete, or inconsistently enforced.
Management teams need to push for consistency and standardization for the implementation of policies.
Level 3: Proactive
Level 3 organizations are actively working to apply governance, including implementing proactive measures. Data governance is a part of all organizational processes. However, there is typically no universal system for governance. Instead, information owners are responsible for management.
Organizations need to evaluate governance at the departmental level and centralize responsibilities.
Level 4: Managed
Level 4 organizations have developed and consistently implemented governance policies and standards. These organizations have categorized their data assets and can monitor data use and storage. Additionally, oversight of governance is performed by an established team with roles and responsibilities.
Teams should actively track data management tasks and perform audits to ensure that policies are applied consistently.
Level 5: Effective
Level 5 organizations have achieved reliable data governance structures. They may have individuals in their teams with data governance certifications and have established experts. These organizations can effectively leverage their data for competitive advantage and improvements in productivity.
Teams should work to maintain governance and verify compliance. Teams may also actively investigate methods for improving proactive governance. For example, by researching best practices for specific governance cases, like big data governance.
Data Governance Best Practices
A data governance initiative must start with broad management support and acceptance from stakeholders who own and manage the data (called data custodians).
It is advisable to start with a small pilot project, on a set of data which is especially problematic and in need of governance, to show stakeholders and management what is involved, and demonstrate the return on investment of data governance activity.
When rolling out data governance across the organization, use templates, models and existing tools when possible in order to save time and empower organizational roles to improve quality, accessibility and integrity for their own data. Evaluate and consider using data governance tools which can help standardize processes and automate manual activities.
Most importantly, build a community of data stewards willing to take responsibility for data quality. Preferably, these should be the individuals who already create and manage data sets, and understand the value of making data usable for the entire organization.
Imperva Data Governance Tools
Master Data Management (MDM) tools are commonly used in data governance projects, to define a business glossary which is a single point of reference for critical business data. MDM tools help define official data types, categories and values—for example, an official list of product catalog numbers—and manage business workflows related to this Master Data.
Security tools are also crucial for data governance, and responsible for the task of safeguarding sensitive data.
Imperva File Security is one such tool, built specifically to assist with governance. With it, you can monitor files and databases across the organization, to:
- Discover and map file and database servers
- Identify securing sensitive data such as social security numbers, credit card data, etc.
- Gain visibility and control over current usage of data
- Enable role- and workflow-based management of data—allowing you to grant access to data stewards to the data for which they are responsible, at the appropriate stages of its lifecycle
- Create compliance reports for organizational data
Beyond File Security, Imperva’s data security solution protects your data wherever it lives—on premises, in the cloud and in hybrid environments. It also provides security and IT teams with full visibility into how the data is being accessed, used, and moved around the organization.
Our comprehensive approach relies on multiple layers of protection, including:
- Database firewall—blocks SQL injection and other threats, while evaluating for known vulnerabilities.
- User rights management—monitors data access and activities of privileged users to identify excessive, inappropriate, and unused privileges.
- Data masking and encryption—obfuscates sensitive data so it would be useless to the bad actor, even if somehow extracted.
- Data loss prevention (DLP)—inspects data in motion, at rest on servers, in cloud storage, or on endpoint devices.
- User behavior analytics—establishes baselines of data access behavior, uses machine learning to detect and alert on abnormal and potentially risky activity.
- Data discovery and classification—reveals the location, volume, and context of data on premises and in the cloud.
- Database activity monitoring—monitors relational databases, data warehouses, big data and mainframes to generate real-time alerts on policy violations.
- Alert prioritization—Imperva uses AI and machine learning technology to look across the stream of security events and prioritize the ones that matter most.
Failover to a remote location is a mature technology. So is cloud storage. But when users want to failover their virtual environments to the cloud, they can face distinct challenges.
Although both processes use replication, cloud failover is much more than replicating backup data to the cloud for later recovery. The failover process uses the cloud as a secondary DR site: standby servers take over the processing of a failed VM environment for uninterrupted application performance, then fail back to the primary data center once the event is resolved. Failover to the cloud may be automated or manual; both have advantages and disadvantages.
Let’s define some specifics. We’re talking about virtual-to-virtual here. It’s technically possible to failover on-premise physical servers to physical servers in the cloud using bare metal recovery (BMR) technology. But it’s impractical. Few (if any) cloud DR vendors support it because they are based on virtual server technology. VM architecture allows users to avoid the issue of maintaining identical hardware in the secondary data center, which is a huge part of the cloud-based DR value proposition.
We’ll also discuss failover in the context of public clouds. Although failover is certainly possible in company-owned private clouds, it defeats the purpose of simple scalability that the public cloud offers.
What You Need to Know
Why is failover to remote sites a mature technology while failing over to the cloud is not? The cloud itself is the difference.
It is undeniably attractive for its scalability and economics, and once the failover site is tested and complete it can be relatively simple to maintain. With virtual failover you do not need maintain nearly identical hardware like you must in a remote site, and you gain near-infinite scalability. However, there are also real challenges in failing over production data to the cloud.
Maintaining Service Levels
Backup data alone is pretty low-risk. Public cloud reliability is very high, and availability is high and improving thanks to distributed operations. But when it comes to critical business applications, the risks of cloud storage scale up well beyond those of simple backup and recovery (BUR). Thanks to sluggish data movement over the Internet, remote failover to the cloud for virtualized production storage with acceptable RTO and RPO is fairly new.
Backing up server images to the cloud is pretty simple if you have the necessary bandwidth. But running those applications in the cloud in a failover scenario is a different kettle of fish. To begin with, you will need separate failover domains for VMware and Hyper-V. You might need separately configured domains for specific applications too in order to provide proper service levels for failed over applications.
Test your applications before trusting them to the cloud DR site. Amazon, Google, Azure and other large public clouds are capable of offering the performance need (at a price) but you will need to test your bandwidth and configurations.
Invest in Bandwidth
Bandwidth plays a critical part in using the cloud as a DR site. Virtualized data centers produce large snapshots and a lot of them. Efficiently managing your snapshots is key to efficiently managing a failover DR site in the cloud, especially if you are looking at a cloud gateway product to accelerate data transport times. They can work very well in lower traffic environments but can bottleneck in high-volume replication environments.
Whether you use a cloud gateway or not, only replicate delta-level changes and practice dedupe and compression. You will also need to avoid continuous snapshot replication if your service levels allow. Continuous or near-continuous snapshot replication is a drain on LAN resources not to mention on Internet pipes. In any case, effective snapshot algorithms are a must-have for successful cloud-based failover.
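To illustrate what replicating only “delta-level changes” means in practice, here is a hypothetical block-hashing sketch in Python. The block size and hashing scheme are assumptions for illustration; real replication products implement this far more efficiently at the storage layer:

```python
import hashlib

BLOCK = 4 * 1024 * 1024  # assumed 4 MB replication block size

def block_hashes(path: str) -> list:
    """Hash a snapshot file block by block."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(prev_hashes: list, snapshot_path: str) -> list:
    """Return indexes of blocks that differ from the previous snapshot;
    only these blocks need to cross the Internet pipe."""
    changed = []
    for i, h in enumerate(block_hashes(snapshot_path)):
        if i >= len(prev_hashes) or prev_hashes[i] != h:
            changed.append(i)
    return changed
```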
Security and Availability
Another challenge is security. Securing backup and archive data in the cloud is important; securing and accessing production data is a lot more so. You need both reliability and availability: reliability in that your cloud provider isn’t going to lose your data; availability in that you can access your data when you need to. Work out your service levels with your provider. You’ll be paying more than simple BUR but you don’t want to mess with success when it comes to applications.
Do your due diligence on encryption levels and make encryption decisions for data-at-rest (which you probably need) and data-in-flight (which you may or may not need). Also watch out for multi-tenant issues. The public cloud is a massively scaled multi-tenant environment. One risk is performance degradation if other tenants unexpectedly consume massive resources. The last thing you want is someone else’s surprise consumption grabbing your resources just as your applications launch from your cloud DR site. Understand how your public cloud provider and your DR vendor protect you from other tenants and from system failures.
Another potential issue is with automated failover. Automating DR, while in general a best practice for critical DR, is not a magic bullet because of the so-called split-brain event. This occurs when an error at the VM level triggers automated failover even though the VM was not in fact in a failure state. In 2015, automated failover to the cloud is better at monitoring paths and events, but it is still an issue to be aware of. In many cases, an immediate alert to an IT team should a VM fail might be a better solution than automation alone.
The Dynamic Cloud
The cloud is a dynamic environment, yet successful failover depends on users being able to find the ported application and its data. One vendor development choice is to use cloud-based clusters as the failover DR site.
MS Windows Server uses the clustering method as a proven DR technology between on-premise and remote sites. However, Windows-based clustering needs access to Active Directory. This means that IT will need to extend AD to the cloud, which requires ongoing synchronization between the network and cloud AD versions.
The more common development technique is replicating VMs and their data to the cloud so that users are transparently redirected to the cloud should the on-premise environment fail. The drawback to this architecture is resolving IP address and DNS record changes to accommodate the changed production site.
These days most service providers and vendors propagate changes for you or provide tools to do so more easily. For example, Amazon Route 53’s DNS web service automates both types of changes for developers and users, making it easier to perform failover processes within the cloud. Another way to solve the addressing issues comes from newer vendors who built their cloud-based DR offerings from the ground up. Zadara, with its Virtual Private Storage Array (VPSA), uses the public cloud to provide enterprise-level DR services on AWS and other cloud providers, and automates dynamic address changes.
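As a sketch of what the Route 53 approach looks like from code, the boto3 call below upserts a primary failover record tied to a health check; the hosted zone ID, health check ID, hostname, and address are placeholders, not real values:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder zone ID
    ChangeBatch={
        "Comment": "Primary record; Route 53 fails over when the health check fails",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",  # a matching SECONDARY record points at the DR site
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": "00000000-0000-0000-0000-000000000000",
            },
        }],
    },
)
```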
Why Bother? Because It’s Worth It
When you get the setup and service levels right, virtual failover to the cloud is an excellent DR option. Even with the complexity of initial setup and testing, it’s easier than leasing a remote site and physically building a second data center, not to mention the hassle and risk of keeping hardware and software essentially identical. Instead you’ll be replicating to a highly flexible and dynamically scaled environment; not a small consideration for anyone who has tried to keep two data centers in lockstep.
You’ll probably want to invest in higher bandwidth, or at the least invest in products that give you bandwidth optimization techniques – ideally you will invest in both. However, once you have made the additional investment then ongoing costs can be quite reasonable. In addition to avoiding the expense of creating and maintaining the secondary data center, you do not have to pay for staff at the secondary data center. And you can free up existing IT staff to do different high value projects.
Management may be similar to what you are used to. If you are already using VMware or Hyper-V tools to replicate to a secondary data center, you can use the same tools to replicate to the cloud. The same thing is true of third-party products since they will preserve as much as possible of familiar hypervisor console and toolsets.
Hyper-V, for example, uses Azure-centric Hyper-V Replica with Azure Site Recovery Manager to replicate and failover VMs in Virtual Machine Manager (VMM) clouds within Azure. Hyper-V Recovery Manager (HRM) automates more of this process. VMware offers Site Recovery Manager (SRM); its newer public cloud option recovery is VMware vCloud Air Disaster Recovery. Unlike SRM, Air DR provides native cloud-based DR for VMware vSphere. vCloud Air DR is built on vSphere Replication’s asynchronous replication and failover.
Not Just for DR
Drivers for cloud-based failover vary. DR is the biggest driver but data migration, test/dev and additional processes also benefit.
· VM migration. The process also works for planned processes like VM migration. A Nutanix user reported that they used Nutanix Cloud Connect as a failover site for virtualized web app migrations. Nutanix manages BUR, DR and test/dev in the public cloud using Nutanix Prism and Cloud Connect. The cloud-based Controller VM (CVM) cluster operates exactly like a remote cluster. Data moves from the on-premise cluster to the cloud accordingly.
A few days in advance of the planned migration, the user transferred all affected applications and data to the cloud by manually shutting down the VMs, waiting for the automated failover to complete, then activating the cloud cluster. They then restored the applications and data to the new environment when they were ready.
· DR tests. DR tests are traditionally awkward, unrealistic, and time-consuming, which is why companies rarely test their DR plans. With failover in the cloud, IT can easily test failover procedures and recovery times without committing to an identical remote data center. Zerto Virtual Replication is a hypervisor-based replication product that supports large-scale DR and testing in the cloud as well as automated failover and failback. Unitrends Reliable DR manages and automates application-specific testing for multi-VM applications and guarantees failover in virtualized production environments.
· Bare Metal Recovery (BMR). Virtualization in the cloud can also aid in bare metal recovery (BMR). BMR is the process of restoring an identical system in case of failure; all the way from an OS, drivers, applications and production data. Physical BMR requires an identical hardware environment for error-free restores; otherwise you’re going to see serious errors. In virtual environments, vendors like Zetta.net can recover a VM image to spin up bare metal. This makes for a much more efficient and less error-prone BMR procedure.
Given all of its attendant issues, is cloud-based failover worth researching and investing in? For many companies, yes; but not all. If you have a remote DR setup that is working for you there is no need to abandon it. This is certainly the case if your company owns multiple data centers and you have replication and DR set up between them.
However, even then IT might consider testing cloud-based DR for a pilot project in a virtualized server environment. Virtual networks are growing very fast and they throw off a lot of data. The scalability of the cloud offers real advantages in these specific environments.
The software-defined wide area network (SD-WAN) is a technology for configuring and implementing an enterprise WAN — based on software-defined networking (SDN) — to effectively route traffic to remote locations such as branch offices. SD-WAN technology derives significant flexibility and agility benefits from removing the burden of traffic management from physical devices and transferring it to software — the essence of SDN.
In 2019, it has become clear that SD-WAN has secured its position as the way forward for enterprise WAN connectivity. Market adoption is growing rapidly, and industry experts have declared a winner in the SD-WAN vs MPLS debate. For example, Network World called 2018 the year of SD-WAN, and before the end of Q3 2018 Gartner declared SD-WAN is killing MPLS.
In this article we’ll discuss top business benefits of SD-WAN.
How does SD-wan work?
SD-WAN works by separating applications from the underlying network services with a policy-based, virtual overlay. This overlay monitors the real-time performance characteristics of the underlying networks and selects the optimum network for each application based on configuration policies.
Where software-defined networking (SDN) deployed in a service provider network enables flexible deployment and usage-based solutions between high-capacity sites (like headquarters and data centers), SD-WAN services help optimize traffic flows for performance and cost at branch sites.
By replacing traditional branch routers with appliances that assess and utilize different transport technologies based on their performance, it allows enterprises to route large portions of their traffic over cost-effective services, such as broadband.
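A toy version of that per-application path selection might look like the following Python sketch. The telemetry numbers, SLA thresholds, and link names are all invented for illustration:

```python
# Assumed per-transport telemetry, as measured by the edge appliance.
links = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.0, "cost_per_gb": 0.90},
    "broadband": {"latency_ms": 35, "loss_pct": 0.4, "cost_per_gb": 0.05},
    "lte":       {"latency_ms": 60, "loss_pct": 1.1, "cost_per_gb": 2.00},
}

# Per-application policy: hard SLA floors, then prefer the cheapest link.
policies = {
    "voip":   {"max_latency_ms": 30,  "max_loss_pct": 0.5},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 2.0},
}

def pick_link(app: str) -> str:
    sla = policies[app]
    eligible = [
        name for name, m in links.items()
        if m["latency_ms"] <= sla["max_latency_ms"] and m["loss_pct"] <= sla["max_loss_pct"]
    ]
    if not eligible:  # no link meets the SLA: fall back to lowest latency
        return min(links, key=lambda n: links[n]["latency_ms"])
    return min(eligible, key=lambda n: links[n]["cost_per_gb"])

print(pick_link("voip"))    # -> "mpls" (only link under the 30 ms floor)
print(pick_link("backup"))  # -> "broadband" (cheapest link within SLA)
```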
The main goal of SD-WAN technology is to deliver a business-class, secure, and simple cloud-enabled WAN connection with as much open and software-based technology as possible.
Different Types of SD-WAN Architectures
Although the SD-WAN packages you can avail from service providers vary, they often fall under three architecture types:
On-Premises
SD-WAN deployed exclusively on-premises is less expensive than other architectures, since it can only establish connections between remote sites and not cloud services. A use case for this SD-WAN architecture is with companies that host most — if not all — of their business applications locally.
Cloud-Enabled
As the name suggests, a cloud-enabled SD-WAN solution can take advantage of cloud gateways to communicate with your cloud applications. This could be anything — from your cloud-based CRM software to office applications.
Cloud-Enabled plus Backbone
In addition to the benefits of cloud-enabled SD-WAN, a “cloud-enabled plus backbone” SD-WAN solution capitalizes the service provider’s nearest Point of Presence or POP. This could be the SD-WAN company’s high-speed, fiber-optic line, which could give your network performance a significant boost.
Now let’s take a look at SD-WAN benefits.
Reduced WAN costs
MPLS bandwidth is expensive. On a “dollar per bit” basis, MPLS is significantly higher than public Internet bandwidth. Exactly how much more expensive will depend on a number of variables, not the least of which is location. However, the costs of MPLS aren’t just a result of significantly higher bandwidth charges. Provisioning an MPLS link often takes weeks or months, while a comparable SD-WAN deployment can often be completed in days. In business, time is money, and removing the WAN as a bottleneck can be a huge competitive advantage.
Enhanced WAN performance
SD-WAN can be configured to prioritize business-critical traffic and real-time services like Voice over Internet Protocol (VoIP) and then steer it over the most efficient route. By facilitating critical applications through reliable, high performance connections, IT teams can help reduce packet loss and latency issues, thereby improving employee productivity and boosting staff morale.
Improved network security
While traditional WAN solutions handle security through multiple appliances at each branch office, SD-WAN has inbuilt security protocols. SD-WAN solutions have built-in encryption capabilities, ensuring that only authorized users are able to access and view assets connected to the corporate network.
SD-WAN also facilitates granular control and enables companies to create policies that tell the network how certain types of traffic should be treated, keeping high-risk traffic from ever entering the network in the first place.
Increased WAN availability
While MPLS has a solid reputation for reliability, it isn’t perfect and can fail. Redundancy at the MPLS provider level is expensive and can be a pain to implement.
SD-WAN makes leveraging different transport methods easy, thereby enabling high-availability configurations that help reduce single points of failure. If your fiber link from one ISP is down, you can fail over to a link from another provider.
Lowers WAN OpEx and CapEx costs
Realizing ROI for Software-Defined Networking in the campus LAN or even the data center has proven elusive. But not so with SD-WANs. The ROI is dramatic and immediate. With an SD-WAN solution you can now augment or even replace MPLS connections with broadband internet services to connect users to applications and lower WAN costs dramatically.
A simplified, centrally managed SD-WAN architecture also lowers OPEX. Bringing a new branch or remote location online is easy and can be done in just a few minutes. No specialized IT expertise is required on premise at the branch.
Greater business agility
Another key selling point of SD-WAN services is increased network performance and agility. Since SD-WAN can automatically funnel your traffic through the fastest and most reliable connection, common network issues such as jitter and latency are considerably reduced.
SD-WAN facilitates the rapid deployment of WAN services, such as bandwidth and firewall. As a result of this, businesses can distribute operations to branch sites without the need to send IT personnel. Bandwidth can also be added or reduced easily as business requirements evolve, giving businesses extra agility to stay ahead of competitors.
While legacy WAN has had its place as a business solution, it’s no longer viable due to increased costs, degraded cloud performance and limited agility. SD-WAN is a much better option.
GlobalDots offers a solution that helps enterprises have all the advantages of a SD-WAN, without the limitations. It’s a secure, cloud-based SD-WAN as a service with built-in global backbone and integrated security.
If you have any questions about how we can help you connect all your business resources and data centers into a secure, unified network, contact us today to help you out with your performance and security needs.
What does the typical consumer’s mobile connectivity look like?
Whether they have a single smartphone or an array of different mobile devices for different purposes, most consumers use a mixture of WiFi and cellular connectivity. When it is available, WiFi is preferable thanks to its speed and reliability. Out and about, mobile networks fill in the gaps – and the providers of those networks fight to deliver an experience which matches WiFi connectivity.
However, the 5G era is upon us, and with it the promise of faster and more reliable cellular mobile connectivity than ever before. This could mean that mobile network providers are able to offer consumers broadband-equivalent speeds, particularly in urban and highly-populated areas.
So what will this mean for ISPs?
The 5G era will increase home network expectations
As consumers are increasingly aware, the promise of 5G is enormous. Drastically faster network speeds and greater bandwidth could in turn power everything from smart cities to the ability to consume rich multimedia content on the move. The first 5G-enabled phones are now available to buy in some regions. The standard is set to enable consumers to access performance equivalent to their home WiFi – on mobile networks.
This isn’t to say that home WiFi will become obsolete, of course. Far from it. Home WiFi networks will still remain a core requirement for consumers for two main reasons. First, data-hungry applications will mean that data allowances will quickly be used up on 5G networks. Only individuals with unlimited data plans will be able to, say, stream high-definition video or download hefty multimedia files across mobile networks day after day, and it may be that such plans get much more expensive in line with the increased capacity of 5G networks. Consumers will still need to have an alternative to monthly data plans.
Second, many of the connected devices in the home, from smart TVs to gaming consoles, still require a home network to connect to the internet, and can’t connect via mobile networks. Furthermore, the rise of 5G is happening in conjunction with more and more household appliances requiring network connectivity as the IoT moves into our homes. Many smart home devices require connectivity to WiFi rather than a mobile cellular network.
However, this does mean there will be a newfound pressure for the home network performance to match that of 5G. ISPs therefore need to ensure that the home network is functioning better than ever before. The question is – how?
ISPs need to ensure the home network is fit for the 5G era
Achieving home networks fit for the 5G era means delivering reliable coverage throughout the entire home, so that users can switch their smartphones and tablets from 5G to an equally capable home network. It also means delivering reliable coverage even when large numbers of devices are connected to that network; last year, the government estimated that every household in the UK owned at least ten connected devices and has predicted that this figure will rise to 15 by next year. Many of those devices will have particularly high expectations places on them in terms of the amount of bandwidth they require.
All this is particularly important when it comes to realising the benefits of new WiFi technology too, such as WiFi 6. This new generation of WiFi not only offers a speed boost; it is also focused on improving performance when multiple devices are connected to the same network.
An outstanding offering
But how can ISPs guarantee reliable WiFi coverage throughout the home? Offering outstanding WiFi means balancing consumers’ need for high-speed connectivity with their data-rich services, including running multiple data-hungry applications and devices at the same time.
ISPs need to ensure that they are working to help consumers achieve the best possible combination of speed and performance on their home networks – particularly in relation to the environmental challenges experienced by those living in large or complex buildings. Powerline communications (PLC) adaptors, which boost the performance of WiFi within the home, have long been an effective option for helping ISPs to deliver top-quality performance – and now the updated G.hn powerline communications standard is set to deliver an even greater boost.
This second-generation update to the G.hn standard means that G.hn PLC units should be capable of delivering the speeds needed for 4K VR or multiple HD streaming – not just to selected access points but to every power outlet in the home. This is in itself a hefty performance uplift, and also means improved stability and a PLC range of up to 500 metres. In other words, second-generation PLC adaptors can help ISPs to get ahead of the 5G curve and deliver the perfect balance of speed and service on home-based WiFi.
The 5G era is well underway. Whilst it is unlikely to herald the end of the home WiFi network, it does look set to increase consumer expectations of home networking. Forward-thinking ISPs need to get ahead of the game now, and be ready to provide a whole-home WiFi solution which works.
Sebastian Richter, Senior Product Manager Operator Solutions, devolo
Using the OCR Feature
You can use the OCR feature to extract text from an image.
The OCR Feature command provides the following options:
- Capture Window: Specify the window title. Automation Anywhere captures the window as an image.
- Capture Area: Specify a specific area of the window to capture.
- Capture Image by Path: To extract text contained within an image stored on your local or network drive, specify the location of the file. The drive must be accessible when you run the task.
- Capture Image by URL: Specify a website URL that contains the image you want to capture.
Using Keywords to Identify Captured Text
To make specifying the target text easier, use the Before and After keywords. For example, in the text string "Name: ABC Inc. Location:", to capture only "ABC Inc." (the text that appears after "Name:" and before "Location"), specify Before = "Location" and After = "Name:". You can also trim the captured text to remove leading and trailing spaces.
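In effect, the Before/After logic captures the substring between two keywords and optionally trims whitespace. As a rough illustration (this is plain Python, not Automation Anywhere's implementation), the same behavior can be sketched in a few lines:

```python
import re

def capture_between(text: str, after: str, before: str, trim: bool = True):
    """Capture the substring that appears after `after` and before `before`."""
    pattern = re.escape(after) + r"(.*?)" + re.escape(before)
    match = re.search(pattern, text, flags=re.DOTALL)
    if match is None:
        return None
    captured = match.group(1)
    return captured.strip() if trim else captured  # trim leading/trailing spaces

print(capture_between("Name: ABC Inc. Location:", after="Name:", before="Location"))
# -> ABC Inc.
```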
The Border Gateway Protocol (BGP) is an inter-autonomous system routing protocol. An autonomous system is a network or group of networks under a common administration and with common routing policies. BGP is used to exchange routing information for the Internet and is the protocol used between Internet service providers (ISPs).
When BGP is used between autonomous systems (AS), the protocol is referred to as External BGP (EBGP). If a service provider is using BGP to exchange routes within an AS, the protocol is referred to as Interior BGP (IBGP).
BGP is a very robust and scalable routing protocol, as evidenced by the fact that BGP is the routing protocol employed on the Internet. The Internet BGP routing tables number more than 90,000 routes. To achieve scalability at this level, BGP uses many route parameters, called attributes, to define routing policies and maintain a stable routing environment.
BGP neighbors exchange full routing information when the TCP connection between neighbors is first established. When changes to the routing table are detected, the BGP routers send to their neighbors only those routes that have changed. BGP routers do not send periodic routing updates, and BGP routing updates advertise only the optimal path to a destination network.
BGP considers only synchronized routes with no autonomous system loops and a valid next hop.
The following process summarizes how BGP chooses the best route on a Cisco router:
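In summary, the documented order on Cisco routers is:
1. Prefer the path with the highest WEIGHT (a Cisco-proprietary, locally significant attribute).
2. Prefer the path with the highest LOCAL_PREF.
3. Prefer the path that was locally originated via a network or aggregate statement or through redistribution.
4. Prefer the path with the shortest AS_PATH.
5. Prefer the path with the lowest origin type: IGP is lower than EGP, which is lower than INCOMPLETE.
6. Prefer the path with the lowest multi-exit discriminator (MED).
7. Prefer eBGP paths over iBGP paths.
8. Prefer the path with the lowest IGP metric to the BGP next hop.
9. When both paths are external, prefer the path that was received first (the oldest one).
10. Prefer the route that comes from the BGP router with the lowest router ID and, as a final tie-break, the path from the lowest neighbor address.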
Remember: Only the best path is entered in the routing table and propagated to the BGP neighbors of the router.
Remember: BGP Multipath allows installation into the IP routing table of multiple BGP paths to the same destination. These paths are installed in the table together with the best path for load sharing. BGP Multipath does not affect bestpath selection. For example, a router still designates one of the paths as the best path, according to the algorithm, and advertises this best path to its neighbors.
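As a toy illustration of how such an attribute-by-attribute tie-break works (plain Python, not IOS, and omitting details such as the rules for when MEDs are comparable), the selection can be expressed as a single sort key:

```python
from dataclasses import dataclass

@dataclass
class Path:
    neighbor: str
    weight: int = 0          # higher wins (Cisco-proprietary)
    local_pref: int = 100    # higher wins
    local_origin: bool = False
    as_path_len: int = 0     # shorter wins
    origin: int = 0          # 0 = IGP, 1 = EGP, 2 = incomplete; lower wins
    med: int = 0             # lower wins
    ebgp: bool = True        # eBGP preferred over iBGP
    igp_metric: int = 0      # lower metric to the next hop wins

def preference_key(p: Path):
    # "Higher wins" attributes are negated so that min() selects the best path.
    return (-p.weight, -p.local_pref, not p.local_origin, p.as_path_len,
            p.origin, p.med, not p.ebgp, p.igp_metric)

paths = [
    Path("10.0.0.1", local_pref=200, as_path_len=3),
    Path("10.0.0.2", local_pref=200, as_path_len=2),
]
best = min(paths, key=preference_key)
print(best.neighbor)  # 10.0.0.2: the shorter AS_PATH wins at equal LOCAL_PREF
```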
NASA and Cisco Inc. announced Tuesday a partnership to develop an online collaborative global monitoring platform called the “Planetary Skin” to capture, collect, analyze and report data on environmental conditions around the world.
Under the terms of a Space Act Agreement, NASA and Cisco will work together to develop the Planetary Skin as an online collaborative platform to capture and analyze data from satellite, airborne, sea- and land-based sensors across the globe. This data will be made available for the general public, governments and businesses to measure, report and verify environmental data in near-real-time to help detect and adapt to global climate change.
“In the past 50 years, NASA’s expertise has been applied to solving humanity’s challenges, including playing a part in discovering global climate change,” said S. Pete Worden, director of NASA’s Ames Research Center. “The NASA-Cisco partnership brings together two world-class organizations that are well equipped with the technologies and skills to develop and prototype the Planetary Skin infrastructure.”
Cisco and NASA will kick off Planetary Skin with a series of pilot projects, including “Rainforest Skin,” which will be prototyped during the next year. Rainforest Skin will focus on the deforestation of rainforests around the world and explore how to integrate a comprehensive sensor network. It also will examine how to capture, analyze and present information about the changes in the amount of carbon in rainforests in a transparent and useable way. According to scientists, the destruction of rainforests causes more carbon to be added to the atmosphere and remain there. That contributes significantly to global warming.
“Mitigating the impacts of climate change is critical to the world’s economic and social stability,” said John Chambers, Cisco chief executive officer. “This unique partnership taps the power and innovation of the market and harnesses it for the public good. Cisco is proud to work with NASA on this initiative and hopes others from the public, private and not-for-profit sectors will join us in this exciting endeavor.”
Planetary Skin, deployed at a global scale, would provide that measurement, reporting and verification (MRV) infrastructure for both mitigation and adaptation. The Planetary Skin MRV infrastructure could unlock US$350 billion per year in 2010–2020 for the incremental capital expenditure required for mitigation of, and adaptation to, climate change.
As a rough estimate, this "low carbon infrastructure capex" unlocked by Planetary Skin would increase global GDP by US$450 billion (1 percent of global annual GDP) in the 2010–2020 timeframe. A 1 percent boost to global GDP could in turn decrease unemployment by 1 percent during a slack labor market.
Cyberattacks launching ransomware
In today's connected world, cybersecurity is at the forefront of IT security, network security and computer security. Cyberattacks are successfully penetrating small and medium-sized businesses (SMBs) as well as large enterprises. Often, cyberattacks deliver a form of malware called ransomware. Ransomware encrypts files on the infected computer so that the user cannot open them. After the ransomware encrypts the files, the only way to open them again is to decrypt them with the private decryption key. And in order to obtain the private decryption key, the user must pay the hackers a ransom (the hackers usually require payment in bitcoin to help hide their identity). The following FBI article provides further insights on cyberattacks and ransomware: Cyber Crime. In addition, the following article from Wikipedia defines cyberattacks in greater detail: Cyberattack defined.
How do computers become infected?
The most common attack vector used by hackers is phishing email. Most email users have received phishing emails saying they are due a refund from the IRS, or containing a receipt for a large purchase on their credit card which, of course, they never made. Other phishing emails include fake emails from a social network like Facebook saying the user needs to update their profile, fake emails from a financial institution or insurance company purporting to require the user to verify some information, emails with links to open a document that another person has supposedly shared with the user, and fake emails from UPS or FedEx with an attachment purporting to contain shipment tracking information. In reality, the hackers are constantly dreaming up new ways to trick users into clicking on bad links and opening malicious attachments.
The goal of these phishing emails is to get the user to click a bad link in the email or to open a malicious attachment. In either case, once the link is clicked and the malicious web site visited, or once the bad attachment is opened, automated code will launch that tries to exploit a vulnerability in the computer. The most common computer vulnerabilities are Flash, Java and browsers that are not updated with the latest security patches. If a vulnerability is found by the automated code, the next step will be to exploit the vulnerability to gain some measure of control over the computer. Often the exploited vulnerability will allow the initial automated code to have the computer contact the hacker’s command and control center through the Internet to then download additional code that starts encrypting the user’s files. Thus, it is more vital today than ever before that users not click on links in emails and not open email attachments unless the user is expecting the email. Even if the email purports to be from a known associate, the user should verify the person actually sent it if the contents appear to be strange.
How to use cybersecurity to protect against cyberattacks?
Hardened cybersecurity with a proper backup and disaster recovery system for data protection is the best defense against cyberattacks and ransomware. Implemented and managed properly, it will ensure you can recover from an attack:
- A firewall at the network perimeter must be properly configured, hardened, and include malware scanning.
- Each endpoint on the network must have an antivirus/antimalware program that is regularly updated, with monitoring and alerting in place so the IT help desk is notified if the program stops working or hasn't been updated.
- Computers should be configured to update the OS and applications on a regular basis. This includes Windows, as well as applications like Office, Flash, Java, and iTunes. Patch management is critical to a proper cybersecurity plan, and it too should be monitored, with alerting in place so support personnel can be notified when necessary.
- A web surfing protection / content filtering agent should be implemented on each computer to protect against visits to malicious sites.
- Finally, a managed backup and disaster recovery solution to protect your data is paramount in case an attack successfully exploits a system. With proper, recoverable backups, your files can be quickly restored without having to pay the hackers a ransom.
Here at IT Tropolis we implement and manage a hardened cybersecurity plan as discussed above. Contact us today for your customized cybersecurity proposal.
As per NHTSA statistics, more than 32,000 people lost their lives in the United States in 2013 in road accidents. There is no better use for technology than saving lives. Connected vehicles represent a seismic movement that is ready for prime time. It is at an inflection point where automobiles, telemetry, infrastructure, technology and most importantly the mind set change are converging to make connected vehicles a reality.
The term "connected vehicles" holistically refers to vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity. Safety benefits of connected vehicles include blind spot detection, pedestrian warning, and collision avoidance. Imagine the infrastructure interacting with the vehicle about upcoming school zones, construction areas, dangerous curves, or black ice conditions and immediately alerting the driver. According to one NHTSA study, up to 80 percent of accidents involving non-impaired drivers could be avoided using connected vehicle technologies.
The goal of connected vehicles is to improve safety, mobility, and environmental impact; provide new ways for "infotainment;" and deliver a slew of other business benefits. Making drivers safer, greener and smarter, and providing a way of naturally extending our connected lives through vehicles, is one of the biggest applications of the Internet of Things (IoT). With every model year changeover, more and more vehicles are being fitted with sensors, telematics and connectivity solutions.
This is all great, but the infrastructure, regulations and, above all, the massive data required to make this a reality are still evolving. Automotive companies will struggle with the variety and unstructured nature of the data. The variety, velocity and volume of the data make it unsuitable for traditional database processing. We need more sophisticated big data platforms, such as Hadoop, to process this massive data.
Auto companies have been capturing telematics and sensor data for a number of years. They just did not know how to fully store, leverage and monetize it. With emerging big data and analytics, it is now possible to crunch this massive amount of data in real time and surface actionable alerts through vehicle infotainment systems. Advances in sensor data collection and analytics, driven (pun intended) by the automotive IoT and based on sound data science principles, are the game changer.
The true value of big data is in creating unique customer insights. It is about consuming inbound feeds from telematics, sensors, infrastructure and the environment, and then creating outbound information, alerts and marketing feeds through the right channels at the appropriate times. Insurance companies can shift the consumer mindset from "Big Brother is watching" to one of educating drivers with big data insights on their driving habits and reducing premiums for safe drivers.
The advantage of connected vehicles powered by analytics is that now we can extend connections to traffic lights, highway sensors, tunnels and bridges. This tremendous amount of data can be coupled with information gathered from weather services and other sources to provide a true picture of the road conditions, safety hazards and traffic congestion at an individual vehicle level.
The context and reliability of alerts is very important. There is a real danger that, due to a lack of rigor in applying data analytics, consumers may get flooded with a multitude of meaningless alerts. Think of transmitting an alert when there is a sudden increase in hard braking events from vehicles in close proximity to each other. It is useful if it happens on a small stretch of a highway during an unexpected whiteout condition; this type of alert could prevent chain accidents. Whereas the same braking event happening when a large number of automobiles exit a sporting venue in a crowded downtown after a football game may not be that alarming. We need sophisticated data-science-driven predictive and prescriptive models that can separate the noise from real signals.
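As a purely hypothetical sketch of such a context-aware rule (every name and threshold below is invented for illustration), the same braking spike can be weighed differently depending on visibility and expected congestion:

```python
from dataclasses import dataclass

@dataclass
class Context:
    vehicles_nearby: int     # vehicles reporting from the same road segment
    hard_brakes_60s: int     # hard-braking events seen in the last minute
    visibility_km: float     # from a weather feed
    near_venue_exit: bool    # e.g., stadium traffic letting out

def should_alert(ctx: Context) -> bool:
    if ctx.vehicles_nearby == 0:
        return False
    brake_ratio = ctx.hard_brakes_60s / ctx.vehicles_nearby
    if ctx.near_venue_exit:        # expected congestion: raise the bar
        return brake_ratio > 0.8
    if ctx.visibility_km < 1.0:    # whiteout-like conditions: alert early
        return brake_ratio > 0.2
    return brake_ratio > 0.5

print(should_alert(Context(40, 12, 0.5, False)))  # True: spike in low visibility
print(should_alert(Context(40, 12, 10.0, True)))  # False: stadium letting out
```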
Another big challenge is information security. The recent Chrysler Jeep hack that went viral is a good example of the uncharted territory we are entering. In this case hackers took control of an on-road Jeep and manipulated its air-conditioning and speed, among other things. Think of this scenario happening on a massive scale, where traffic signals are hacked into, or tunnels and bridges are maliciously jammed by sending false signals to vehicles. It could easily turn into a homeland security issue. Automakers who do not pay strategic attention to cybersecurity risk brand dilution due to the direct impact on reliability and safety, and the potential compromise of their customers' personal information.
Other industries such as publishing and retail have undergone complete transformation due to the Internet. The automotive industry has taken advantage of the efficiencies due to the Internet, such as global product design and supply chain improvements, but the connected vehicle is the one thing that will complete the transformation journey.
Innovative manufacturers are moving away from just “selling cars.” They are moving towards providing mobility solutions. A solution that involves an ecosystem of partners in which infrastructure providers, mobile data companies, repair shops, insurance providers and a host of other partners maximize the value to the customer.
Raman Mehta is the CIO at Visteon (NYSE:VC) and leads all facets of global information technology, including designing, developing and implementing global IT platforms and business processes to increase performance and help Visteon leverage technology as a competitive advantage.
Raman joined Visteon in April 2017 from Fabrinet, where he was senior vice president and CIO at the global engineering and manufacturing services provider of complex optical and electromechanical components. He previously served as CIO and chief process architect for EWIE, a Tier 1 supplier to Ford Motor Co., driving enterprise-wide technology transformation. Before that, he spent more than 13 years at Oracle USA, Inc., where he was a director and advised Fortune 500 clients on business transformation.
Raman has earned several leadership awards including CIO magazine's 2017, 2013 CIO 100 Award, Computerworld's 2012 Premier 100 IT Leaders Award, and a Crain's Detroit Business CIO award. He has presented at several prominent IT conferences and authored various white papers.
He has an MBA from the University of Michigan's Ross School of Business, and a Bachelor of Engineering degree in electrical and electronics from the Birla Institute of Technology and Science in Pilani, India.
The opinions expressed in this blog are those of Raman Mehta and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
In Part 2 of this series, I showed you how to create custom attributes. Now you must add those custom attributes to a class before you’ll be able to use them. Unless you’ve created an attribute that’s very similar to an existing attribute, it’s usually more practical to create a new object class than to simply make the attribute fit within an existing object class. This is especially true if you’re creating more than one new attribute.
Creating a Class Object
To create a class object, right-click on the Classes object in the Active Directory Schema snap-in and select the Create Class command from the resulting context menu. When you do, you’ll see a warning similar to the one that you saw when you created an attribute, telling you that this is a permanent action. After that, the process of creating a class is almost identical to that of creating an attribute. Simply enter a common name, LDAP Display Name, and an X.500 Object ID. The only aspect that differs is the Inheritance and Type section, in which you must enter the parent class (the class the new class will be based on). The Class Type section contains a drop-down list that gives you a choice of three class types. Because you’ll be adding custom attributes to this class, select Auxiliary Class from the Class Type drop-down list.
Now you’re ready to add your attributes to the class you’ve just created. To do so, expand the Class object in the console to reveal the classes it contains. Now, right-click on the class you created and select Properties from the resulting context menu to open the class’s properties sheet. On the properties sheet’s General tab, select the Show Objects Of This Class While Browsing check box. Next, switch to the Attributes tab. You can use the Add buttons on this tab to add mandatory and optional attributes. While you’re here, check out the Security tab; you can use it to control who has what rights to the class you’ve just created.
Adding the Class to a Structural Object
You're just about done. Before you can use the newly created class, you must add it to a structural class object, such as a computer or a user. To do so, select the structural class object from the list of classes and right-click on it. Select Properties from the resulting context menu to open the class's properties sheet. Select the Relationship tab, which lets you specify which auxiliary classes are included in the structural class. Simply click the Add button next to the Auxiliary Class section and select the auxiliary class you created earlier. Doing so will cause the structural class to inherit the attributes of the auxiliary class you created. In short, you've just added custom attributes to a structural class.
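For completeness, schema changes can also be scripted over LDAP instead of using the snap-in. The sketch below uses the Python ldap3 library; the server name, credentials, DNs, and class names are placeholders, and schema modifications should always be tested in a lab before touching production:

```python
from ldap3 import Server, Connection, MODIFY_ADD

# All names below are placeholders -- substitute your schema master,
# credentials, structural class, and the auxiliary class you created.
STRUCTURAL_CLASS_DN = "CN=User,CN=Schema,CN=Configuration,DC=example,DC=com"

server = Server("schema-master.example.com")
conn = Connection(server, user="EXAMPLE\\admin", password="********",
                  auto_bind=True)

# Equivalent of the Relationship tab: add the auxiliary class so the
# structural class inherits its attributes.
conn.modify(STRUCTURAL_CLASS_DN,
            {"auxiliaryClass": [(MODIFY_ADD, ["exampleAuxClass"])]})
print(conn.result)  # check 'description' for success or the failure reason
```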
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all.
The Patriot Act was passed in 2001 in response to terrorist attacks on American soil. However, we don’t usually think about how it helped define cybersecurity and how it continues to inform the framework for defining critical infrastructures and how we protect them in the US. With a growing number of connected devices, systems, as well as methods for monitoring and maintaining critical infrastructure, it has also become crucial to apply proper protection, prevention, detection, response and recovery measures.
Although the groundwork for protecting critical infrastructure was laid in 1996, the Patriot Act took the lead on protection in an increasingly digital and connected world. The Act defined critical infrastructure as “those systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” (source: https://en.wikipedia.org/wiki/Critical_infrastructure)
Presidential directives dating from Bill Clinton's initiatives in 1998 have also helped shape how we in the US define Critical Infrastructure Protection (CIP) and cybersecurity. Today the main sectors as defined by the CIP are: Banking and Finance, Transportation, Power, Information and Communications, Federal and Municipal Services, Emergency Services, Fire, Law Enforcement, Public Works, Agriculture, and even National Monuments.
In light of these definitions, the Patriot Act gave us an even clearer definition of what constitutes critical infrastructure and the mandate to protect it. However, technology often evolves in ways that are unanticipated, and the need to define how to protect that critical infrastructure was further laid out in 2014 by the National Institute of Standards and Technology (NIST). With the establishment of the NIST Cybersecurity Framework, we now have a policy framework and guidelines for how to protect critical infrastructure.
Considering the breadth of what is covered under CIP definitions and how losing any aspect of critical infrastructure could impact our lives, protecting these infrastructures is deeply meaningful. As we become more digital, connected, and reliant on control of systems through IT infrastructure, it is crucial to protect critical infrastructure. Maintaining its livelihood directly affects everything from public utilities to health and human services to chemical production. As someone who lives here in Atlanta, I hate to think of how we would be impacted by a breach into the critical infrastructure of the Centers for Disease Control and Prevention.
Since we now have a framework on how to keep critical infrastructure secure thanks to the NIST, I will look in more detail at how to develop and maintain proper safeguards according to the Cybersecurity Framework and how to enact incident response practices and recovery from potential attacks.
When proper incident response measures are not in place and a playbook for remediation is not readily available, critical infrastructure is left vulnerable to crippling cyberattacks.
But the challenge isn't only in preventing an attack. Even the best protective services cannot guarantee prevention of a successful attack 100% of the time. Implementing the right tools for threat detection, incident response, and forensic case management are all crucial elements in the fight to stay secure before and after a breach. Without these tools, systems can be lost for days at a time. And when it comes to critical infrastructure, that could mean power outages, losses of millions of dollars, or private information being captured and sold on the black market. When a successful cyberattack goes undetected, the proper resources are never notified for remediation and timely response. Furthermore, without security information and event management (SIEM) tools and a proper case management solution, recovery can drag out for days or weeks.
Protection and Prevention
The first aspect of protecting critical infrastructure requires a mix of cybersecurity, physical security and human resource management to control who has access. Protecting assets in this manner means giving access to a limited number of individuals and regulating how and when they access assets. On the cybersecurity front this means strictly regulating and monitoring which devices can gain access to critical assets and limiting when and where they can connect. In spite of some best efforts, employees with access to assets have been known to sabotage a company's infrastructure. In 2013, Citibank had a disgruntled employee who decided to shut down 90% of the company's network.
Although anomalies like the Citibank shut down are difficult to prepare for, proper security training and management is key to heading off internal breaches and compromises. Employees are simultaneously a company’s best asset and biggest vulnerability. Spear phishing still remains the biggest threat to security and can only be mitigated through proper training and even running “fire drills” that test employee competency and awareness. Completing regular cybersecurity training and awareness is the best way to get in front of security concerns and keep employees from unintentionally introducing harm to an organization’s critical infrastructure. Organizations are encouraged to participate in partner education through webinars, vendor training, and company best practices. To maintain consistency with training and education it is important to have employee agreements and acceptable use policies written around cybersecurity and agreed to by personnel.
Important data security measures also need to be taken to keep information from leaking that could be used to gain access to critical infrastructure management or protected information and records. This is done by locking down or limiting USB access, blocking shadow applications, and limiting what files and data can be easily uploaded to a third party through desktop applications or mobile devices. Proper data security ensures that critical infrastructure is protected and managed in a way that is consistent with the company’s security strategy to protect the integrity and accessibility of data.
Many critical infrastructure systems are still maintained and operated on legacy software and operating systems. Often this is because of the fear of disrupting critical systems during a glacial upgrade. Yet allowing unsupported legacy systems to run critical machinery, utilities, or applications is a huge threat to security. Despite the fear of disruption, it is imperative that this vulnerability gap be bridged by laying out a proper upgrade path. The only way for an upgrade initiative to be as unobtrusive as possible is to engage the proper vendor channels and security partners, and to lay out a well-documented plan with a timeframe.
Finally, when it comes to protective measures, processes and procedures need to be written out and documented, including the roles and responsibilities of personnel according to their specific duties within an organization. Policies for protecting information need to be standardized and proper security systems need to be put in place. This may include biometric devices, entry door key fobs, and secured dongles for workstations, but also security agents on connected devices. Next-generation protective technology can provide advanced security for endpoint devices, which is highly important when it comes to ensuring users and connected devices are not introducing attacks to a management network.
For detection within the cybersecurity of critical infrastructure, the NIST guidelines instruct organizations to "develop and implement the appropriate activities to identify the occurrence of a cybersecurity event." These events can be anomalies or direct threats, but it is imperative that the activity is detected in a timely manner and mitigated as quickly as possible. This can be managed through proper threat detection software as part of a Security Operations Center (SOC). Employing a SOC with threat detection allows for monitoring of anomalies, events, and threats, and for measuring the effectiveness of that detection. Detection processes also need to be maintained and audited to ensure adequate awareness of all threats and anomalous events.
Developing and implementing the activities to take when a cybersecurity event is detected is the next step in ensuring proper protection of critical infrastructure. Incident response benefits from the employment of a SOC, but it also requires proper planning of which processes and procedures will be employed when events are detected. Even if a complete SOC is not employed, the right tools can be put in place to implement proper incident response; at the very least, it is highly recommended to have a documented procedure for when an event occurs. This means having a tiered line of defense, an on-call support team, or even scripted remediation tasks that can be carried out.
A good incident response platform will gather intelligence and create alerts and workflows around system failures, information theft and loss, malware infections, or even the threat of a rogue employee. Even APTs (advanced persistent threats) can then be defended against, and the proper resources notified, to ensure the security and resilience of the systems and assets that make up the protected infrastructure.
Response activities also need to be delegated amongst stakeholders, staff, and strategic partnerships. In laying out its response imperatives, the Cybersecurity Framework dictates that analysis, mitigation and improvements be documented as part of a comprehensive incident response plan. Crafting a proper incident response playbook and workflow is critical. Integrating a threat intelligence solution allows a process to be created around containing and eliminating threats. When you automate intelligence, data can be collected quickly, scaled easily, and the time to respond to an incident shortened. Proper analysis from automated intelligence, and automated action guided by a playbook, will allow you to identify and eliminate threats quickly. Even less experienced technicians can quickly be brought up to speed when playbooks and incident response workflows are employed.
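As a purely illustrative sketch (not any particular vendor's API), a playbook can be modeled as an ordered list of steps, each pairing a name with an action, so the same containment sequence runs the same way every time:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    action: Callable[[Dict], str]

def isolate_host(alert):   return f"isolated {alert['host']} from the network"
def collect_triage(alert): return f"collected memory and logs from {alert['host']}"
def notify_soc(alert):     return f"paged the on-call analyst for '{alert['rule']}'"

RANSOMWARE_PLAYBOOK: List[Step] = [
    Step("contain", isolate_host),
    Step("preserve evidence", collect_triage),
    Step("escalate", notify_soc),
]

def run_playbook(playbook, alert):
    for step in playbook:
        # Each step is logged, giving the audit trail the Framework asks for.
        print(f"[{step.name}] {step.action(alert)}")

run_playbook(RANSOMWARE_PLAYBOOK, {"host": "hmi-02", "rule": "mass file rename"})
```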
Outages like the one Delta experienced this month may still occur but a recovery plan must be in order to mitigate attacks, downtimes, and loss of crucial services. As with any good Disaster Recovery plan, it’s only as good as its documentation, review, and communication system. The important questions that must be asked regard:
- Who is notified when a recovery protocol is initiated?
- What is the defined time to recovery that can be tolerated?
- What improvements will be documented and put into place to avoid future outages?
- Which resources are in place and called upon when disaster strikes and where are the communication methods and contact information kept?
- If an attack or employee negligence has occurred, what method is in place to hold responsible parties accountable?
Managing these crucial systems and keeping utilities, flight schedules, and financial systems intact is a responsibility that requires full operation at all times. With the ongoing threat of an attack that could disrupt systems and render critical infrastructure inoperable, it has never been more important to stay properly secured.
Click the button below to schedule your one-on-one demo of the D3 Incident Management Platform.
|
<urn:uuid:76c6befa-0e1b-4654-a01c-d7bd1e2d6f1c>
|
CC-MAIN-2022-40
|
https://d3security.com/blog/protecting-critical-infrastructure/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00648.warc.gz
|
en
| 0.942226 | 2,120 | 3.453125 | 3 |
Logs are a record of the internal workings of a system. Nowadays, organizations can have hundreds and, more regularly, thousands of managed computers, servers, mobile devices, and applications; even refrigerators are generating logs in this Internet of Things era. The result is the production of terabytes of log data—event logs, network flow logs, and application logs, to name a few—that must be carefully sorted, analyzed, and stored.
Without a log management tool, you would need to manually search through many directories of log files on each system to access and extract meaning from these millions of event logs. Historically, writing a script to automate log collection at set timetables was the norm, but this approach is not scalable across modern systems and environments. Even using syslog—a UNIX program that copies logs to a central server—is operating system dependent and not easily configured.
This is where log aggregation comes in.
Log aggregation consolidates logs from different sources into a central location. When performed correctly, logs from all devices are merged into one or more streams of data, depending on how they will be analyzed. Log aggregation turns confusing or unintelligible logs into meaningful log data which can be more effectively synthesized and understood by your organization. Centralized logging simplifies the log management process. It also allows you to harness real-time log data for faster event response time and a deeper understanding of your organization’s digital environment.
Determining which devices to collect from, which types of logs to collect, and the mode of collection is the first step. For workstations and servers, you normally need to install a logging agent. Windows Event Forwarding (WEF) can be used for very basic log aggregation on Windows without the need to install an agent, but it has some drawbacks. And, like syslog, it is operating system dependent.
Beginning with data ingestion, the log is collected in its base form. Most modern logs already contain structured or semi-structured data. However, even with structured data, some fields may contain embedded fields of information that can be parsed into new fields, making the event records even more valuable later on during the analytical stage. For unstructured or semi-structured data that is known to contain such fields, it is important to parse the fields sooner rather than later, ideally on the same host where the logs are generated. A workstation will need only negligible resources to parse its own data in real time, while an intermediate or centralized log collection server might become overloaded if given the task of processing the logs from hundreds or thousands of hosts it is receiving concurrently.
With the many different forms that a log can take, a standardization process is necessary. This can include data normalization. The ability to merge specific types of logs, like DNS lookups from macOS and Windows hosts, by normalizing data can greatly reduce the complexity of queries once a SIEM has ingested these DNS client events that now have a common schema. You may also need to standardize logs to a data format like JSON to make them digestible by your SIEM.
Logs can be modified during the aggregation process. In some circumstances, you may want to redact or remove certain key-value fields before further processing the log data—for example, censoring sensitive user information like passwords or encryption keys. Similarly, specific fields may contain extraneous information that can be truncated. Maybe some log sources are valuable to your organization, yet they have some unimportant fields. Truncating or dropping entire fields conserves network bandwidth and disk storage. Fields can also be added to enhance the data quality of your logs, like timestamps and hostnames. This is especially important for log sources from macOS workstations, since they rarely contain a hostname field or any other field that can be used to identify the host that generated the logs. This essential information is required for performing aggregation queries that provide you with metrics like how many (or what percentage) of hosts have experienced any given security event within a specific time frame. Using a standard query with the hostname information would enable you to identify which workstations were affected by the event.
You probably will not need to collect duplicated logs showing the same information hundreds of times, informational logs of a benign nature, or debug messages that are of interest only to the developers who maintain the software generating the events. These can all be dropped at the source, thereby decreasing the network throughput of unnecessary logs and, by extension, increasing the throughput of relevant logs. This means more efficient data collection and a huge boost to the quality of logs you are sending to your SIEM.
This is the final stage of the log aggregation process. Once the logs have been processed, they are ready for forwarding over the network—known as “shipping”—to their final destination. The confidentiality of logs in transit is addressed with secure encryption protocols like TLS. Typically they are shipped to a SIEM that will ingest them. Once in the SIEM you can run correlations on them, generate metrics, and be able to visualize events. SIEMs are not always the final resting place of aggregated logs. They can be shipped to a database like Raijin, which specializes in log data and provides a foundation for developing real-time data analytics.
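Condensing these stages into one hypothetical sketch (the field names, redaction list, and filter rule are invented for illustration, and printing stands in for encrypted delivery to a SIEM):

```python
import json
import re
import socket
import time

RAW = "user=alice action=login password=hunter2 result=ok"
REDACT = {"password"}        # sensitive fields to censor before shipping
DROP_RESULTS = {"debug"}     # noise to filter out at the source

def parse(line):
    # Parse key=value pairs embedded in a semi-structured message.
    return dict(re.findall(r"(\w+)=(\S+)", line))

def normalize(event):
    # Enrich with the fields needed later for aggregation queries.
    event["timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    event["hostname"] = socket.gethostname()
    for field in REDACT & event.keys():
        event[field] = "***"             # redact, don't ship secrets
    return event

def ship(event):
    if event.get("result") in DROP_RESULTS:
        return                           # dropped: saves bandwidth and storage
    print(json.dumps(event))             # stands in for TLS delivery to a SIEM

ship(normalize(parse(RAW)))
```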
Unlike most logging agents, NXLog can parse, standardize, modify, and filter during the collection stage, on the fly, while it is shipping the processed log data to the centralized log destination, on any modern operating system, to almost any SIEM. This is just one of the ways NXLog is able to achieve such high performance.
A solution that can effectively implement all stages of log aggregation can be an incredibly effective tool in uncovering the real value your organization’s log data is hiding in its current state. There are many benefits to incorporating log aggregation software into your log management strategy.
- Decreased throughput and storage requirements, decreased costs
Many SIEM platforms are priced based on throughput, whether EPS (Events Per Second) or GB/day (Gigabytes per day). Additionally, archival space is priced per GB of disk space used. By filtering out the noise, you can significantly decrease the number of unnecessary logs sent to your analytics and storage platforms, thereby lowering the running costs of these systems.
- Meaningful logs
Logs are only useful if meaning can be extracted from them. Log aggregation platforms can parse free-text event messages into meaningful, structured data that your analytics platform can more efficiently use. This results in simpler queries that run faster, which in turn means faster notifications and reduced response times for your security operations to investigate an event.
- Capturing ephemeral data
More frequently, logs are being created ephemerally—especially in a distributed, cloud-based environment using systems like Docker and Kubernetes. These logs must be captured within a certain period or the information will be lost. A log aggregation solution can gather these logs at creation time and ship them to a permanent store.
- Security and integrity
Aggregating logs in a central location increases the integrity of your logs. In the event of a breach, log messages can show the steps that an attacker took. Most attackers attempt to cover their tracks by deleting or manipulating logs. By centralizing data, log aggregation tools counteract these methods.
- Auditing and compliance
Many regulations, including PCI-DSS and HIPAA, require logs to be aggregated and stored for a set retention period. Your auditors may also require this of you. Log aggregation allows your organization to conform to these requirements. Raijin database is optimized for log data, integrates seamlessly with NXLog, and supports expression-based partitioning that can automate this task for you.
To create these tangible benefits for an organization’s log management strategy, a log aggregation solution should fulfill a set of essential requirements.
- Scalability
With the potential for thousands of devices across an organization's infrastructure, a system for aggregating logs should be easy to deploy at enterprise scale.
- Multi-platform support
Most organizations' IT infrastructure employs multiple operating systems, containerized applications, embedded systems, a wide range of third-party solutions, and more. A log aggregation solution should be modular, and work equally on each system with the same degree of autonomy and accuracy.
- High throughput
Millions of events can be generated in short spaces of time. Any log management platform not capable of sustaining an ingestion rate that meets or exceeds the average rate of logs generated is not a viable solution. This is an important requirement to consider if you want to prevent loss of valuable data.
- Integration
Integrating log aggregation with third-party solutions should be simple and complementary. The ideal log aggregation solution will support a wide variety of SIEM platforms, security tools like firewalls and IDPS systems, and embedded devices on both the input and output sides. They should work together to create a better log management service.
- Ease of configuration
Enterprise-level software is often complicated to configure. Log aggregation platforms should be straightforward and intuitive, based on concepts, languages, and configuration styles that are familiar to IT professionals.
- Documentation and user support
Incorporating log aggregation into an existing log management process requires careful planning and testing prior to deployment. As such, a solution should provide well-written, easy-to-understand documentation that is frequently updated and has a community of technical professionals ready to help.
NXLog is a leader in the log aggregation space. We offer both an open source NXLog Community Edition and a full-featured NXLog Enterprise Edition of our log aggregation platform, empowering organizations to utilize their log data to the fullest.
NXLog has a modular, multi-threaded architecture for high performance, low latency, and throughput of up to 100,000 events per second. We support all major operating systems with our highly-scalable logging agent that integrates with best-in-class SIEM and third-party analytics platforms. With NXLog’s emphasis on modular design and straightforward Apache-style code for creating configurations, even the most complex configurations are easy to create and to understand. In short, NXLog is an easy-to-use logging agent that requires only minimal system resources, yet delivers extraordinary results.
At NXLog, we put the same effort into our documentation that we put into our software products so that you can learn how to use our products to their fullest and find the answers that you need.
If you think that having a strong password is enough for your data security, think again!
Every time you log in to a host using your password, you are exposed to attacks and security threats. If the hackers can get their hands on your password and login as ‘you’, they will have complete access to all your data.
Kerberos is an authentication protocol that prevents unauthorized access. It authenticates service requests between users and hosts across unsafe networks. Kerberos authentication is used by major operating systems, including Microsoft Windows, Apple macOS, Linux, and Unix.
Kerberos was developed by the Massachusetts Institute of Technology (MIT) as a protection protocol for its own projects in the 1980s. It was named after Cerberus, the three-headed creature of Greek mythology; the three heads signify the client, the server, and the Key Distribution Center (KDC).
What Are the Components in the Kerberos Environment?
Before we move on to the actual working on Kerberos, let’s take a look at the basic components.
The agents are the principal entities involved in a typical Kerberos workflow.
- The client is the person who initiates the request for communication.
- The application server hosts the service that the client requests.
Key Distribution Center (KDC) consists of three parts for authentication: A database (DB), the Authentication Server (AS), and the Ticket Granting Server (TGS).
The tickets are the communications of permission sent to the users for performing a set of actions on Kerberos. There are two types:
- The Ticket Granting Service (TGS) ticket, or service ticket, is encrypted with the service key and used to authenticate to a service.
- The Ticket Granting Ticket (TGT) is issued by the authentication server to the client and is used to request service tickets from the TGS.
Kerberos handles several keys that are encrypted securely to prevent corruption or access by hackers. Some of the encryption keys used in Kerberos are:
- User key
- Service key
- Session key
- Service session key
- KDC key
How Kerberos Authentication Works?
The prime purpose of Kerberos authentication is to secure the access of a user in service through a series of steps that prevent security threats and password access. Essentially, the user needs to access a network server to get access to a file.
You can go to any company offering managed IT services to implement Kerberos encryption. Even so, it's essential to have a basic idea of how security is implemented and how data access is encrypted. So, here are the steps of Kerberos security and authentication:
1. Initial Authentication Request from the Client
As the client tries to login to the server, they send an authenticator to the KDC requesting a TGT from the authentication server.
This authenticator contains information such as the password, the client ID, and the date and time of the authentication request. The part of the message with the password is encrypted, while the other part is plain text.
2. KDC Checks the Credentials
KDC is the Kerberos server that validates the credentials received from the client. The server first decrypts the authenticator message and checks against the database for the client’s information and the availability of the TGS.
After finding this information, the server generates a secret key for the user from the password hash. It then generates a TGT that contains client details such as the client ID, a date and time stamp, the network address, and a few more authentication details. Finally, the ticket is encrypted with a key that only the server knows and is sent to the client.
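The derivation of the user's secret key from the password can be pictured with a standard key-derivation call. This is only a sketch: real Kerberos uses an encryption-type-specific string-to-key function, though the AES types do salt the password with the realm and principal name and run it through PBKDF2, much as below (the iteration count here is illustrative, not a real KDC setting):

```python
import hashlib

def user_secret_key(password: str, realm: str, principal: str) -> bytes:
    # Kerberos AES types salt the password with realm + principal name.
    salt = (realm + principal).encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

key = user_secret_key("s3cret", "EXAMPLE.COM", "alice")
print(key.hex())
```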
The TGT is then stored by Kerberos for a few hours. Because it is held only in memory, the TGTs are lost if the system crashes.
3. The decryption of the Key by the Client
The client decrypts the message received from the KDC by using the secret key. The client’s TGT is then authenticated and the message is extracted.
4. Using TGT to Access Files
If the client wants to access specific files on the server, it sends a copy of the TGT and the authenticator to the KDC requesting access.
When KDC receives this message, it notices that the client is already authenticated. So, it decrypts the TGT using the encryption password to check if it matches.
If the password is validated, then it considers it to be a safe request.
5. Creation of Ticket for File Access
To allow the client to access the specific files requested, KDC generates another ticket. It then encrypts the ticket with the secret key and the method of accessing the files is included in this ticket.
This ticket now sits in the client's Kerberos credential cache for the next eight hours, meaning the client can access the file server as long as the ticket is valid.
6. Authentication Using the Ticket
The client decrypts the message using the key and this generates a new set of client information, including client ID, date and time stamp and network address.
This is sent to the server in the form of an encrypted service ticket. The server decrypts the ticket and checks that the client's details match the authenticator and that the request falls within the ticket's validity period. Once the details match, the server sends a verification message to the client.
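To make the exchange of tickets tangible, here is a deliberately toy simulation using symmetric Fernet tokens from the Python cryptography package in place of Kerberos' real encryption types. Every name is invented, and real Kerberos adds authenticators, timestamps, lifetimes, and mutual authentication that this sketch omits:

```python
from cryptography.fernet import Fernet

# Long-term keys: the KDC knows the TGS key; only the file server knows its key.
user_key    = Fernet.generate_key()   # stands in for the password-derived key
tgs_key     = Fernet.generate_key()
service_key = Fernet.generate_key()

# Steps 1-3: the AS returns a session key sealed for the user, plus a TGT
# the user cannot read because it is sealed with the TGS key.
session_key = Fernet.generate_key()
as_reply = Fernet(user_key).encrypt(session_key)
tgt      = Fernet(tgs_key).encrypt(b"client=alice;key=" + session_key)

# The client recovers the session key with its own key (step 3).
assert Fernet(user_key).decrypt(as_reply) == session_key

# Steps 4-5: the client presents the TGT; the KDC opens it and issues a
# service ticket sealed with the file server's key.
assert Fernet(tgs_key).decrypt(tgt).startswith(b"client=alice")
svc_session_key = Fernet.generate_key()
service_ticket = Fernet(service_key).encrypt(b"client=alice;key=" + svc_session_key)

# Step 6: the file server validates the ticket without ever seeing a password.
print(Fernet(service_key).decrypt(service_ticket))
```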
Kerberos authentication is regularly updated to meet new security threats. It is among the most widely used authentication protocols at the top tech companies, which means it has been hardened against rigorous security attacks. If you want to protect your server and your user data from the prying eyes of unscrupulous people, then go for Kerberos.
Our data experts at LayerOne Networks can help you implement such security and authentication protocols to protect your data. Reach out to us for managed IT services and securing your company from any online security vulnerabilities.
Riken has joined the Covid-19 High Performance Computing Consortium, an international effort to use supercomputing to help fight the Covid-19 pandemic.
The Japanese research institute operates Fugaku, which was this week named the world's most powerful supercomputer, at 415.53 petaflops of peak performance.
Every flop helps
Fugaku is currently being used for Covid-19 research under a program promoted by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT).
As a member of the consortium, the Riken Center for Computational Sciences will collaborate with MEXT to provide computational and data resources to researchers around the world and to share the results of research carried out in the United States and Japan as well as other countries that are part of the consortium.
The consortium was originally set up by IBM, the Department of Energy, and the White House in March, but quickly grew to include hundreds of petaflops of compute power, major cloud providers, and more than 56 research teams.
Excluding Fugaku, the consortium currently boasts 485 petaflops of computing power across 5 million CPU cores and 50,000 GPUs. Its work spans everything from the search for a vaccine, to therapeutic treatments, to modeling how the virus spreads.
Ars Technica Article Covers the Basics on Malware
Protect yourself! Learn about malware.
Viruses, worms, trojans, spyware and malware — what does it all mean? If you’re at all familiar with technology, you’ve probably heard a few of these words dozens of times. But what are they? What’s the difference between them? This lengthy, in depth Ars Technica article covers the basics.
We all try our best to keep our antivirus software up to date while practicing the "tried and true" methods of handling potentially malicious software. For example, you should never click on an email attachment, even if it looks innocuous. The problem is that some malware is very good at looking like perfectly safe file types, such as PDFs, Word documents or pictures, so these things aren't always obvious (which is why they spread like wildfire).
The best thing you can do is educate yourself. Learn as much as you can about malware, how it works and how it spreads. Learn about the things that viruses can, and can’t, do. This article from Ars Technica is a great start!
Viruses, Trojans, and Worms, Oh My: The basics on malware is the second installment of Ars Technica’s Guide to Online Security, and a good read for anyone who wants to know more about malware and how to protect themselves from it.
From here, you can always look up information on Wikipedia or other security websites and blogs. There’s a wealth of information out there, so take advantage of it! And, if worse comes to worse, you can always give NEPA Geeks a call. We specialize in virus removal and would be more than happy to assist you with any questions you might have.
Google continuously works on its Maps feature in order to make it easier for users to commute.
When Google Maps was initially released in 2004, it was only available in English. Since then, Google has gradually rolled out new language support and today, they have announced support for 39 new languages spoken by an estimated 1.25 billion people.
The update is available for Android, iOS and desktop users.
For perspective, the Ethnologue catalogue records a total of 6,909 living languages. Many of the languages Google chose to add today are spoken by large populations: Swahili, for instance, is spoken by 8 percent of the African continent, while Turkish is spoken by 9 percent of people in Europe.
The “add an address” feature enables users to add places to Google Maps that were not there before, while the smart address search uses AI to recognise landmarks near the specific location that a user is searching for.
Shop By Application
Wi-Fi is the most common RF (radio frequency) technology in the world. With speeds up to 1.3Gbps (Gigabits per second) in 802.11ac and 4.6Gbps with the new 802.11ad specification, Wi-Fi is the most ubiquitous technology on the planet.
Switching and Routing
Switching and Routing is the backbone of all wide area data networks including the Internet. From the LAN (Local Area Network) to the WAN (Wide Area Network), Ethernet data can be efficiently distributed to servers, desktops, and fixed wireless equipment.
Fixed Wireless is a PTMP (Point to Multipoint) and PTP (Point to Point) wireless technology that leverages licensed or unlicensed microwave frequencies to wirelessly distribute data in a network. Where access to conventional broadband technologies like DSL ...
Educators and K12 IT Directors are running a race against time to deliver increased bandwidth to all buildings on campuses while extending their network edge all the way to the desk. Wireless access points are the primary means of distributing Ethernet in classrooms Learn more
Utility companies have long utilized remote monitoring with SCADA and other low bandwidth applications over expensive satellite links Learn more
Wireless backhauls are an important part of telecommunications. They can allow small startups, and established LECs as well, to penetrate unserved markets with speed and efficiency versus a comparable fiber option. LTE products from Baicells Learn more
Radio and Television Broadcasting
“There isn’t much in the broadcast ecosystem that isn’t already IP, except the baseband tier of live production,” said Chuck Meyer, chief technology officer for Grass Valley. He points to the fact that most facilities follow a file-based workflow for production Learn more
Integrators are companies that make turnkey solutions based on unfinished components such as MikroTik RouterBOARD and RouterOS. Their products include assembled CPE/AP devices, preinstalled integrated antennas and rackmount solutions Learn more
One of the most important applications of IP based digital microwave systems is represented by the connection between buildings in hospital complexes and from the main electronic archive to remote laboratories and specialist health checking points Learn more
A microwave link is a communications system that uses a beam of radio waves in the microwave frequency range to transmit information between two fixed locations on the earth. They are crucial to many forms of communication and impact a broad range of industries Learn more
The desire for revenge can be a consequence of anger. But is this the case at the cerebral level? What happens in the human brain when injustice is felt?

To answer these questions, researchers from the University of Geneva (UNIGE), Switzerland, developed an economic game in which a participant is confronted with the fair behaviour of one player and the unfair provocations of another. Through brain imaging, they observed which areas were activated as the participant experienced unfairness and anger. In a second phase, the scientists gave the participant the opportunity to take revenge, and thereby localized the activations related to suppressing the act of revenge to the dorsolateral prefrontal cortex (DLPFC): the more active the DLPFC is during the provocation phase, the less the participant takes revenge. These results have now been published in Scientific Reports.
Until now, research on anger and the vengeful behaviour that results from it has been based primarily on the recall of a feeling of anger by the participants, or on the interpretation of anger on photographed faces.
Olga Klimecki-Lenz, a researcher at UNIGE’s Swiss Center for Affective Science (CISA), wanted to locate live which areas of the brain reacted when the person became angry and how this feeling materialized into vengeful behaviour.
Getting angry playing the Inequality Game
Twenty-five people took part in the Inequality Game, an economic game created by Olga Klimecki-Lenz to trigger a feeling of injustice, then anger, before offering the “victim” the possibility of revenge.
“The participant has economic interactions with two players, whose behaviour is actually pre-programmed — which he doesn’t know about,” explains Olga Klimecki-Lenz. “One is friendly, offers the participant only mutually beneficial financial interactions and sends nice messages, while the other player makes sure to multiply only his own profits, going against the participant’s interest and sending annoying messages.”
The game takes place in three phases, during which the participant is installed in a magnetic resonance imaging (MRI) scanner allowing scientists to measure his brain activity.
The participant is then confronted with the photographs of the other two players and the messages and financial transactions that he receives and issues.
In the first phase, the participant is in control and chooses which profits he distributes to whom.
“We noticed that on average, participants here are fair towards both other players,” says Olga Klimecki-Lenz.
The second phase is that of provocation: the participant passively receives the decisions of the other two players, and especially the provocations and injustice of the unfair player, which induce a feeling of anger rated on a scale from 0 to 10 by the participant himself.
In the last phase, the participant is again the master of the game and can choose to take revenge or not by penalizing the other two players.
Overall, participants remained nice to the fair player, but took revenge for the injustices committed by the unfair player.
The amygdala again!
The provocation phase played a crucial role in localizing the feeling of anger in the brain.
“It was during this phase that we were able to identify which areas were related to feelings of anger,” adds Olga Klimecki-Lenz.
Thanks to MRI, researchers observed activity of the superior temporal lobe, but also of the amygdala, known mainly for its role in the feeling of fear and in processing the relevance of emotions, when participants looked at the photograph of the unfair player.
These two areas correlated with feelings of anger: the higher the level of anger reported by the participant, the stronger their activity.
Localized and defused revenge
“But the Inequality game allowed us above all to identify the crucial role of the dorsolateral prefrontal cortex (DLPFC), a zone which is key for the regulation of emotions and which is located at the front of the brain!”
Olga Klimecki-Lenz explains enthusiastically.
On average, participants took revenge on the unfair player. However, the researchers observed variability in behaviour: 11 participants nevertheless remained fair to the unfair player. Why?
The CISA team observed that the greater the DLPFC activity during the provocation phase, the less participants punished the unfair player.
On the contrary, low DLPFC activity was associated with a more pronounced revenge on the participant following provocation by the unfair player.
“We observed that DLPFC is coordinated with the motor cortex that directs the hand that makes the choice of vengeful behavior or not,” continues the CISA researcher.
“There is therefore a direct correlation between brain activity in DLPFC, known for emotional regulation, and behavioural choices.”
Suppress revenge by stimulating DLPFC?
For the first time, the role of DLPFC in revenge has been identified and is distinct from concentrated areas of anger in the amygdala and superior temporal lobe.
“One can then wonder whether an increase in the activity of the DLPFC, obtained through transcranial magnetic stimulation, would make it possible to decrease acts of vengeance or even suppress them,” says Olga Klimecki-Lenz.
- Olga M. Klimecki, David Sander, Patrik Vuilleumier. Distinct Brain Areas involved in Anger versus Punishment during Social Interactions. Scientific Reports, 2018; 8 (1) DOI: 10.1038/s41598-018-28863-3
How do human beings perceive their environment and make their decisions? To successfully interact with the immediate environment, it is not enough for human beings to gather basic evidence of the world around them. This information by itself is insufficient because it is inherently ambiguous and must be integrated into a particular context to minimize the uncertainty of sensory perception. But, at the same time, the context is itself ambiguous. For example, am I in a safe or a dangerous place?

A study published on 28 November in Nature Communications by Philipp Schustek, Alexandre Hyafil and Rubén Moreno-Bote, researchers at the Center for Brain and Cognition (CBC) of the Department of Information and Communication Technologies (DTIC) at UPF, suggests that the brain holds a refined representation of uncertainty at several hierarchical levels, including context. In other words, the brain maintains a very detailed, almost mathematical probabilistic representation of everything around us that we consider important.
“The notions of probability, though intuitive, are very difficult to quantify and use rigorously. For example, my statistics students often fail to solve some of the problems I pose in class. In our study, we find that a complicated mathematical problem involving the use of the most sophisticated rules of probability can be solved intuitively if it is presented simply and in a natural context,” asserts Rubén Moreno-Bote, coordinator of the Research Group on Theoretical and Cognitive Neuroscience at the CBC.
Cognitive tasks of hierarchical integration
Let us suppose that a city airport is hosting a football final and we look at a few passengers who are leaving a plane.
If we note that four of them are fans of the red team and two of the blue team, we could conclude that more fans of the red team are attending the final than of the blue team.
This inference, based on incomplete sensory evidence, could be improved with contextual information.
For example, if worldwide there are more fans of the blue team than of the red team, then despite our initial observation we would revise our inference, counting how many supporters of each group are travelling on the plane to confirm more accurately whether more fans of the red team have really come to the city. Or we could do the opposite and, based on the context, infer whether the observed sample follows the more general pattern or not.
The researchers designed their experiments presenting hierarchical integration tasks using the plane task.
“For the study, we told our participants that they are at an airport where planes can arrive carrying more of one type of person than of another, for example, more supporters of Barça than of Madrid.
On seeing a handful of passengers leaving several aircraft, the participants can predict with mathematical precision the likelihood that the next plane will be carrying more passengers of a certain type”, Moreno-Bote explains.
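For intuition, the posterior updating the plane task requires can be sketched with a toy Beta-Binomial model; the prior and the counts below are illustrative, and the study's actual model is hierarchical and considerably richer.

```python
# Toy Beta-Binomial sketch of the "plane task" inference (illustrative only).
from scipy import stats

observed = {"red": 4, "blue": 2}   # passengers seen leaving the plane

# Uniform Beta(1, 1) prior over the fraction of red fans on board.
a = 1 + observed["red"]
b = 1 + observed["blue"]
posterior = stats.beta(a, b)

# Probability that the plane carries more red fans than blue fans.
p_majority_red = 1 - posterior.cdf(0.5)
print(f"P(majority red | data) = {p_majority_red:.2f}")  # ~0.77
```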
“In general, this structure of tasks creates hierarchical dependencies among the hidden variables to be solved bottom up (deducing the context of previous observations) and then passing the message top down (deducing the current status combining current observations with the inferred context)”, the authors explain.
The results showed that the participants, based on their preliminary observations, built a probabilistic representation of the context.
These results help to understand how people form mental representations of what surrounds us and how we assign and perceive the uncertainty of this context.
Over the past few decades, research of the brain’s spatial system advanced tremendously, providing insights into how the brain represents complex information and how these processes are impaired in disease states (e.g. Banino et al., 2018; Kunz et al., 2015; for reviews see Buzsáki and Moser, 2013; Epstein et al., 2017; Moser et al., 2008).
However, scientific investigations of spatial cognition in humans and animals are often limited to small scale environments such as single rooms or short walkable pathways. It is therefore unclear whether representation and processing of large-scale environments rely on the same neurocognitive systems (Wolbers and Wiener, 2014).
This question is of importance for several reasons. First, the lack of knowledge on how the brain’s spatial system treats different spatial scales affects interpretation of past investigations that used different types of experimental environments.
Second, disorientation is a prevalent symptom across neurological and psychiatric disorders, but remains poorly understood and diagnosed, in part because it may have different subtypes that manifest at different spatial scales (Peer et al., 2014). Finally, recent findings suggest that the brain’s spatial system is also used to represent conceptual knowledge (Behrens et al., 2018; Bellmund et al., 2018; Constantinescu et al., 2016; Gärdenfors, 2000). Since large-scale environments are often remembered in a schematic manner not consistent with Euclidean geometry (McNamara, 1986; Moar and Bower, 1983; Tversky, 1981), understanding their representation may provide clues to representation of abstract domains.
Previous neuroscientific evidence supports the idea that the brain’s spatial representations are not unified but separated into multiple scales.
Functional MRI studies in humans demonstrated that locations within rooms and their surrounding buildings are coded in different cortical regions (Kim and Maguire, 2018), and that directions are represented in the retrosplenial complex with respect to the local axis of a room irrespective of its large-scale context (Marchette et al., 2014).
Electrophysiological evidence in animals also points to separate representation of small scale regions and their large-scale context, as grid- and place-cells within the medial temporal lobe undergo remapping when crossing borders between rooms (Fyhn et al., 2007; Skaggs and McNaughton, 1998; Tanila, 1999), and form independent representations of different segments of the environment (Derdikman et al., 2009; Derdikman and Moser, 2010; Paz-Villagrán et al., 2004; Spiers et al., 2015). Recordings from the rat retrosplenial cortex also demonstrate coding of location both in the immediate small-scale region and in the large-scale surrounding environment (Alexander and Nitz, 2017; Alexander and Nitz, 2015).
Finally, evidence from patients with disorientation disorders shows that disorientation can be limited to a specific spatial scale according to the underlying lesion (Peer et al., 2014). Patients with lateral parietal cortex lesions are impaired in navigating their immediate, small-scale environment (‘egocentric disorientation’; Aguirre and D’Esposito, 1999; Stark, 1996; Wilson et al., 2005).
In contrast, patients with retrosplenial lesions (Aguirre and D’Esposito, 1999; Takahashi et al., 1997) and Alzheimer’s disease (Monacelli et al., 2003; Peters-Founshtein et al., 2018) show the opposite pattern – correct localization in the immediately visible environment but inability to navigate in the larger unseen environment. Despite this evidence, few neuroscientific studies directly contrasted between representation of different scales of space.
Several studies indicated a posterior-to-anterior progression from small to large scales along the hippocampal axis, manifested as larger spatial receptive fields, in both humans and animals (Brunec et al., 2018; Kjelstrup et al., 2008; Poppenk et al., 2013). However, these investigations only used routes ranging up to several meters, and focused only on the hippocampus and not on the rest of the brain’s spatial system.
Another fMRI study contrasted coarse- and fine-grained spatial judgments in one scale (city), finding increased hippocampal activity for fine-grained distinctions (Hirshhorn et al., 2012a).
In the current work, we sought to characterize human brain activity under ecological experimental settings, across a large range of spatial scales, when directly manipulating only the parameter of spatial scale. To this aim, we asked subjects to compare distances between real-world, personally familiar locations across six spatial scales (rooms, buildings, neighborhoods, cities, countries and continents; Figure 1), under functional MRI, and looked for differences in brain response for the different scales.
Posterior-anterior gradients of spatial scale selectivity
To investigate spatial scale-selective activity, we looked for voxels showing difference in response to task performance at the different scales, and characterized their gradual response profiles by fitting a Gaussian function to the beta value graphs at each voxel (Figure 2—figure supplement 1). This analysis identified three cortical regions that displayed a continuous gradual shift in spatial scale selectivity: the medial temporal cortex, medial parietal cortex and lateral parieto-occipital cortex (Figure 2A–D, Figure 2—figure supplement 2).
Activity in these regions displayed a gradual shift from selectivity for the smallest spatial scales (room, building) in their posterior parts, followed by selectivity for medium scales (neighborhood, city) more anteriorly, and for the largest scales (country, continent) in the most anterior part of each gradient (Figure 2E; p<0.001 for all gradients, permutation test on linear fit slope, FDR-corrected).
The three scale-selective gradients were symmetric across the two hemispheres. Extraction of the scale with maximal response from each voxel (while disregarding the pattern of activity at other scales) also demonstrated posterior-to-anterior progression along the three abovementioned gradients (Figure 2E, Figure 2—figure supplement 3; p<0.001 for all gradients, permutation test on linear fit slope, FDR-corrected). To further characterize the scale selectivity of each region, we plotted the event-related activity and beta values for each spatial scale at each part of the three gradients.
Results showed the same gradual posterior-anterior shift from small to large spatial scales, with each part of the gradient having a preferred scale and gradually diminishing activity to other scales around it (Figure 2—figure supplement 4A–C). Finally, in light of previous findings of spatial scale selectivity changes along the hippocampal long axis (Brunec et al., 2018; Poppenk et al., 2013), we measured average spatial scale selectivity along the hippocampus. Activity shifted from small to large scales along the posterior-anterior axis of the hippocampus (Figure 2E; p<0.001 for average position of Gaussian fit peak, permutation test on linear fit slope, FDR-corrected).
Using the same analysis at the individual subject level, 16 of 19 subjects showed significant increase in preferred scale along the lateral parietal gradient, 17 of 19 along the medial temporal gradient, 17 of 19 along the medial parietal gradient, and 6 of 19 along the hippocampus (all p<0.05, permutation test on linear fit slope, FDR-corrected).
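As a concrete illustration of the voxelwise fitting step described above, the sketch below fits a Gaussian to six per-scale beta values to locate a voxel's preferred scale; the beta values are made up for the example.

```python
# Fit a Gaussian to one voxel's per-scale beta values to find its
# preferred spatial scale. Scales are coded 1 (room) .. 6 (continent).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, baseline):
    return baseline + amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

scales = np.arange(1, 7)                          # room .. continent
betas = np.array([0.1, 0.4, 1.2, 0.9, 0.3, 0.1])  # hypothetical voxel betas

params, _ = curve_fit(gaussian, scales, betas, p0=[1.0, 3.0, 1.0, 0.0])
print(f"preferred scale (Gaussian peak) = {params[1]:.2f}")
```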
In addition to the continuous gradients, several other brain regions displayed scale-specific activity not organized as a continuous gradient (Figure 3, Supplementary file 1). Clusters of activity at the supramarginal gyrus, posterior temporal cortex, superior frontal gyrus and dorsal precuneus displayed the highest activity levels for the smallest spatial scales (room and building), and their activity gradually diminished for larger scales (Figure 2—figure supplement 4D). In contrast, the lateral occipital cortex and the anterior medial prefrontal cortex clusters displayed the opposite pattern of higher activity for the largest spatial scales (city, country and continent), and gradually decreasing activity for the smaller scales (Figure 2—figure supplement 4D).
The three cortical scale-selective gradients extend anteriorly from scene-responsive cortical regions
The three cortical gradients identified by our analyses are located in close proximity to known scene-responsive cortical regions – parahippocampal place area (PPA), retrosplenial complex (RSC) and occipital place area (OPA) (Epstein et al., 2017). To test the exact locations of these regions with respect to our findings, we used masks of these regions as previously defined on an independent sample (Julian et al., 2012).
The three regions (PPA, RSC and OPA) were found to be situated at the posterior part of the medial temporal, medial parietal and lateral occipito-parietal gradients, respectively. Accordingly, the scene-responsive regions were most active for the small and medium scales: room, building and neighborhood (Figure 4).
This finding suggests their stronger involvement in the processing of immediate visible scenes, compared to more abstract larger environments. However, these regions also showed activity for the larger scales, suggesting that their computational role may extend beyond the exclusive processing of the immediately visible environment, though to a lesser extent (Figure 4).
The three cortical gradients indicate a shift between the visual and default-mode brain networks
To relate the three cortical gradients to large-scale brain organization, we compared their anatomical distribution to a parcellation of the brain into seven cortical resting-state fMRI networks, as identified in data from 1000 subjects (Yeo et al., 2011). Across the three gradients, the posterior regions (related to processing of small scales) overlapped mainly with the visual network, while the anterior regions (related to processing of large scales) mainly overlapped with the default-mode network (Supplementary file 1).
Differences in scale selectivity between the three cortical gradients
The previous analyses identified three cortical regions with gradual progression of scale selectivity. We next attempted to identify differences between these three regions that may be indicative of their functions. To this aim, we analyzed the number of voxels with preferential activity for each scale within each gradient (Figure 5, Figure 5—figure supplement 1). The medial parietal gradient was mostly active for the neighborhood, city and continent scales, indicating a role for this region in processing medium to large scale environments. In contrast, the medial temporal gradient contained mostly voxels sensitive to scales up to the city level, suggesting that this region is involved mostly in processing small to medium scales. Finally, the lateral occipito-parietal gradient was most active for the smallest scales (room, building) and the largest (continent) scale. These findings demonstrate that despite their similar posterior-anterior organization, the three scale-sensitive cortical gradients have different scale preferences, indicating possible different spatial processing functions.
Subjects’ behavioral ratings and their relation to the scale effects
Analysis of subjects’ ratings of emotional significance and task difficulty for each location indicated no significant differences between scales, except for difficulty difference between the continent and the room and neighborhood scales (Figure 1—figure supplement 2A–B; correlation between difficulty and scale, r = 0.39; p<0.05, two-tailed one-sample t-test across subjects). Familiarity ratings did significantly differ across scales, with larger average familiarity for the smaller scale environments (Figure 1—figure supplement 2C; average correlation of familiarity and scale increase, r = −0.72; p<0.05, two-tailed one-sample t-test across subjects). First-person perspective taking and third-person perspective taking ratings were also highly correlated with scale increase, indicating a gradual shift between imagination of locations from a ground-level view in small-scale environments to imagination from a bird’s-eye view in large-scale environments (r = −0.81, r = 0.80, respectively; both p<0.05, two-tailed one-sample t-test across subjects; Figure 1—figure supplement 2E, Supplementary file 1). Response times did not significantly differ between scales (Figure 1—figure supplement 2D). The verbal descriptions of task-solving strategy confirmed the trend of decrease in ground-level and increase in map-like (or ‘bird’s-eye’) imagination with increasing scale (Supplementary file 1). These descriptions also demonstrated that as the scale decreased, subjects increasingly relied on estimations of walking or driving times between locations, except for the room scale where this strategy was not used (Supplementary file 1).
To measure the effect of these different factors on the observed activations, we used parametric modulation using subjects’ ratings of emotion, familiarity, difficulty, perspective taking and strategy. The familiarity, perspective taking (first-person and third-person) and reports of use of a map strategy showed significant effects inside the scale-related gradients, in accordance with their high correlation to spatial scale (Figure 2—figure supplement 5). No other factor showed any significantly active regions in this analysis.
We next contrasted the activity for the experimental task with that for the lexical control task at each region. Within the three gradients, this contrast revealed significantly higher activity for the spatial task compared to the lexical control task (GLM contrast, all p-values<0.05, FDR corrected for multiple comparisons across regions), except for the anterior city, country- and continent-related regions in the medial temporal gradient and the continent region in the occipito-parietal gradient. Among the other scale-sensitive regions outside of the gradients, only the supramarginal and lateral occipital cortex clusters did not show a significant activity above that of the lexical control task.
Press Office – UPF Barcelona
The Challenges of Fire Suppression in a Data Centre Environment
Rarely do fires occur inside Data Center facilities. If there is, the fire suppression systems inside should take care of it. The problem here is that the fire solution itself could be as devastating, if not more so, than the fire.
With that in mind, there is a universal requirement for fire suppression equipment in Data Centers all over the world. Keeping the Data Center safe from a devastating fire can be challenging; it is not a matter of simply purchasing and installing a few fire extinguishers around the building. Data Centers house delicate equipment and heat-generating components that could catch fire, and they have people working inside whose lives may be in danger. Protecting those people is the top priority; the computer equipment and the building itself come second.
Data Center Fire Suppression Systems and Priorities
Around the world, different facilities have different guidelines for fire suppression systems. Fire officials and government regulators have rules as to what kind of suppression should be placed in Data Centers. Naturally, it shouldn’t harm the people inside the buildings.
More often than not, these systems can damage the equipment more than they save it. An example would be a downtime incident that triggers the accidental discharge of a fire suppression system and damages the servers.

One such incident took place on 10 September 2016, when a test of the fire suppression system at an ING facility in Bucharest destroyed dozens of hard drives and compromised the data center's operations, forcing ING to find a nearby backup facility to continue its Romanian operations. The event provided insight for Uptime Institute's EMEA Network, given the universal requirement for fire protection in Data Centers.

Fires originating from within Data Centers are rare and are mostly caused by electrical failures. Having an effective and functioning fire suppression system is essential. The system should save lives and protect expensive equipment and critical data. As with any other fire suppression delivery system, it can pose a danger to operations when accidentally activated during testing and maintenance. It can also, when deployed, cause damage to a facility.

To prevent damage to the equipment, inert gas fire suppression systems are preferred among Data Center owners. An inert gas system uses nitrogen and argon to displace the oxygen in a given space; deprived of oxygen, the fire dies. A good fire suppression system can extinguish a fire in under a minute. Other automatic fire suppression systems use water or foam to extinguish fires in Data Centers, but they will inevitably damage sensitive computer equipment.
An inert gas fire suppression system is the best option for a Data Center to use. The biggest hurdle with inert gas is that it is deployed under pressure.
For an inert gas system to work, it must flood the space with nitrogen and argon immediately. Releasing it at too slow a rate won’t stop the fire before it does extensive damage. Using a pressurized deployment system, the gas is released via an explosive shock wave. The concussive aftermath could damage servers.
Accidental Discharge Issues
According to the Uptime Institute, about one-third of all Data Center operators have at one time or another encountered accidental discharges, occasionally during testing.
The issue here is that an accidental discharge can damage sensitive equipment just as much as an actual fire. Even though it poses no immediate danger to the workers, it could still do some serious damage to the actual Data Center building and easily destroy a lot of computer equipment in mere seconds.
There are solutions. Data Centers in collaboration with their vendors can redesign nozzles to minimize the shock produced by activation and release. Some systems install sensors to alert if a discharge is about to happen. Finally, operators can adjust the equipment they use.
Sound-insulated cabinets, solid-state servers, and racks with barriers can all be used to reduce the damage to the equipment. Operators can move servers away from fire suppressant nozzles to reduce direct impact from the shock wave.
Hazards also include server damage from loud noise during the discharge of inert gas fire suppression systems.
Whatever the design, the fire suppression system must meet applicable standards and address the potential threats the facility will face in the future.
It’s recommended that facilities have fire and smoke detectors in all areas of the Data Center so, wherever there is a fire, detectors can send out alarms as soon as possible. An adequate supply of fire suppression agent and sufficient dispersion nozzles are ideal to properly protect valuable equipment and data.
Fire suppression systems should be able to control fire without damaging a Data Center’s contents, and protect its occupants, with minimum residue or pools of water on the floor.
Clean agent fire suppression systems use inert gases that are safe for the environment and displace the oxygen around a fire to suppress it, while others put out the fire through cooling or heat absorption.
The National Fire Protection Association (NFPA) defines a clean agent as “an electrically nonconducting, volatile, or gaseous fire extinguisher that does not leave a residue upon evaporation.” Clean agent systems discharge the extinguishing agent from storage cylinders to quickly suppress a fire before it activates the flame/heat nozzles of the building’s code-required sprinkler system. Once a fire is out, the suppressing gases are removed by ventilation. They leave no residue and cause no equipment damage, and no clean-up is needed.
The suppressing agents usually include halocarbons, fluorinated ketones, or inert gases (such as nitrogen, argon, or blends thereof). Halocarbons and fluorinated ketones suppress fires through a combination of physical (80%) and chemical (20%) mechanisms without removing the room’s oxygen. Inert gases suppress fires by reducing the oxygen content within a room to the point at which the fire is starved.
Water Mist Fire Suppression
Water is the most used suppression agent in all buildings since water sprinklers are usually part of local city fire codes. Sprinkler systems are designed to save structures and save lives. That kind of fire suppression system, however, can be where the cure is worse than the disease. There is an alternative water-based fire suppression system which can minimize the amount of water needed to suppress the fire, and that has been specifically designed for Data Center fire suppression.
Water mist consists of finely atomized water droplets. A high-pressure water mist system produces water droplets, 99% of which measure less than 100 microns in size. These tiny droplets provide a larger surface area for heat transfer, so more of the water mist is vaporized into steam, absorbing heat and increasing the cooling effect that extinguishes the fire.
Since the water is vaporized during the suppression process, the mist does not reach the systems’ critical assets, thus minimizing damage. These systems do not require large amounts of water, as opposed to conventional sprinklers, which is practical in locations with limited water supplies or where municipal water pressure is low. And whereas old sprinkler systems rely on the same water supply as fire hydrant lines (water that can contain sediment and other impurities), water mist systems use potable water that is free of contaminants.
Suppression System Design Approaches
Data Centers come in many sizes and configurations, which is important in determining the proper design of a fire suppression system. Areas requiring fire suppression can range from specific equipment or assets to a single floor, a room, or an entire building.
Total Flooding Suppression
Data Centers with special computer rooms are protected by “total-flooding” clean agent systems. When a fire occurs, the entire room is immediately flooded with the fire suppression agent. The quantity of agent required is based on the overall volume of the protected room, and the inert gas must be blended homogeneously throughout that space to achieve an effective concentration.
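As a rough illustration of volume-based sizing, the sketch below uses a simplified perfect-mixing model to estimate how much inert agent is needed to pull room oxygen down to a design concentration. Real designs follow NFPA 2001 and vendor flooding-factor tables; the room dimensions and the 12.5% target here are assumptions.

```python
# Simplified perfect-mixing estimate of inert agent volume for a sealed room.
# Illustrative only; real sizing follows NFPA 2001 and vendor tables.
import math

AMBIENT_O2 = 20.9  # percent oxygen in normal air

def agent_volume(room_m3: float, o2_target_pct: float = 12.5) -> float:
    """Inert gas volume (m^3 at room conditions) for a well-mixed room."""
    return room_m3 * math.log(AMBIENT_O2 / o2_target_pct)

room = 10 * 8 * 3  # a hypothetical 240 m^3 server room
print(f"approx. {agent_volume(room):.0f} m^3 of inert agent")  # ~123 m^3
```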
To completely contain the fire, the room or enclosure must be airtight. All the walls, floors, return air ducts, and ceiling slabs, including doorways and other possible openings, must be completely sealed off. Open air ducts will render the suppression system ineffective, so there must be a means to close them.
Facility Wide Suppression
Clean agent and water mist systems can be used together in facility-wide fire suppression. Choosing between the two is based on potential hazards, economics, and overall fire protection goals. Usually, adding gaseous clean agent systems to supplement standard building fire sprinkler systems is an effective first line of defense for expensive electronics. Gaseous systems are most economical in spaces of under 8,500 square feet and with 15-foot ceilings.
Water mist can be used in the same spaces as clean agents and will cover spaces within the facility not protected by clean agents, such as offices or storage. The water mist system is the facility’s primary fire suppression system, eliminating the need for sprinklers.
Instead of protecting an entire data facility, fire suppression systems can be built to protect a smaller, localized part, or specific equipment inside the area. This saves money over building an entire total-flood suppression system.
Local targets within a Data Center can include areas with potential fire sources, such as HVAC, power and communication cables (often located in subfloors or above floor cable trays) and power rooms (including backup generators, UPS, and battery rooms).
During the design and installation of a localized water mist system, closed, fusible-link, discharge heads must be carefully chosen and positioned to provide maximum coverage. Calculations should include consideration for activation temperature, flow, and distribution patterns for the concerned area. In the event of a fire, the system should only deploy water mist in the spaces where a heat signature is detected.
Many Data Centers prefer to deploy inert gas fire suppression systems. Generally, these systems protect extremely expensive gear; high-end computers, for example, are far more costly to replace than standard x86 servers. In practice, inert gas fire suppression systems take the place of water delivered via sprinkler nozzles. The discharge from an inert gas system, however, has also been shown to damage Data Center servers. Inert gas systems are far superior for protecting IT equipment because they do not compromise electronic circuits, even under full operation. In addition, inert gas systems can extinguish deep-seated fires, including those inside the racks.
Sellers of the system have worked to improve the delivery system for the release of the inert gasses by redesigning nozzles and improving sensors to reduce false signals. The Uptime Institute agrees that improvements have been made.
Recommendations From Vendors Regarding the Use of Inert Gas Systems:
- Installing racks that have doors to muffle the noise
- Installing sound-insulating cabinets
- Using high-quality servers, or solid-state servers and memory
- Slowing the rate of inert gas discharge
- Installing walls and ceilings that incorporate sound-muffling materials
- Aiming gas discharge nozzles away from servers
- Removing power from IT gear before testing inert gas fire suppression systems
- Muffling alarms during testing
- Replicating data to off-site disk storage
Despite improvements to inert gas fire suppression systems, pre-action fire suppression systems (which are water- or carbon dioxide-based) have become more common. The use of water means that facility owners are insured against the total loss of a data center, and the dry-pipe feature protects facilities from an accidental discharge in white spaces. They are also the more economical choice, especially as local codes and ordinances require the use of a water suppression system, whereas an inert gas system is a fairly expensive back-up option.
Still, inert gas fire suppression systems have some followers, and they may make business sense for some companies. Data Center operators can use inert gas where water is scarce, or when a Data Center has very expensive and unique IT gear, such as supercomputers in HPC facilities or old-school tape-drive storage. Even in these instances, organizations may be better off developing improved backup and business continuity plans.
Those Data Centers considering inert gas suppression will be happy to learn that vendors have made considerable revisions to minimize damage from inert gas discharges, along with improved sensors that register fewer false positives. In addition, vendors have developed stricter procedures to minimize inadvertent discharge due to human error, which is the most common cause of accidental discharges.
It is recommended that IT management teams work with risk managers to make sure that all stakeholders have an understanding of a facility’s fire suppression requirements and alternatives before choosing a fire suppression system. Operational considerations should also be included so that the system is fitted to an organization’s risk exposure and the business requirements.
Most Data Centers should take advantage of a combination of a pre-action (dry-pipe) sprinkler system and high-sensitivity smoke detection. Most authorities having jurisdiction (AHJs), risk managers, and insurance companies will agree with this choice as long as other operating requirements are met, such as having a highly trained staff providing building safety protocols. Local and government safety agencies are quite familiar with water-based fire suppression systems, as they are used in the vast majority of installations in the U.S. They may not, however, always be well-versed in pre-action systems.
Finally, Data Center operators should regularly examine their fire suppression systems and remove inert gas systems from spaces where sensitive equipment is no longer located. Keeping such a system deployed where it is not needed is simply impractical.
Security researchers are continuing to analyze a new worm that began spreading rapidly through e-mail systems worldwide earlier today.
Known as W32.Mimail.A, the worm attempts to exploit a vulnerability in Internet Explorer that allows scripting on a user's computer. Researchers at Symantec's security response center rated the worm's damage capabilities as "low," though they said the worm is being widely distributed and that not all of its attributes are yet known.
Mimail.A arrives as a zipped file named “message.zip” in an e-mail with the subject line “your account.” The message, which often appears to come from an administrative account within the users domain, includes the message “Hello there, I would like to inform you about important information regarding your email address. This email address will be expiring. Please read attachment for details. Best regards, Administrator.”
Read Microsofts response
to this new worm.
“The creators of threats like Mimail continually look for ways to trick the average computer user into launching their malware surprises,” said Ian Hameroff, security strategist at Computer Associates International Inc. in Islandia, N.Y. “As such, all users need to keep a constant guard up against these tactics, taking a moment to validate the authenticity of any e-mail with an attachment. Its like the cyber equivalent of looking both ways before crossing the road.”
If the worm is launched, the malware copies itself to %Windir%\videodrv.exe, amends the registry and runs when Windows is restarted. Mimail uses its own SMTP engine to propagate further, security experts said.
The Mimail.A file is approximately 16KB and affects systems running Microsoft Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP and Windows Me.
Researchers at Trend Micro, who rated Mimail a "medium" risk, published manual removal instructions for infected users.
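For illustration, a defensive sweep for the two indicators mentioned above might look like the sketch below. The article does not give the Run-key value name the worm writes, so the script scans all values for a reference to videodrv.exe (an assumption).

```python
# Defensive sketch: look for W32.Mimail.A indicators on a Windows host.
import os
import winreg  # Windows only

def mimail_indicators():
    hits = []
    dropped = os.path.join(os.environ.get("WINDIR", r"C:\Windows"), "videodrv.exe")
    if os.path.exists(dropped):
        hits.append(dropped)
    run_key = r"Software\Microsoft\Windows\CurrentVersion\Run"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, run_key) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:  # no more values
                break
            if "videodrv" in str(value).lower():
                hits.append(f"{name} = {value}")
            index += 1
    return hits

print(mimail_indicators())  # an empty list means neither indicator was found
```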
Protect Your Network and Subscribers – Put a Stop to Phishing
The GSMA defines phishing as an attempt “to steal consumers’ personal identity data and financial account credentials”. SMS phishing (also known as SMiShing) tricks mobile phone owners into downloading a Trojan horse, virus or other malware onto their phones.
How do fraudsters phish?
Email has long been a staple phishing method but, over the years, we have become savvier about the signs of email phishing attempts. SMS phishing attacks, on the other hand, are newer than email phishing schemes and are proving to be a successful venture for criminals because:
- There are more than 7.3 billion mobile phone subscriptions worldwide;
- Approximately two thirds of adult mobile phone owners use text messaging;
- More than 90% of text messages are opened and read within seconds of being received.
What do fraudsters phish for?
Cell phone owners are duped into sharing confidential information, which may include passwords and credit card details. The fraudsters go on to use these details for malicious purposes such as monetary and identity theft.
What are the repercussions of SMS phishing for MNOs?
Since it is virtually impossible to pin down phishing criminals, victims of phishing scams may be quick to project feelings of anger toward their network provider. This could potentially lead to financial losses brought on by subscriber disgruntlement, reputational damage and subscriber churn.
How can HAUD Systems assist MNOs in putting a stop to SMS phishing attacks?
At HAUD, we have developed a complete suite of proprietary modules intended to systematically control all incoming and outgoing traffic on your network and, as a result, put a stop to SMS phishing attacks, among other threats.
Together, the modules form a robust SMS firewall, allowing MNOs to monitor, block and analyse all the SMS traffic entering or leaving the network.
Here’s a short overview about each module:
BulkGuard – A pre-emptive solution that singles out and detects machine generated or bulk traffic by identifying traffic patterns.
MapScreen – Screens GSM MAP packets based on op code and type based error suppression.
HardBlock GT – Enables the whitelisting of roaming and interconnect partners whilst blocking problematic GT ranges.
HardBlock SID – Permits traffic screening based on Sender IDs and screens traffic according to pre-configured parameters.
PhraseBlock – Analyses the message body of the SMS and screens it by using pre-selected keywords or phrases.
HardBlock IMSI – Offers the flexibility and control to manage traffic according to destination IMSI.
BasicStatistics – Provides MNOs with up to date statistics in graphical format of incoming or outgoing traffic.
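As a rough illustration of the keyword-based screening a module like PhraseBlock performs, consider the sketch below; the phrases and the block/allow policy are hypothetical and are not HAUD's actual rules.

```python
# Illustrative keyword screen for SMS message bodies (hypothetical rules).
import re

BLOCK_PHRASES = [
    r"verify your account",
    r"urgent.{0,20}password",
    r"click here to claim",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCK_PHRASES]

def screen_sms(body: str) -> str:
    """Return 'block' if the message matches a configured phrase."""
    return "block" if any(p.search(body) for p in PATTERNS) else "allow"

print(screen_sms("URGENT: reset your password at http://example.test"))  # block
```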
To learn more about how each module can counter the negative effects of SMS phishing attacks whilst protecting your network and subscribers, contact a HAUD specialist today on [email protected] for more information.
We often think of innovation as something that causes disruptive, big-bang changes in the way people work, or distinctly new products (or services) offered to customers - but this is only half of the story. Innovations can be categorized as either disruptive or continuous. Continuous innovation is a process of making small incremental improvements, whereas a disruptive innovation is a “quantum leap” to something fundamentally different and better. Incremental innovation is, by nature, easy for end user communities to digest – as the changes are small and everything else remains familiar.
Disruptive innovations are precisely that: disruptive. They “level up” productivity, but at the cost of taking end users out of their comfort zones. This adds a larger element of risk. The bigger the change, the more resistance you will meet. The more resistance you meet, the larger the chance of failure if end users or customers reject the innovation.
Innovation may be highly visible to the customer or the end user community: new technology-enabled products or services, new ways of doing business with customers, or a new way of operating the business. Or it may involve less perceptible improvements “under the hood” of IT – designed to deliver better service performance or IT operations efficiency. Innovations may involve fundamental changes to a user interface, or the automation of manual labor to shave waste from an internal IT process. They may transform the way business is done and have a seismic impact on company revenue. Or simply be one of many incremental improvements that push down the cost of IT operations. Whatever shape or form an innovation takes, there will always be barriers and landmines – issues that you will need to face in order to turn a new concept of idea into a tangible and sustained benefit.
Quantum Chemical Calculations on Quantum Computers
(ScienceDaily) For the first time, a new quantum algorithm has been implemented for quantum chemical calculations on quantum computers, giving exact solutions of the Schroedinger Equation for atoms and molecules. Solving the Schroedinger Equation (SE) of atoms and molecules is one of the ultimate goals in chemistry, physics and their related fields. The SE is the "first principle" of non-relativistic quantum mechanics; its solutions can predict physicochemical properties and chemical reactions.
Researchers from Osaka City University (OCU) in Japan, Dr. K. Sugisaki, Profs. K. Sato and T. Takui and coworkers, have found a quantum algorithm that enables full configuration interaction (Full-CI) calculations for any open-shell molecule without exponential/combinatorial explosion.
The OCU group said, “This is the first example of practical quantum algorithms, which make quantum chemical calculations realizable on quantum computers equipped with a sizable number of qubits. These implementations empower practical applications of quantum chemical calculations on quantum computers in many important fields.”
Researchers Use Quantum Patterns of Twisted Light to Send Across Single Mode Fiber Optic Cable
(ISPreview.co.uk) A team of South African and Chinese scientists has figured out how to harness multiple quantum patterns of twisted light (from a laser) so that they can be sent across a conventional single-mode fibre optic cable, which can usually support only one pattern; this wouldn't ordinarily be possible without a custom-made fibre.
Quantum links are an attractive prospect for future networks because they're often said to be virtually "un-hackable": they rely on single particles of light (photons) to transmit data encryption "keys" (QKD – Quantum Key Distribution) across an optical fibre.
The teams from the University of the Witwatersrand (South Africa) and Huazhong University of Science and Technology (China) sought to circumvent the current restrictions of single-mode fibre. They did this by "entangling the spin-orbit degrees of freedom of a biphoton pair, passing the polarization (spin) photon down the SMF while accessing multiple orbital angular momentum (orbital) subspaces with the other" (i.e. multidimensional entanglement transport).
The IT industry is one of the fastest growing industries across the globe, thanks in large part to cloud computing, through which large quantities of data are generated and exchanged over the network, helping companies capitalize on their significant investments.

The cloud has allowed enterprises to reap the benefits of computing resources that are usually shared in a virtualized environment, and most companies have now switched over to cloud-based services.

Cloud computing brings up the concept of load balancing, which anyone who is using cloud computing, or is about to move to it, should understand.
What Is Load Balancing In Cloud Computing?
Thousands of users may access a website at the same time. It is challenging for applications to manage the load coming from all of these requests at once; sometimes the load can bring down the entire system.

Load balancing in cloud computing is the process by which workloads and computing resources are distributed across more than one server. The workload is divided among two or more servers, network interfaces, hard drives and other computing resources, resulting in better utilization and system response time.

High-traffic websites require highly efficient load balancing for the smooth operation of their business. Load balancing helps maintain system stability and performance and protects against system failures.
Working Of Load Balancing?
Firstly, note that the "load" in load balancing refers not only to website traffic but also comprises memory capacity, network and CPU load on the server. The primary function of a load balancing technique is to ensure that each system in the network is given a fair share of the work, meaning no system is overloaded or underutilized.

The load balancer distributes requests depending on how busy each server is. Without a load balancer, clients would wait far longer for their data to be processed, which can be frustrating for them.

During this load balancing process, information such as job arrival rate and CPU processing rate is exchanged among the processors. Any failure in the application of load balancers can lead to severe consequences, such as data loss.

Various companies use different load balancers and load balancing algorithms. One of the most commonly used is "round robin" load balancing, sketched below.
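A minimal round-robin balancer can be sketched in a few lines of Python; the server addresses are hypothetical.

```python
# Minimal round-robin load balancer sketch (hypothetical back-end servers).
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        """Hand the next request to the next server in the rotation."""
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for request_id in range(5):
    print(f"request {request_id} -> {lb.next_server()}")
```

Each request simply goes to the next server in the rotation, which keeps the distribution even when requests are roughly uniform in cost; weighted or least-connections algorithms are preferred when they are not.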
Importance of Load Balancing
1. Better Performance
Load balancing techniques are less expensive and easier to implement than their counterparts. Organizations can run their clients' applications much faster and deliver better performance at relatively lower cost.
2. Maintain Website Traffic
Load balancing provides the scalability to control website traffic. With the help of effective load balancers, you can easily manage high volumes of user traffic across your servers and network devices.
Load balancing plays a crucial role for e-commerce websites like Amazon and Flipkart, which deal with millions of visitors every second. Load balancers help them distribute and manage workloads during promotional and sale offers.
3. Handle Sudden Traffic Burst
Load balancers are able to handle sudden spikes in traffic. For example, a college or university website could otherwise go down when results are declared and too many requests arrive at the same time.
With load balancers in place, there is no need to worry about traffic bursts: no matter how heavy the traffic, load balancers divide the entire website load across different servers to deliver maximum results in minimum response time.
The main objective of using a load balancer is to protect the website from sudden failures. When the workload is distributed among a number of network units or servers, even if one node fails, the load can be shifted to another node. This provides scalability, flexibility, and the ability to absorb traffic.
This post should give you a solid understanding of load balancing in cloud computing, an essential part of server management. Today, organizations rely on load balancers for the smooth functioning of their websites, and if you are planning to create your own website, one of your first tasks should be to find suitable load balancers.
Toshiba announced it has developed a chip-based quantum key distribution (QKD) system. This advance will enable the mass manufacture of quantum security technology, bringing its application to a much wider range of scenarios including to Internet of Things (IoT) solutions.
QKD addresses the demand for cryptography which will remain secure from attack by the supercomputers of tomorrow. In particular, a large-scale quantum computer will be able to efficiently solve the difficult mathematical problems that are the basis of the public key cryptography widely used today for secure communications and e-commerce. In contrast, the protocols used for quantum cryptography can be proven secure from first principles and will not be vulnerable to attack by a quantum computer, or indeed any computer in the future.
The QKD market is expected to grow to approximately $20 billion worldwide in FY2035. Large quantum-secured fibre networks are currently under construction in Europe and South-East Asia, and there are plans to launch satellites that can extend the networks to a global scale.
In October 2020, Toshiba released two products for fibre-based QKD, which are based on discrete optical components. Together with project partners, Toshiba has implemented quantum-secured metro networks and long-distance fibre optic backbone links in the UK, Europe, US and Japan.
For quantum cryptography to become as ubiquitous as the algorithmic cryptography we use today, it is important that the size, weight and power consumption are further reduced. This is especially true for extending QKD and quantum random number generators (QRNG) into new domains such as the last-mile connection to the customer or IoT. The development of chip-based solutions is essential to enabling mass market applications, which will be integral to the realisation of a quantum-ready economy.
Toshiba has developed techniques for shrinking the optical circuits used for QKD and QRNG into tiny semiconductor chips. These are not only much smaller and lighter than their fibre optic counterparts, but also consume less power.
Most significantly, many can be fabricated in parallel on the same semiconductor wafer using standard techniques used within the semiconductor industry, allowing them to be manufactured in much larger numbers. For example, the quantum transmitter chips developed by Toshiba measure just 2x6mm, allowing several hundred chips to be produced simultaneously on a wafer.
Andrew Shields, Head of Quantum Technology at Toshiba Europe, remarked, “Photonic integration will allow us to manufacture quantum security devices in volume in a highly repeatable fashion. It will enable the production of quantum products in a smaller form factor, and subsequently allow the roll out of QKD into a larger fraction of the telecom and datacom network.”
Taro Shimada, Corporate Senior Vice President and Chief Digital Officer of Toshiba Corporation comments, “Toshiba has invested in quantum technology R&D in the UK for over two decades. This latest advancement is highly significant, as it will allow us to manufacture and deliver QKD in much larger quantities. It is an important milestone towards our vision of building a platform for quantum-safe communications based upon ubiquitous quantum security devices.”
Part of this work was funded by the Innovate UK Collaborative R&D Project AQuaSeC, through the Industrial Strategy Challenge Fund. The details of the advancement are published in the scientific journal, Nature Photonics.
QKD systems typically comprise a complex fibre-optic circuit, integrating discrete components, such as lasers, electro-optic modulators, beam-splitters and fibre couplers. As these components are relatively bulky and expensive, the purpose of this work was to develop a QKD system in which the fibre-optic circuit and devices are written in millimetre scale semiconductor chips.
Toshiba has developed the first complete QKD prototype in which quantum photonic chips of different functionality are deployed. Random bits for preparing and measuring the qubits are produced in quantum random number generator (QRNG) chips and converted in real-time into high-speed modulation patterns for the chip-based QKD transmitter (QTx) and receiver (QRx) using field-programmable gate arrays (FPGAs).
Photons are detected using fast-gated single photon detectors. Sifting, photon statistics evaluation, time synchronisation and phase stabilisation are done via a 10 Gb/s optical link between the FPGA cores, enabling autonomous operation over extended periods of time. As part of the demonstration, the chip QKD system was interfaced with a commercial encryptor, allowing secure data transfer with a bit rate up to 100 Gb/s.
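The sifting step mentioned above is straightforward to illustrate in software. The toy simulation below sketches BB84-style basis sifting over an ideal, noiseless channel; it is a conceptual aid only and bears no relation to Toshiba's chip implementation.

```python
import secrets

def random_bits(n):
    # Stands in for the output of a quantum random number generator.
    return [secrets.randbits(1) for _ in range(n)]

n = 16
alice_bits = random_bits(n)    # raw key material
alice_bases = random_bits(n)   # 0 = rectilinear, 1 = diagonal
bob_bases = random_bits(n)     # Bob chooses his measurement bases independently

# On an ideal channel, Bob's result equals Alice's bit when the bases
# agree; when they differ, the outcome is random and will be discarded.
bob_bits = [a if ab == bb else secrets.randbits(1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both parties publicly compare bases and keep matching positions.
sifted = [(a, b) for a, b, ab, bb in
          zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
print("sifted key length:", len(sifted), "of", n)
print("keys agree:", all(a == b for a, b in sifted))
```

In a real system, sifting is followed by error correction and privacy amplification, which the hardware described here performs over the 10 Gb/s link between the FPGA cores.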
To promote integration into conventional communication infrastructures, the QKD units are assembled in compact 1U rackmount cases. The QRx and QTx chips are packaged into C-form-factor-pluggable-2 (CFP2) modules, a widespread form-factor in coherent optical communications, to ensure forward compatibility of the system with successive QKD chip generations, making it easily upgradeable. Off-the-shelf 10 Gb/s small-form-factor pluggable (SFP) modules are used for the public communication channels.
Taofiq Paraiso, lead author of the Nature Photonics paper describing the chip-scale QKD system, says: “We are witnessing with photonic integrated circuits a similar revolution to that which occurred with electronic circuits. PICs are continuously serving more and more diverse applications. Of course, the requirements for quantum PICs are more stringent than for conventional applications, but this work shows that a fully deployable chip-based QKD system is now attainable, marking the end of an important challenge for quantum technologies. This opens a wide-range of perspectives for the deployment of compact, plug-and-play quantum devices that will certainly strongly impact our society.”
MIT researchers claim to have created an AI model that sets a new standard for understanding how a neural network makes decisions.
The team from MIT Lincoln Laboratory’s Intelligence and Decision Technologies Group has developed a neural network that performs human-like reasoning to answer questions about the content of images.
As it solves problems, the Transparency by Design Network (TbD-net) shows its workings by visually rendering its decision-making process, allowing the researchers to see the reasoning behind its conclusions.
Unusually, not only does this model achieve new levels of transparency, it also outperforms most of today’s best visual-reasoning neural networks.
The research is presented in a paper called Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning.
The complexity of brain-inspired neural networks makes them remarkably capable, yet it also renders them opaque to human understanding, turning them into so-called ‘black-box’ systems. In some cases, it’s impossible for researchers to trace the course of a neural network’s calculations.
A transparent neural network
In the case of TbD-net, its transparency allows researchers to correct any erroneous assumptions the system may have made. Its developers say this type of corrective mechanism is missing from other leading neural networks today.
Self-driving cars, for example, must be able to rapidly and accurately distinguish pedestrians from road signs. Creating a suitable AI to do that is hugely challenging, given the opacity of many systems. Even with a capable enough neural network, its reasoning process may be unclear to developers – a problem that MIT’s new approach is set to change.
Ryan Soklaski, who created TbD-net with fellow researchers Arjun Majumdar, David Mascharka, and Philip Tran, said:
Progress on improving performance in visual reasoning has come at the cost of interpretability.
The team took a modular approach to their neural network – building small sub-networks that are specialised to carry out subtasks. TbD-net breaks down a question and assigns it to the relevant module. Each sub-network builds on the previous one’s conclusion.
“Breaking a complex chain of reasoning into a series of smaller sub-problems, each of which can be solved independently and composed, is a powerful and intuitive means for reasoning,” said Majumdar.
The neural network’s approach to problem solving is similar to a human’s reasoning process. As a result, it is able to answer complex spatial reasoning questions such as, “What colour is the cube to the right of the large metal sphere?”
The model breaks this question down into its component concepts, identifying which sphere is the large metal one, understanding what it means for an object to be to the right of another one, and then finding the cube and interpreting its colour.
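The shape of this composition is easy to see in a symbolic sketch. The functions below are hand-written stand-ins for TbD-net's learned neural modules (which operate on image features and attention maps, not dictionaries), and the scene, attribute names, and module signatures are invented for illustration.

```python
# Hypothetical symbolic modules mirroring the decomposition described above.
def attend(scene, **attrs):
    """Keep only the objects whose attributes match the description."""
    return [o for o in scene if all(o.get(k) == v for k, v in attrs.items())]

def relate_right_of(scene, anchors):
    """Keep objects lying to the right of at least one anchor object."""
    return [o for o in scene if any(o["x"] > a["x"] for a in anchors)]

def query_color(objects):
    return [o["color"] for o in objects]

scene = [
    {"shape": "sphere", "size": "large", "material": "metal", "color": "gray", "x": 2},
    {"shape": "cube", "size": "small", "material": "rubber", "color": "red", "x": 5},
]

# "What colour is the cube to the right of the large metal sphere?"
spheres = attend(scene, shape="sphere", size="large", material="metal")
right_of = relate_right_of(scene, spheres)
cubes = attend(right_of, shape="cube")
print(query_color(cubes))  # ['red']
```

Each step consumes the previous step's output, which is exactly what makes the chain inspectable: any intermediate result can be examined on its own.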
The network renders each module’s output visually as an ‘attention mask’. A heat-map is layered over objects in the image to show researchers how the module is interpreting it, allowing them to understand the neural network’s decision-making process at each step.
Despite designing the system for greater transparency, TbD-net also achieved state-of-the-art accuracy of 99.1 percent, using a dataset known as CLEVR. And thanks to the system’s transparency, the researchers were able to address faults in its reasoning and redesign some modules accordingly.
The research team hopes that such insights into a neural network’s operation may help build user trust in future visual reasoning systems.
Internet of Business says
The opaque nature of many neural networks risks creating systemic and ethical problems, not to mention allowing bias into the system unchecked.
However, when visual reasoning neural networks are made more transparent, they typically perform poorly on complex tasks, such as on the CLEVR dataset.
Past efforts to overcome the problem of black box AI models, such as Cornell University’s use of transparent model distillation, have gone some way to tackling these issues, but TbD-net’s overt rendering of its reasoning process takes neural network transparency to a new level – without sacrificing the accuracy of the model.
The system is capable of performing complex reasoning tasks in an explicitly interpretable manner, closing the performance gap between interpretable models and state-of-the-art visual reasoning methods.
With computer vision and visual reasoning systems set to play a huge part in autonomous vehicles, satellite imagery, surveillance, smart city monitoring, and many other applications, this represents a major breakthrough in creating highly accurate, transparent-by-design neural networks.
In our previous tutorials, we have considered the two major architectures that have been developed to support Voice over IP (VoIP) networks: H.323, developed by the International Telecommunications Union Telecommunications Standard Sector (ITU-T), and the Session Initiation Protocol (SIP), developed by the Internet Engineering Task Force (IETF). In our first tutorial on H.323, we considered the history and architecture of that standard, and looked at the four key components of an H.323 system: Terminals, Gateways, Gatekeepers, and Multipoint Control Units (MCUs). This tutorial extends the discussion of H.323 by focusing on the architecture of the H.323 terminal, and the protocols that are required to support the terminal functions.
The actual title of H.323, Packet-Based Multimedia Communications Systems, yields a few clues regarding the protocols that will be required to support an H.323-compatible terminal. The Packet-Based part indicates that we are considering a packet-switched, not circuit-switched, network environment, such as the Internet and its foundation protocol, the Internet Protocol (IP). The Multimedia part indicates that support for voice, data and/or video systems may be required, depending upon the end user objectives and the channel deployed in support of that end user communication. The Systems part indicates that connectivity between H.323 environments will be required, and with that come issues of multivendor interoperability, which raise the immediate attention of many network managers.
To support these functions requires an intelligent terminal that takes audio, video and/or data inputs, begins a dialogue with another compatible system, and reliably transports that information across a packet network to the other system. The H.323 terminal therefore requires several key components:
- Audio coder/decoder (codec): takes the analog audio signal from a microphone and converts it into a digital format that can be transmitted across the packet network. At the receiver, the opposite function (decoding) is performed, thus reconstructing the analog signal for human consumption. H.323 references other ITU-T standards for audio codecs that have been previously developed. These include: G.711, G.722, G.723, G.728 and G.729, all of which have specific encoding algorithms, data rates, and related technical specifications. At least one audio codec (G.711, operating at 64 Kbps) is required within the H.323 system, and other codecs may be optionally included.
- Video codec: takes the video information from a camera, and converts it into a transmittable form. At the receiver, an inverse function is performed (again called decoding), so that it can be displayed for human consumption. Two ITU-T video codec standards are referenced in H.323: H.261 and H.263. Video is an optional media type for H.323, therefore the video codec may or may not be included in the system. (This brings about the first interoperability challenge. If my H.323 terminal supports both audio and video, but yours only supports audio, we can therefore talk, but not share video information. Both of our terminals are compliant with H.323, however, but not completely compatible, since the video support is an optional feature.)
- User Data Channel: provides a data channel within the H.323 system to support applications such as still images, file transfers, audiographics conferences, and database access. To support the audiographics conferencing function, the ITU-T has developed the T.120 standard, which complements H.323.
- Registration, Admission, and Status (RAS): Gatekeepers are optional devices that provide for network management functions, such as bandwidth management. When a Gatekeeper is active on the network, endpoints register with that Gatekeeper using a process called RAS, which stands for Registration, Admission, and Status. When no Gatekeeper exists on the network, RAS is not used. The RAS signaling function is defined in the complementary H.225 standard.
- Call Signaling: the process of establishing or taking down a connection between two communicating entities is called signaling, which finds its historical roots in the on-hook, off-hook, dial tone, busy tone, and other signals that have been incorporated into the telephone network for decades. H.323 endpoints must also establish a connection between themselves prior to further communication, and this call signaling process for H.323 endpoints is defined in the H.225 standard.
- End-to-End Control Signaling: the H.323 systems must have a means to govern the operation of the two communicating endpoints. Another complementary standard, designated H.245, provides functions such as opening and closing logical channels, exchanging station capabilities, requesting a particular mode of operation, controlling the end-to-end flow of information, determining round trip delay, and so on. Note that the End-to-End Control Signaling is separate from the H.225 Call Signaling, and that the Call Signaling channel operates first (to establish the connection) prior to the Control Signaling functions (which control the communication once the channel has been established).
- Media Stream Packetization: the voice, data, and/or video information (which, in the case of voice and video, will have passed through a codec) must be placed inside a packet for delivery to the remote H.323 system. The H.225 standard specifies that the IETF Real-time Transport Protocol/Real-time Transport Control Protocol (RTP/RTCP) be used for these functions (see the sketch after this list).
- Network Interface: the packet-based network interface is implementation-specific, and therefore outside the scope of H.323. From a product standpoint, however, many H.323-compliant devices support the Ethernet/IEEE 802.3 standard at the network interface, making for simpler integration of the H.323 device with most local area networks.
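As a concrete illustration of the packetization step above, the sketch below packs the 12-byte fixed RTP header defined in RFC 3550. It is deliberately simplified (no CSRC list, no header extensions) and is not drawn from any particular H.323 implementation; the payload values assume a 20 ms G.711 μ-law frame.

```python
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    """Pack the 12-byte fixed RTP header (RFC 3550): version 2,
    no padding, no extension, zero CSRCs."""
    byte0 = 2 << 6                                      # version field
    byte1 = ((marker & 0x1) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

# Payload type 0 = PCMU (G.711 mu-law); a 20 ms frame is 160 samples
# at 8 kHz, so the timestamp advances by 160 for each packet sent.
packet = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x1234ABCD)
print(packet.hex())
```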
As demonstrated above, the H.323 standard is really an “umbrella” under which other standards, such as H.225, H.245 and RTP/RTCP fit. In our next tutorial, we will examine how all of the standards work together to provide end-to-end signaling and endpoint communication.
Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
A production plant must be plannable and flexible: it is important to know when a production order will be completed or how quickly a new production order can be started. Possible errors and slowdowns should be easy to identify and correct, and potential improvements should be recognizable. For this purpose, parameters can be defined that provide general information about the plant. An example of the quality of plannability is the variance of the production time for similar products. An example of both plannability and flexibility is the throughput rate, which describes how many workpieces a plant produces per unit of time (e.g., per hour).
Such parameters, aggregated at the plant level, do not allow a direct drill-down, e.g., to the potential causes of a greater variance in production times or to the sub-processes that slow down production. Process mining can start at this point, because it allows a drill-down to individual process sections.
Process Mining is the visualization and analysis of processes on the basis of event logs. In general, event logs are protocols of IT-based processes. During a manufacturing process, data is generated at workstations where workpieces are scanned and processed. If you follow the path of individual workpieces from workstation to workstation, you get their production paths. It is possible to filter for specific sections of the production paths, analyze and understand them individually.
In production processes with a high degree of standardization, many workpieces have the same production route. Therefore, the most frequent production paths over a longer period of time can be interpreted as a digital image of the production plant. The time period should be at least such that the majority of the production processes described in the data have been completed.
This digital image of the plant learned from the data is not perfect, because there may be missing production routes with rarely used workstations. However, with little effort you can quickly obtain a lot of information about a production plant, information about the majority of the data and useful insights (Pareto principle). Only very little data is needed to create an image of the production plant. It is sufficient to know when which work piece was observed at which workstation.
If you now look at the frequency of individual events for a selected time period, you get the throughput rate of the workstation described by the event. It describes the “speed” at which workpieces are processed. But Process Mining offers even more information. Since the production route is also learned from the data, changes in throughput rates can be detected.
For example, let’s assume a process in which a workpiece has to pass through four workstations A to D. All stations have a target throughput rate of 65 pieces per hour (65/h), shown in the process model below.
Between 3 and 4 hours after the start of work, the rate of finished workpieces drops to 57/h, even though not all planned workpieces have been produced. If one now filters to the time interval of 3 to 4 hours after the start of work, one can immediately see at which point in the production plant the throughput rate fell.
The rate first decreased when machining at or waiting at workstation C. Now it is possible to identify the causes of the problem in a targeted manner and prevent it for future production orders.
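Computing such a per-station throughput from an event log is a small exercise. The sketch below assumes a simplified log of (workpiece, station, hours-since-shift-start) triples with invented numbers; real event logs carry full timestamps and many more attributes.

```python
from collections import Counter

# Simplified event log: (workpiece_id, station, hours since start of work)
events = [
    ("wp1", "A", 3.10), ("wp1", "B", 3.30), ("wp1", "C", 3.80),
    ("wp2", "A", 3.20), ("wp2", "B", 3.45), ("wp2", "C", 3.95),
    ("wp3", "A", 3.50), ("wp3", "B", 3.70),
]

def throughput_per_station(events, t_start, t_end):
    """Count how many workpieces each station processed inside the window."""
    return Counter(station for _, station, t in events if t_start <= t < t_end)

# Filter to the interval 3 to 4 hours after the start of work, as in the text.
print(throughput_per_station(events, 3.0, 4.0))
```

Comparing the counts per station against their targets immediately points to the station where the rate first dropped.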
With filter settings like these, the soon-to-be-launched Appian AI-Driven Process Mining will give you the ability to drill down to individual process key figures and sections in a whole new way. The tool thus provides insight into your processes and findings that would not be possible in this form with conventional methods of process analysis.
Learn how process mining can provide valuable insights into your processes.
|
<urn:uuid:925f1136-1b8f-4f45-8c9c-79172876674d>
|
CC-MAIN-2022-40
|
https://appian.com/blog/2021/process-mining-in-manufacturing.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00448.warc.gz
|
en
| 0.940143 | 757 | 2.921875 | 3 |
Virginia Tech: EMBERS
Forecasting the Future: The EMBERS Predictive Analytics Success Story
Would Scotland be an independent country today if it had not scheduled a vote? Some say the United Kingdom narrowly held onto Scotland because the British government pledged “more self-rule” to Scots less than two weeks before they were to vote on Scottish independence. Polls indicated the vote was “too close to call” for weeks leading up to the referendum on September 18, 2014.
Advance warning was the key. The UK government was able to act because they were forewarned, but it is rare that imminent, major, disrupting events come scheduled with so much lead time. Until now.
For about two years, the EMBERS project has been forecasting civil unrest events in Latin America with an average of seven days lead time. This project led by Virginia Polytechnic Institute and State University (Virginia Tech) is one of the most positive and significant examples of the much-touted power of big data living up to its reputation. That is, if properly mined and harnessed, open source big data can reveal startling insights with real-world impacts.
Throwing Down The Gauntlet
Following the Arab Spring—a series of populist upheavals in the Middle East from early 2011—government analysts in the Office of the Director of National Intelligence (ODNI) asked “Could we have foreseen these events?” That question became an initiative put forth by the Intelligence Advanced Research Projects Activity (IARPA) called the Open Source Indicators (OSI) Program, which challenged applicants “to develop methods for continuous, automated analysis of publicly available data in order to anticipate and/or detect significant societal events, such as political crises, humanitarian crises, mass violence, riots, mass migrations, disease outbreaks, economic instability, resource shortages, and responses to natural disasters.” Essentially to “beat the news.”
Taking Up The Gauntlet
In April 2012, Dr. Naren Ramakrishnan, Director of the Discovery Analytics Center at Virginia Tech organized a multidisciplinary team from academia and industry to launch the EMBERS (Early Model-Based Event Recognition using Surrogates) project, with an initial focus on forecasting population-level events (civil unrest, elections, disease outbreaks, and domestic political crises) in Latin America. EMBERS was to realize the aims of the OSI Program by automating the generation of alerts so that analysts could focus on interpreting the discoveries, rather than the mechanics of integrating information.
Dr. Ramakrishnan has over 20 years experience working with big data, including his PhD work in computer sciences at Purdue University, current professorship at Virginia Tech, and leadership of its Discovery Analytics Center. The center brings together researchers from computer science, statistics, mathematics, and electrical and computer engineering to tackle knowledge discovery problems in intelligence analysis, sustainability, and electronic medical records. He is also an active contributor and reviewer for numerous academic publications, and has received many awards for teaching and research excellence.
The approach of Dr. Ramakrishnan’s team was a human/computer collaboration.
They combined human expertise from subject matter experts (SMEs)—to devise and seed models—with computational power and natural language processing—to cope with the sheer enormity of examining open source big text.
For Venezuela, SMEs pointed out that because prices are heavily controlled by the Venezuelan government, shortages naturally arise from this structure. Therefore, models were set up to monitor increased chatter about the shortage of essential goods. Sure enough, a shortage of toilet paper was connected to protests there.
As a second example, Ecuador is a popular country for referenda because the various constitutions of the past 20 years permit any laws proposed by the president and rejected by the Congress to be taken to a referendum. Thus close attention to lawmaking and popular votes was important in Ecuador. In these ways, SMEs shared very local, country-specific issues, that helped the engineers design better models.
May The Best Algorithm Win
From the outset, it was clear to Dr. Ramakrishnan that EMBERS needed to be nimble, to test models quickly, and then rapidly incorporate what they learned with each iteration. Thus, instead of one master algorithm that tries to forecast everything, the team at Virginia Tech took an ensemble approach.
Different content — medical news, tweets, political news, activist blogs — is fed to a diverse set of modules that focuses on forecasting different events: elections, civil unrest, disease outbreaks, etc. This works out to 6-8 algorithms developing alerts for each event class, with each algorithm having different biases, and using different combinations of data and models to produce competing forecasts.
In the end, a master “fusion” module probabilistically combines the forecasts of the various individual algorithms into one final forecast. Perhaps two algorithms of the six tend to be more accurate in Argentina, while some others have a better understanding of El Salvador, and the fusion module learns to weigh the predictions of these algorithms accordingly.
“The fusion engine combines different forecasts from competing modules, like a mix of expert analysts. We’ve found this to be one of the best ways to keep improving the system, rather than making one magic model that tries to do everything, but does nothing really well,” Dr. Ramakrishnan said.
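While the EMBERS fusion module is described as probabilistic, its core idea (weighting each model's forecast by its track record) can be sketched with a simple weighted average. The model names, weights, and probabilities below are invented for illustration and are not taken from the actual system.

```python
def fuse(forecasts, weights):
    """Combine per-model confidences for one candidate event into a single
    score, giving more influence to historically accurate models."""
    total = sum(weights[m] for m in forecasts)
    return sum(p * weights[m] for m, p in forecasts.items()) / total

# Hypothetical weights, e.g. learned per country from past performance.
weights = {"model_1": 0.9, "model_2": 0.4, "model_3": 0.6}
forecasts = {"model_1": 0.8, "model_2": 0.3, "model_3": 0.5}
print(round(fuse(forecasts, weights), 3))  # 0.6
```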
Extracting Signals From The Noise: The Unstructured Data Challenge
The obvious challenge of Big Data is the sheer quantity of “stuff” that has to be examined to find the useful bits that form a pattern or coalesce into “pictures” that support a forecast.
Imagine a beach where each grain of sand is like a mosaic tile; however, many of the tiles are tied up in bags both large (news articles) and small (tweets). In some cases the bags may be neatly identified as “square, blue, glass”—what we call “structured data”—but in other cases, it takes opening the bags and dumping out the contents to sort them by color, material, shape, or other criteria—analogous to “unstructured data.”
For Latin America, at least 60% of EMBERS’ alerts are generated from unstructured data: 35% from social media (including tweets) and 25% from news stories. The remaining 40% come from a mix of sources including historical data and, highly structured data (such as food and commodity prices, economic indicators), and other reports.
Message Enrichment: Responding To The Challenge
So, how does EMBERS label all these tiles of information for its forecasting modules? A “message enrichment” step in EMBERS structures the unstructured data with the help of BasisTech’s Rosette® text analytics platform. Rosette is the “bag-opener and sorter,” enriching the text and applying metadata to feed the next steps in the process. For example, Rosette combs through the Twitter feed, news feeds, and blogs, sorting them into categories: “Spanish, Portuguese, English, French” or “noun, verb, adjective” or “date/time, person, location, organization.”
Additional enrichment modules may add yet more information to the Rosette output: for example, taking extracted dates and times, such as “el sábado próximo” (“next Saturday”), and converting them to actual dates, or passing mentions of locations to a geocoder to convert them into geographic coordinates.
Not every EMBERS module uses data from Rosette, but for those that do, the enrichment is indispensable. Once each “tile” has been tagged and labeled, the modules can pick out the necessary ones from among the trillions of pieces of data.
From the start, BasisTech engineers worked closely with the Virginia Tech team to configure Rosette, making small adjustments to accommodate the needs of the various forecasting modules.
“It was good that BasisTech was adaptable in meeting our needs. This is an iterative process, and if something is not working, we need to adjust,” said Dr. Ramakrishnan. “We made several changes to Rosette in the beginning to have it take into account the various types of data. But once we were happy with the output, it became a convenient black box for integration, supporting many different languages and many different language processing functions.”
“We’re proud to be part of this groundbreaking project,” said Bill Ray, VP of Federal Sales at BasisTech. “Because of the flexible nature of our Rosette linguistics platform, we’ve been able to adapt it to the needs of the EMBERS project, and in the process gain new insights and share best practices.”
The Embers System In Action
EMBERS is a fully automatic system running 24×7 without human intervention that digests nearly 20GB of open source data a day. The data comes from over 19,000 blog and news feeds, tweets, Healthmap alerts and reports, Wikipedia edits, economic indicators, opinion polls, weather data, Google Flu Trends, and even some non-traditional data sources, like parking lot imagery and online restaurant reservations.
EMBERS began operation in November 2012, focusing on 20 Latin American countries and producing “warnings” that forecast these events.
For instance, a civil unrest warning, comes with several pieces of information:
- When: predicted date of event
- Where: location of event to the city-level
- Who: population segment
- Why: reason for unrest
- Probability: confidence level of the prediction
- Forecast date: date the warning was produced
Flexible Phrase and Keyword Matching
Some of the warnings are generated by detecting mentions of future dates and times. Others are generated by machine learning models. In both cases, fine-grained information extraction is key.
For instance, calls for protests may be embedded in a short phrase in social media messages, containing key information about the date, time, location, and significance of an event. Therefore, knowing the role of each word in unstructured text is invaluable.
The modules that process flexible phrase and keyword matching are looking for similar messages in social media. Rosette’s language identification detects the language of each message, and then Rosette applies a language-specific module to tag each word of say, a tweet, with part-of-speech information. Based on that structure, EMBERS modules can match “chamar protesto” to “chamar um protesto” or “chamada para um protesto” (“call protest” to “call a protest” or “call for a protest”). Then the similarly phrased messages can be screened for dates and times.
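A toy version of this flexible matching can be written with a hand-made lemma table: the phrase matches if its content-word lemmas appear, in order, in the message. The lemma table and stopword list below are invented for illustration; EMBERS relies on Rosette's full morphological analysis rather than hard-coded rules like these.

```python
import re

# Toy Portuguese lemma table and stopword list, invented for illustration.
LEMMAS = {"chamada": "chamar", "chamar": "chamar"}
STOPWORDS = {"um", "uma", "para", "de"}

def normalize(text):
    tokens = re.findall(r"\w+", text.lower())
    return [LEMMAS.get(t, t) for t in tokens if t not in STOPWORDS]

def matches(message, phrase):
    """True if the phrase's lemmas occur, in order, within the message."""
    it = iter(normalize(message))
    return all(lemma in it for lemma in normalize(phrase))

for msg in ["chamar um protesto", "chamada para um protesto", "futebol hoje"]:
    print(f"{msg!r} -> {matches(msg, 'chamar protesto')}")
```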
The phrase dictionary is systematically enlarged by an algorithm that screens the output from Rosette to find new vocabularies for use in future runs.
Dates and Times
It’s not surprising that the ability to extract and use dates and times plays a leading role in forecasting. Some time and date information is structured, such as timestamps in some messages, but when they occur in unstructured text, these dates and times are found by Rosette’s entity extraction function. The TIMEN module then resolves these temporal mentions to absolute values, such as turning “dentro de quince días” (“in a fortnight”) into “05 de Octubre 2014.”
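Resolving such mentions comes down to simple date arithmetic once the expression is recognized. The lookup below is a hypothetical stand-in for what TIMEN does; real temporal taggers handle a far richer grammar of expressions.

```python
from datetime import date, timedelta

# Hypothetical lookup of relative Spanish temporal expressions.
RELATIVE_DAYS = {"mañana": 1, "dentro de quince días": 15}

def resolve(mention, publication_date):
    """Turn a relative temporal mention into an absolute date."""
    if mention in RELATIVE_DAYS:
        return publication_date + timedelta(days=RELATIVE_DAYS[mention])
    if mention == "el sábado próximo":
        # Days ahead to the next Saturday (weekday 5), at least one day away.
        ahead = (5 - publication_date.weekday()) % 7 or 7
        return publication_date + timedelta(days=ahead)
    return None

print(resolve("dentro de quince días", date(2014, 9, 20)))  # 2014-10-05
```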
Location, Location, Location
EMBERS searches for three types of location information to identify:
- the source location of a given message or piece of data
- the topical location that is under discussion in the material, and
- the user location of primary affiliation of the author
Some of the location information may be structured within a tweet (e.g., Twitter geotags), but other locations are mentioned in the unstructured Twitter message, e.g. “#UnidadEnlaCalle MAÑANA protesta Jueves #16Oct a las 12:15 en la Calle Principal Briceno Iragorri, Caracas” (“#UnityInTheStreets TOMORROW protest Thursday #16Oct at 12:15 in the Main Street Iragorri Briceno, Caracas”).
Even locations within the names of organizations may give hints to the source or topical location, e.g., “Malaga FC” in a tweet “Roque Santa Cruz, jugador del Malaga FC visito hoy Santa Cruz en Chile.” (“Roque Santa Cruz, FC Malaga player, today visited Santa Cruz in Chile.”).
Through entity extraction, locations under discussion can be discovered from the text of the tweet. Structured location data—from Twitter geotags or Twitter places and text fields in the user profile—are also added to each message during the data enrichment phase.
Names of people and organizations are also extracted by Rosette to support an EMBERS module that detects mentions of key players. Human analysts compile a list of these significant people (limited to public figures) and organizations for the module. By using the results of entity extraction, the system is more accurate than just conducting a word-to-word match.
Consider that words used for personal names can also represent things that are not people.
The Path From 0 To 50 Alerts/Day
To evaluate the success of forecasts from the Virginia Tech team and the two competing teams, the accuracy of warnings were judged by an independent, third-party group, MITRE. From the start MITRE was tasked to develop “ground truth” by looking at newspaper articles for reports of civil unrest. The MITRE team generated these gold standard reports (GSRs) which were used both as training data for the various team’s models, and as a criteria for measuring success.
EMBERS started delivering warnings within six months (in November 2012) for Latin America, and by the end of the first year, was demonstrating some predictive power, but not enough to call it an unqualified success. The minimum quality standard as determined by the IARPA challenge was a 3.0 on a 4 point scale (with 4 being perfect). At the end of year one, EMBERS was flirting with this minimum quality score, but not exceeding it.
According to Chris Walker, project manager of EMBERS, about 18 months into the project, new approaches to tune and optimize the generation of warnings were developed and that led to a big improvement in performance. The team developed a suppression engine that learns to estimate the quality of warnings and automatically suppresses those that are deemed to be of poor quality.
By the second year, EMBERS was consistently producing forecasts rated well above 3.0 with better lead times for civil unrest in particular. In addition to the suppression engine, the second factor for success was that the team had figured out which data sources added the most efficiency to forecasting, and how to adjust the ensemble of models to capitalize on this insight. For example, restaurant cancellations at OpenTable.com were highly linked with flu, and satellite photos showing fuller hospital parking lots were linked with disease spread.
The Fruits Of Labor
By March of 2014, after 17 months of producing alerts, EMBERS was beating the news and the competition:
- Over 10,000 warnings delivered
- Around 40-50 warnings per day
- Correctly forecasted the protests during the “Brazilian Spring” in the summer of 2013, which were spread out over three weeks, involving hundreds of protests.
- Correctly forecasted student-led protests in Venezuela in early 2014. EMBERS also correctly forecasted that the Venezuelan protests would turn violent, as they did.
- EMBERS exceeded its two-year metrics goals in three criteria, met on one, and underperformed—by very little—in a fifth criterion.
Future For Embers: The Middle East and Beyond
In June 2014, fresh off their success in Latin America, the EMBERS team started work on forecasting events in the Middle East. This new regional focus on seven countries in the Middle East presents different challenges to both the human and computer aspects of the collaboration.
Key to producing reliable alerts in the Middle East will be understanding the influence of cultural issues on forecasting. The EMBERS team is learning to model for things that have no parallels in Latin America. For example, lessons into how Latin American citizens express discontent do not quite hold for the Middle East.
“The Middle East is a big circle, and each country has a different cultural and historical context which has to be modeled, so our subject matter experts are crucial in figuring out what to look out for and model,” Dr. Ramakrishnan said. “We have to understand what a protest means in a particular country, because the cultural context may imply something different in a different country.”
The addition of Arabic has required some adjustments on the text processing components of EMBERS.
Existing geolocation tools need to have Arabic location entities in Arabic script translated to the Roman A-to-Z script. To address that, the name matching and name translation functions of Rosette have been added to the stack of Basis Technology analytics working with EMBERS.
The accurate translation of words for geolocation is tricky as certain Arabic words and phrases have shifting meanings from one dialect to the next, and common nouns may be spelled identically to proper nouns.
One Arabic word can mean “two uncles” or “Amman” (the capital of Jordan) depending on the context.
Arabic adds a unique twist in that social media users may write in Arabizi (Arabic words written with Western A-to-Z letters and numbers standing in for Arabic characters). Before any natural language processing software can analyze Arabizi, it must be translated into standard Arabic script.
If enough Arabic social media data is in Arabizi, then Rosette can help tackle the very thorny task of converting Arabizi to standard Arabic script.
spchapi.exe is the file that can cause system issues
spchapi.exe is an executable file whose primary purpose is to start a process or launch a program on the machine. Unfortunately, a parasite can use it to spread threats on the machine or launch viruses. Once executed, the possibly malicious file runs a process that is responsible for the parasite's payload. The executable file can be a significant part of a dangerous threat, but it can also work on its own. It can be installed by a trojan, a keylogger, or other threats, so treat the file with caution.
|Possible issues|Speed issues, internet issues, crashes, and freezing of the PC|
|Removal|You should use an antivirus tool and check whether the piece is malicious before deleting it|
|Repair|Run ReimageIntego and repair any possible damage on the machine|
spchapi.exe is one of the many executable files that can be found running in the background on the system. Executables are generally legitimate files, in this case developed by Microsoft Corporation, but some of them can be misused by malicious actors. The legitimate file is normally located in C:\Program Files, so if you notice it in a different location, be concerned.
A virus can use a DLL or EXE file to hide malicious processes. A file created by malware authors may be named after spchapi.exe to mask its purpose. If that happens, you may notice the following symptoms:
- Unstable internet connection
- Browser redirects to unwanted websites
- Poor PC performance
- System slowdowns
You are highly advised to scan the system before you choose to delete the spchapi.exe executable and terminate all the processes it started. It may be a file installed by harmless, legitimate software and therefore pose no threat to your privacy or the system. Run an anti-malware tool to determine whether it is safe to remove the piece.
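For a quick first triage, you can check where a process with this name is actually running from. The sketch below uses the third-party psutil library and the conventional install locations mentioned above; it is only a hint, not a substitute for a proper anti-malware scan.

```python
import psutil  # third-party: pip install psutil

SUSPECT = "spchapi.exe"
# Conventional install prefixes, per the guidance above; adjust as needed.
EXPECTED = (r"C:\Program Files", r"C:\Program Files (x86)")

for proc in psutil.process_iter(["name", "exe"]):
    if (proc.info["name"] or "").lower() == SUSPECT:
        path = proc.info["exe"] or "<unknown>"
        verdict = "expected location" if path.startswith(EXPECTED) else "SUSPICIOUS"
        print(f"{SUSPECT} running from {path} -> {verdict}")
```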
With the condition putting people at risk of further attacks of toxoplasmosis that can progressively damage the retina and lead to vision loss, experts are calling for increased awareness of the risks of eating raw and undercooked meat.
Closely associated with cats, Toxoplasma is a parasite that causes the infectious disease known as toxoplasmosis. Many animals around the world are infected, generally contracting the disease in environments soiled by infected cats or by consuming other infected animals.
“Considering Australia’s substantial population of feral cats that are known to be infected, alongside high levels of farming and diets rich in meat, it’s imperative we understand the prevalence of the disease across the country,” says study senior author Professor Justine Smith, Strategic Professor in Eye & Vision Health at Flinders University.
“While there is no cure or vaccine, the symptoms of toxoplasmosis vary depending on the age, health and genetics of the infected individual. Many people are asymptomatic, but the most common disease that we see in the clinic is retinal inflammation and scarring known as ocular toxoplasmosis.
In the study, published in the journal Ophthalmology Retina, Professor Smith and her team analyzed retina photographs of more than 5,000 people living in the Busselton area in Western Australia, previously collected to evaluate the prevalence of glaucoma and age-related macular degeneration for a long-term healthy aging study.
“Among the 5,000 people, we found eight participants with blood test-confirmed toxoplasmic retinal scars. Add to that that about three-quarters of the retinal lesions would be in a position not visible in these particular photographs, we were able to estimate the prevalence of ocular toxoplasmosis to be one per 149 persons,” says Professor Smith.
With previous research showing the infection can lead to reduced vision in more than 50% of eyes and even blindness, the authors say it is important for people understand the risk factors of toxoplasmosis and ways to avoid it.
“While people are often familiar with pregnant women needing to avoid cat litter trays, we also need everyone to know that preparation of meat is an important risk factor,” says Professor Smith.
Research by Professor Smith in 2019 highlighted a high prevalence of Toxoplasma in Australian lamb sold in supermarkets.
“Add to that that it’s now becoming more common to prepare meat in and out of restaurants to be purposefully undercooked or raw, then the likelihood of people becoming infected with Toxoplasma increases.
“We need people to be aware this disease exists, so they can make informed decisions about how they prepare and eat their meat. The parasite can be killed easily by cooking the meat to an internal temperature of 66ºC (or medium) or by freezing it prior to cooking.”
The research follows on from a series of papers recently published by Professor Smith and team on the condition, including one that uses new technology retinal imaging to show the changes that occur in ocular toxoplasmosis at the tissue-level, and another that highlights the best clinical practice for managing the disease.
“Prevalence of Toxoplasmic Retinochoroiditis in an Australian Adult Population: a Community-Based Study,” by Lisia B. Ferreira, João M. Furtado, Jason Charng, Maria Franchina, Janet M. Matthews, Aus A.L. Molan, Michael Hunter, David A. Mackey and Justine R. Smith will be published in the journal Ophthalmology Retina.
Treatment of ocular toxoplasmosis remains controversial. Some clinicians do not treat small peripheral retinal lesions, while others treat all patients in order to reduce recurrence and complication rates. Typically, toxoplasmic retinochoroiditis in immunocompetent patients is expected to resolve within 1 to 2 months.
Taking into account the benign natural course of the disease and the potential toxicity of the antiparasitic drugs, treating every individual with active infection would probably lead to unnecessarily high rates of drug-induced morbidity. Consequently, treatment is adjusted to each patient individually. The decision to commence treatment in cases of active retinochoroiditis is based on several parameters. Some of the most important are the following:
- Patients’ immune status
- Characteristics of the active lesion (i.e., location and size)
- Visual acuity
- Clinical course
- Grading of vitreous haze
- Macular edema
- Edema of the optic disk
- Vascular occlusion
- Possible adverse effects of available drugs
- Other parameters (newborns, pregnancy, allergies).
The treatment of ocular toxoplasmosis includes both antimicrobial drugs (Table 3) and corticosteroids (topical and oral) and is maintained for 4–6 weeks. The main target of antimicrobial treatment at the stage of active retinitis is to control the parasite's multiplication. Currently, the number of randomized controlled trials in the setting of toxoplasmic retinochoroiditis is limited. Another issue regarding the therapeutic approach to chronic infections is the fact that antiparasitic drugs may be ineffective against tissue cysts.
Available drug options for toxoplasmosis
| Medication | Adult dose | Pediatric dose |
|---|---|---|
| Pyrimethamine | Loading dose: 100 mg (1st day); treatment dose: 25 mg twice daily for 4–6 weeks | Infants: 1 mg/kg once daily for 1 year. Children: loading dose 2 mg/kg/day divided into 2 daily doses for 1–3 days (maximum 100 mg/day); treatment dose 1 mg/kg/day divided into 2 doses for 4 weeks (maximum 25 mg/day) |
| Folinic acid | 15 mg daily | 5 mg every 3 days |
| Trimethoprim-sulfamethoxazole | One tablet twice daily for 4–6 weeks | 6–12 mg TMP/kg/day in divided doses every 12 h |
| Sulfadiazine | 4 g daily, divided every 6 h | Congenital toxoplasmosis: newborns and children < 2 months, 100 mg/kg/day divided every 6 h; children > 2 months, 25–50 mg/kg/dose 4 times/day. Toxoplasmosis in children > 2 months: loading dose 75 mg/kg; treatment dose 120–150 mg/kg/day divided every 4–6 h (maximum 6 g/day) |
| Clindamycin | 150–450 mg/dose every 6–8 h (maximum 1.8 g/day); usually 300 mg every 6 h | 8–25 mg/kg/day in 3–4 divided doses |
| Azithromycin | Loading dose: 1 g (1st day); treatment dose: 500 mg once daily for 3 weeks | Children ≥ 6 months: 10 mg/kg on first day (maximum 500 mg/day), followed by 5 mg/kg/day once daily (maximum 250 mg/day) |
| Spiramycin | 2 g per day in two divided doses | 15 kg: 750 mg; 20 kg: 1 g; 30 kg: 1.5 g |
| Atovaquone | 750 mg every 6 h for 4–6 weeks | 40 mg/kg/day divided twice daily (maximum 1500 mg/day) |
| Tetracycline | Loading dose: 500 mg every 6 h (first day); treatment dose: 250 mg every 6 h for 4–6 weeks | Children > 8 years: 25–50 mg/kg/day in divided doses every 6 h |
| Minocycline | 100 mg every 12 h (not to exceed 400 mg/24 h) for 4–6 weeks | Children > 8 years: initial 4 mg/kg, followed by 2 mg/kg/dose every 12 h (oral, IV) |
Modified from: Bonfioli and Orefice and readjusted according to the protocols of the Department of Ophthalmology (Ocular Inflammation Service) of the University Hospital of Ioannina, Greece
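Several pediatric entries in the table follow the same pattern: a weight-based daily amount with an absolute daily cap. The snippet below illustrates that arithmetic only; it is not clinical guidance, and dosing decisions belong to the treating physician.

```python
def capped_daily_dose(mg_per_kg, weight_kg, max_mg_per_day):
    """Weight-based daily dose, limited by an absolute daily maximum."""
    return min(mg_per_kg * weight_kg, max_mg_per_day)

# Pyrimethamine pediatric loading dose from the table:
# 2 mg/kg/day, maximum 100 mg/day.
for weight in (20, 40, 60):
    print(f"{weight} kg -> {capped_daily_dose(2, weight, 100)} mg/day")
```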
The first choices include one of the following combination regimens: (1) pyrimethamine, sulfadiazine, folinic acid, and prednisone; (2) pyrimethamine, clindamycin, folinic acid, and prednisone; (3) pyrimethamine, sulfadiazine, clindamycin, folinic acid, and prednisone. Trimethoprim-sulfamethoxazole can also be a good alternative to sulfadiazine in first-choice combination regimens. Alternative combination regimens include: (1) trimethoprim-sulfamethoxazole and prednisone; (2) clindamycin, spiramycin, and prednisone; (3) clindamycin, sulfadiazine, and prednisone; (4) pyrimethamine, azithromycin, folinic acid, and prednisone; (5) pyrimethamine, atovaquone, folinic acid, and prednisone; (6) sulfadiazine, atovaquone, and prednisone; (7) tetracycline and prednisone; (8) minocycline and prednisone [93, 121]. The exact therapeutic drug regimens are summarized in Table 2.
The ‘classic therapy’ consists of pyrimethamine, sulfadiazine, and a systemic corticosteroid (most commonly prednisone). It was found that none of three therapies (classic therapy; clindamycin with sulfadiazine and oral steroid; trimethoprim with sulfamethoxazole and oral steroid) reduced the duration of posterior pole retinitis compared to control subjects with peripheral lesions who received no treatment. Additionally, treatment did not affect the rates of recurrence. However, it was shown that the classic regimen was more effective than the other treatments or no treatment in reducing the size of the lesion(s). The same study reported that the classic treatment may be more suitable for foveal or juxtafoveal lesions.
The use of pyrimethamine and sulfadiazine for treating ocular toxoplasmosis was introduced in the 1950s. The possibility of medication-related adverse events (including gastrointestinal and dermatological side effects, leukopenia, and thrombocytopenia) should always be taken into account. Therefore, blood testing should be carried out every week throughout treatment, and folinic acid must also be prescribed. Sulfadiazine is a sulfonamide antimicrobial that can cause hypersensitivity reactions, such as skin rashes.
Trimethoprim-sulfamethoxazole is characterized by good tolerability, wide availability, and low cost, although sulfonamide-related reactions may occur. Trimethoprim-sulfamethoxazole with prednisone was found to be relatively well tolerated, but only as effective as the classic therapy in reducing lesion size. Similarly, a related study reported comparable outcomes between trimethoprim-sulfamethoxazole with prednisolone and the classic therapy in two randomized groups. The role of trimethoprim-sulfamethoxazole in preventing recurrences of toxoplasmic retinochoroiditis calls for further investigation.
Clindamycin can be added to the triple regimen, converting it to ‘quadruple therapy’, which has been found to improve vision and/or intraocular inflammatory markers. On the contrary, Rothova et al. reported a smaller reduction in lesion size in those treated with clindamycin, sulfadiazine, and corticosteroid compared to the classic therapy. Pseudomembranous colitis can be caused by clindamycin, and diarrhea constitutes an indication for cessation of the drug. The intravitreal use of clindamycin and dexamethasone has also been assessed in recent studies [126–129]. A substantially larger reduction in lesion size was found in T. gondii IgM-positive patients treated with the classic treatment compared with those who received intravitreal treatment. Local treatment seems to be suitable for individuals with recurrent infection, given the concerns regarding systemic drug toxicities; on the other hand, this approach would not be recommended in patients with immunodeficiency (e.g., HIV patients) due to the risk of fulminant disease. Intravitreal treatment (1 mg clindamycin with or without 400 μg dexamethasone) may also be necessary, as an adjunct to systemic therapy, in cases with foveal involvement or active lesion(s) within zone 1 [129, 131].
Two other antiparasitic drugs, atovaquone and azithromycin [93, 122], showed promising results in experimental studies but have not shown favorable outcomes in preventing recurrences of retinochoroiditis in humans.
A comparison of the efficacy of classic therapy and pyrimethamine plus azithromycin showed no difference between the two groups, but the adverse events in those treated with azithromycin were less frequent and less severe.
Although their benefit has not been completely delineated, systemic steroids can be added to the therapeutic regimen against toxoplasmic retinochoroiditis. However, the doses prescribed and the timing of administration may differ widely among uveitis specialists. Corticosteroids are usually initiated 3 days after the start of antibiotic therapy and must be discontinued at least 10 days before the antimicrobial drugs are stopped. If given without antimicrobials (e.g., in cases of initial misdiagnosis or atypical presentation), systemic steroids can lead to legal blindness in most patients. Systemic corticosteroids are usually avoided in immunocompromised patients; this category of patients is instead kept on maintenance antimicrobial therapy (e.g., trimethoprim-sulfamethoxazole) while immunocompromised. Periocular corticosteroid injections are generally unpopular, as their administration has been correlated with detrimental results, especially in patients who have not received antiparasitic therapy. Intravitreal administration of relatively short-acting dexamethasone has been successfully combined with clindamycin. Intravitreal injection of triamcinolone acetonide, which is longer acting, has not been widely practiced, and therefore there is no standard consensus on this approach.
Steroid eye drops are widely prescribed for controlling anterior uveitis. Their frequency depends on the severity of inflammatory activity in the anterior segment. Apart from topical steroids, mydriatics and hypotensive agents are also added when required. Mydriatics are important for the prevention of posterior synechiae (or for breaking them if they have already developed) and for pain relief.
Immunocompromised patients are treated with the antimicrobial regimens described above for 6 or more weeks. After complete resolution of the lesions, the patient starts on secondary prophylaxis with sulfadiazine, pyrimethamine and folinic acid, or clindamycin, pyrimethamine and folinic acid. In asymptomatic individuals with a CD4 count above 200 cells/μL for six months or more, prophylaxis for toxoplasmosis can be stopped, but patients must be followed up to detect signs of recurrence. In HIV patients with toxoplasmic retinochoroiditis, neuroimaging is crucial to rule out central nervous system (CNS) toxoplasmosis lesions. Treatment includes ongoing suppressive therapy with pyrimethamine and sulfadiazine.
In pregnancy, the risk of adverse effects from antiparasitic drugs is highest during the first trimester. Consequently, a multidisciplinary assessment between the ophthalmologist, the obstetrician and an infectious disease physician is vital in cases where an intervention is required.
A serological investigation is necessary in women with toxoplasmic chorioretinitis during pregnancy to define when the infection was acquired. Reactivation of a latent infection (acquired before gestation) leading to toxoplasmic chorioretinitis does not present a higher risk of transmission of T. gondii to the offspring than that of pregnant women who acquired the infection before gestation but show no signs of active ocular toxoplasmosis.
When toxoplasmic retinochoroiditis is attributed to a recently acquired infection, treatment must be administered not only to treat the ophthalmic disease but also to reduce the risk of transmission to the fetus. During pregnancy, the therapeutic regimens are: (1) first trimester: spiramycin and sulfadiazine; (2) second trimester (> 14 weeks): spiramycin, sulfadiazine, pyrimethamine, and folinic acid; (3) third trimester: spiramycin, pyrimethamine and folinic acid. Medications are given in lower doses for three weeks and can be repeated, if required, after 21 days.
Moreover, treating the mother lessens the possibility of congenital transmission. Classic therapy is contraindicated, as pyrimethamine is considered to be teratogenic and sulfadiazine can cause bilirubin encephalopathy. Clindamycin and azithromycin, or clindamycin and atovaquone (± systemic corticosteroid), are discussed as alternatives. Recurrences of toxoplasmic retinochoroiditis pose minimal risk to the embryo; thus, preventing vertical transmission alone is not an indication for treatment.
When a toxoplasmic infection occurs during or immediately before pregnancy, the risk of transmission to the fetus and of congenital toxoplasmosis is significantly higher. This situation requires coordinated management together with a perinatologist; for a more detailed approach, the reader is referred to the study of Montoya and Remington.
The severity of toxoplasmic retinochoroiditis is multifactorial and varies widely across geographical areas. Given the increased risk of detrimental intraocular complications, and in the absence of large controlled studies, changes to the standard therapy for this clinical entity are not justified.
Two surveys of the American Uveitis Society (AUS), in 1991 and 2001, highlighted a substantial shift in favor of treating both mild and severe disease. Atypical presentation and immunocompromised status are considered indications for commencing treatment. Well-designed, large interventional studies are required to shed more light on the therapeutic approach to ocular toxoplasmosis.
Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8351587/
More information: Lisia B. Ferreira et al, Prevalence of Toxoplasmic Retinochoroiditis in an Australian Adult Population: a Community-Based Study, Ophthalmology Retina (2022). DOI: 10.1016/j.oret.2022.04.022
|
<urn:uuid:98cf325d-7e3c-47d1-aa66-a69ae704db94>
|
CC-MAIN-2022-40
|
https://debuglies.com/2022/05/12/one-in-150-australians-have-retinal-scars-caused-by-the-toxoplasma-parasite/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00648.warc.gz
|
en
| 0.896204 | 4,181 | 3.53125 | 4 |
The security industry and trade press have directed a lot of attention toward the “Zero-day attack,” promoting it as THE threat to guard against. According to the marketing hype, the Zero-Day attack is the one that you should most fear, so you must put in place measures (i.e., buy stuff) to defend your organization from it.
The Zero-Day threat is born the moment a vulnerability is publicly announced or acknowledged. But what about the period of time during which the threat existed before being announced? At StillSecure we call this class the "Less-Than-Zero" threat. In this two-part series I'll examine the Less-Than-Zero threat, compare it to the Zero-Day threat, and discuss ways to protect yourself from Less-Than-Zero attacks and vulnerabilities for which patches, signatures, etc. do not yet exist.
Zero-Day vs. Less-Than-Zero
Once a vulnerability is publicly announced, the zero-day clock starts ticking. The announcement is typically followed by some period of time before a patch is made available. This is the Zero-Day period. According to accepted wisdom, organizations face the greatest danger when an attack or exploit targeting the vulnerability is verified in the “wild.”
Some believe this is a flawed argument. As evidence, they point to "underground" vulnerabilities and exploits that are equally dangerous and much more difficult to detect and protect against because they are "unknown." At StillSecure we call this class the Less-Than-Zero threat. The relationship between the Less-Than-Zero threat and the Zero-Day threat, and the level of risk each poses to the organization, also depends on such factors as responsible disclosure and patch deployment.
Typically, Less-Than-Zero threats have a different genesis than Zero-Day threats. Most Zero-Day threats are discovered through the standard bug-testing process, and the vulnerability is known before an exploit for it is seen in the wild. Less-Than-Zero vulnerabilities, on the other hand, are first detected through evidence of attacks that have already exploited them.
Where many Zero-Day vulnerabilities are discovered by White Hats, most Less-Than-Zero attacks are true Black Hat attacks. It is, however, possible for an underground threat to evolve into a Zero-Day attack; this is a natural evolution of Less-Than-Zero vulnerabilities and threats. Often a Less-Than-Zero attack becomes widely known and a patch is issued, at which point it becomes a Zero-Day type of attack.
Hopefully you see my point: just because the Less-Than-Zero threat doesn't get a lot of media attention doesn't mean it isn't a real danger, and truly security-conscious organizations will take steps to protect themselves from it.
In Part 2 of this series we'll look at each stage of a threat and determine what defenses are applicable and what can be done to shorten and reduce the time of highest risk.
|
<urn:uuid:364b4e26-b044-4b49-b37a-56e9807eb34e>
|
CC-MAIN-2022-40
|
https://it-observer.com/less-than-zero-threat-part-1.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00648.warc.gz
|
en
| 0.958944 | 626 | 2.5625 | 3 |
Enterprise resource planning (ERP) is a system of integrated software applications that manages day-to-day business processes and operations across finance, human resources, procurement, distribution, supply chain, and other functions. ERP systems are critical applications for most organizations because they integrate all the processes necessary to run their business into a single system that also facilitates resource planning. ERP systems typically run on an integrated software platform using common data definitions and a single database.
ERPs were originally designed for manufacturing companies but have since expanded to serve nearly every industry, each of which can have its own ERP peculiarities and offerings. For example, government ERP uses contract lifecycle management (CLM) rather than traditional purchasing and follows government accounting rules rather than GAAP.
Benefits of ERP
ERP systems improve enterprise operations in a number of ways. By integrating financial information in a single system, ERP systems unify an organization’s financial reporting. They also integrate order management, making order taking, manufacturing, inventory, accounting, and distribution a much simpler, less error-prone process. Most ERPs also include customer relationship management (CRM) tools to track customer interactions, thereby providing deeper insights about customer behavior and needs. They can also standardize and automate manufacturing and supporting processes, and unify procurement across an organization’s business units. ERP systems can also provide a standardized HR platform for time reporting, expense tracking, training, and skills matching, and greatly enhance an organization’s ability to file the necessary compliance reporting across finance, HR, and the supply chain.
Key features of ERP systems
The scale, scope, and functionality of ERP systems vary widely, but most ERP systems offer the following characteristics:
- Enterprise-wide integration. Business processes are integrated end to end across departments and business units. For example, a new order automatically initiates a credit check, queries product availability, and updates the distribution schedule. Once the order is shipped, the invoice is sent.
- Real-time (or near real-time) operations. Because the processes in the example above occur within a few seconds of order receipt, problems are identified quickly, giving the seller more time to correct the situation.
- A common database. A common database enables data to be defined once for the enterprise with every department using the same definition. Some ERP systems split the physical database to improve performance.
- Consistent look and feel. ERP systems provide a consistent user interface, thereby reducing training costs. When other software is acquired by an ERP vendor, common look and feel is sometimes abandoned in favor of speed to market. As new releases enter the market, most ERP vendors restore the consistent user interface.
Types of ERP solutions
ERP systems are categorized in tiers based on the size and complexity of enterprises served:
- Tier I ERPs support large global enterprises, handling all internationalization issues, including currency, language, alphabet, postal code, accounting rules, etc. Tier I vendors include Oracle, SAP, Microsoft, and Infor.
- Tier I Government ERPs support large, mostly federal, government agencies. Oracle, SAP, and CompuServe PRISM are considered Tier I with Infor and CGI Momentum close behind.
- Tier II ERPs support large enterprises that may operate in multiple countries but lack global reach. Tier II customers can be standalone entities or business units of large global enterprises. Depending on how vendors are categorized there are 25 to 45 vendors in this tier.
- Tier II Government ERPs focus on state and local governments with some federal installations. Tyler Technologies and UNIT4 fall in this category.
- Tier III ERPs support midtier enterprises, handling a handful of languages and currencies but only a single alphabet. Depending on how ERPs are categorized, there are 75 to 100 Tier III ERP solutions.
- Tier IV ERPs are designed for small enterprises and often focus on accounting.
The top ERP vendors today include Oracle, SAP, Microsoft, Infor, Epicor, NetSuite, Acumatica, and Deltek.
Selecting an ERP solution
Choosing an ERP system is among the most challenging decisions IT leaders face. In addition to the above tier criteria, there is a wide range of features and capabilities to consider. With any industry, it is important to pick an ERP vendor with industry experience. Educating a vendor about the nuances of a new industry is very time consuming.
To help you get a sense of the kinds of decisions that go into choosing an ERP system, check out “The best ERP systems: 10 enterprise resource planning tools compared,” with evaluations and user reviews of Acumatica Cloud ERP, Deltek ERP, Epicor ERP, Infor ERP, Microsoft Dynamics ERP, NetSuite ERP, Oracle E-Business Suite, Oracle JD Edwards EnterpriseOne ERP, Oracle Peoplesoft Financial Management and SAP ERP Solutions.
Most successful ERP implementations are led by an executive sponsor who champions the business case, gets approval to proceed, monitors progress, chairs the steering committee, removes roadblocks, and captures the benefits. The CIO works closely with the executive sponsor to ensure adequate attention is paid to integration with existing systems, data migration, and infrastructure upgrades. The CIO also advises the executive sponsor on challenges and helps the executive sponsor select a firm specializing in ERP implementations.
The executive sponsor should also be advised by an organizational change management executive, as ERP implementations result in new business processes, roles, user interfaces, and job responsibilities. Reporting to the program’s executive team should be a business project manager and an IT project manager. If the enterprise has engaged an ERP integration firm, its project managers should be part of the core program management team.
Most ERP practitioners structure their ERP implementation as follows:
- Gain approval: The executive sponsor oversees the creation of any documentation required for approval. This document, usually called a business case, typically includes a description of the program’s objectives and scope, implementation costs and schedule, development and operational risks, and projected benefits. The executive sponsor then presents the business case to the appropriate executives for formal approval.
- Plan the program: The timeline is now refined into a work plan, which should include finalizing team members, selecting any external partners (implementation specialists, organizational change management specialists, technical specialists), finalizing contracts, planning infrastructure upgrades, and documenting tasks, dependencies, resources, and timing with as much specificity as possible.
- Configure software: This largest, most difficult phase includes analyzing gaps in current business processes and supporting applications, configuring parameters in the ERP software to reflect new business processes, completing any necessary customization, migrating data using standardized data definitions, performing system tests, and providing all functional and technical documentation.
- Deploy the system: Prior to the final cutover, multiple activities have to be completed, including training of staff on the system, planning support to answer questions and resolve problems after the ERP is operational, testing the system, making the “Go live” decision in conjunction with the executive sponsor.
- Stabilize the system: Following deployment, most organizations experience a dip in business performance as staff learn new roles, tools, business processes, and metrics. In addition, poorly cleansed data and infrastructure bottlenecks will cause disruption. All impose a workload bubble on the ERP deployment and support team.
Four factors are commonly underestimated during project planning:
- Business process change. Once teams see the results of their improvements, most feel empowered and seek additional improvements. Success breeds success often consuming more time than originally budgeted.
- Organizational change management. Change creates uncertainty at all organization levels. With many executives unfamiliar with the nuances of organization change management, the effort is easily underestimated.
- Data migration. Enterprises often have overlapping databases and weak editing rules. The tighter editing required with an ERP system increases data migration time. This required time is easy to underestimate, particularly if all data sources cannot be identified.
- Custom code. Customization increases implementation cost significantly and should be avoided. It also voids the warranty, and problems reported to the vendor must be reproduced on unmodified software. It also makes upgrades difficult. Finally, most enterprises underestimate the cost of customizing their systems.
Why ERP projects fail
ERP projects fail for many of the same reasons that other projects fail, including ineffective executive sponsors, poorly defined program goals, weak project management, inadequate resources, and poor data cleanup. But there are several causes of failure that are closely tied to ERPs:
- Inappropriate package selection. Many enterprises believe a Tier I ERP is by definition “best” for every enterprise. In reality, only very large, global enterprises will ever use more than a small percentage of their functionality. Enterprises that are not complex enough to justify Tier I may find implementation delayed by feature overload. Conversely, large global enterprises may find that Tier II or Tier III ERPs lack sufficient features for complex, global operations.
- Internal resistance. While any new program can generate resistance, this is more common with ERPs. Remote business units frequently view the standardization imposed by an ERP as an effort by headquarters to increase control over the field. Even with an active change management campaign, it is not uncommon to find people in the field slowing implementation as much as possible. Even groups who support the ERP can become disenchanted if the implementation team provides poor support. Disenchanted supporters can become vicious critics when they feel they have been taken for granted and not offered appropriate support.
Cloud ERP
Over the past few years, ERP vendors have created new systems designed specifically for the cloud, while longtime ERP vendors have created cloud versions of their software. There are a number of reasons to move to cloud ERP, which falls into two major types:
- ERP as a service. With these ERPs, all customers operate on the same code base and have no access to the source code. Users can configure but not customize the code.
- ERP in an IaaS cloud. Enterprises that rely on custom code in their ERP cannot use ERP as a service. If they wish to operate in the cloud, the only option is to move to an IaaS provider, which shifts their servers to a different location.
For most enterprises, ERP as a service offers three advantages: The initial cost is lower, upgrades to new releases are easier, and reluctant executives cannot pressure the organization to write custom code for their organization. Still, migrating to a cloud ERP can be tricky and requires a somewhat different approach than implementing on on-premises solution. See “13 secrets of a successful cloud ERP migration.”
|
<urn:uuid:e933411a-fdee-43d1-b78e-d970c4d2873b>
|
CC-MAIN-2022-40
|
https://www.cio.com/article/272362/what-is-erp-key-features-of-top-enterprise-resource-planning-systems.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00648.warc.gz
|
en
| 0.922179 | 2,241 | 2.5625 | 3 |
A lot of people have probably heard by now about the earthquake that hit Taiwan during the holiday break. While the natural disaster caused a lot of discomfort to the Taiwanese community, the world has suffered as well in terms of Internet connection speed.
Taiwan is one of the gateways through which Internet traffic passes. Think of a hose carrying water from one point to another: a single hole in it will lower the pressure of the water that should be traveling through. In the same way, the undersea cables that help transmit Internet connections from one point to another were damaged, and as a result many people are now experiencing lousy connection speeds. This has thoroughly disrupted the flow of operations, becoming a discomfort that has left surfers and professionals helpless.
Natural disasters are hard to predict. The best that technology personnel can do is develop more durable cables, but even that is no guarantee against disruptions like the one we are experiencing today. Among the millions harmed by this untimely event, technology-based companies and organizations are surely suffering the most from the gaping hole in the connectivity the web provides.
[tags]internet, gateway, connection, fiber optic, cabling[/tags]
|
<urn:uuid:68c661e5-ae4a-4db7-933f-63abbdd30c85>
|
CC-MAIN-2022-40
|
https://www.it-security-blog.com/tag/gateway/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00648.warc.gz
|
en
| 0.971743 | 246 | 2.640625 | 3 |
When future historians look back at the early 21st century, one of the hallmarks of this era will be the pervasive extent of network connectivity. Today there are more than 4 billion Internet users, exceeding 50% of the world’s population. Network access is nearly ubiquitous throughout the developed world via a combination of fixed broadband, cellular data, WiFi and satellite coverage in remote areas.
Despite this impressive achievement, we’re just getting started. IoT and 5G promise to revamp the mobile networking landscape but also present service providers with operational challenges driven by sheer scale.
Explosive Growth in Connected Devices
The Ericsson Mobility Report forecasts the number of smartphone-based mobile broadband subscriptions to reach almost 8.3 billion by 2023. Gartner forecasts that the number of IoT devices will top 11 billion this year, and Statista projects that number to exceed 75 billion in 2025. It's no stretch to say that we're looking at an order-of-magnitude increase in the total number of connected devices over the next ten years.
5G to the Rescue
Of course, mobile operators are aware of the looming IoT tsunami and the race has already begun to build out 5G networks in anticipation of the expected demand. 5G technology is multi-faceted and conceived to address the fundamental problem of scaling wireless networks in several dimensions.
First, there is the need to accommodate the vast number of devices. Network densification efforts will result in a huge increase in the number of cells. The Small Cell Forum forecasts the installed base of small cells to reach 70 million worldwide in 2025, which is roughly ten times the current number of cell sites.
Second, 5G incorporates narrowband technology for connecting IoT sensors distributed across broad geographic areas using battery-efficient, low power radio signals. This is a critical requirement for instrumenting the physical world to monitor weather, the environment, agriculture production, and transportation networks, etc.
Third, mobile operators envision that 5G networks utilizing millimeter wave technology will enable a new generation of high bandwidth, low latency applications at multi-gigabit speeds. Fixed wireless access at 28 GHz promises to alleviate the cost burden associated with fiber or cable last mile connections. Speculation about other 5G use cases ranges widely from smart cities to self-driving cars to virtual/augmented reality.
Pushing the Cloud to the Edge
Extreme connectivity and capacity at the edge of networks will drive the adoption of edge computing to distribute processing, memory and storage resources closer to devices and the sources of data. Downscaled cloud-native infrastructure deployed near the edge will offload centralized hyperscale data centers and conserve bandwidth by distributing workloads, which will improve application latency and performance.
Sounds amazing. Too good to be true. There must be a catch, right? Of course!
Scaling Up Involves Heavy Lifting
Network service providers are facing a lot of heavy lifting in order to scale their networks up and out by an order of magnitude in terms of capacity, the number of connected devices and vast numbers of networks. The operational challenges are daunting.
Fortunately, network operators are already moving to software-driven infrastructure based on SDN and NFV, which will reduce CAPEX and OPEX while increasing service agility. By adopting a software-centric approach based on DevOps practices and a continuous integration / continuous delivery model, operators will be better positioned to scale and to respond rapidly to changing network conditions and market needs.
However, SDN/NFV and DevOps are just “jacks for openers.” Service providers are already excessively burdened with the operational costs of their existing networks. Scaling up and out will only increase that burden. Throwing more people at the problem is a non-starter because it is simply too costly.
Scaling Depends on Automation
The obvious solution is automation, enabling operators to streamline workflows and automate the majority of routine operations. Operators are starting to deploy a new generation of software tools for automated service provisioning and network configuration. Open source communities are hard at work developing the various components that comprise sophisticated lifecycle service orchestrations solutions. A leading example is the Linux Foundation’s Open Network Automation Platform (ONAP).
Yet key pieces of the puzzle are still missing. How to derive the real-time, actionable intelligence needed to drive automated service orchestration?
Automation Depends on Machine Learning and AI
A key benefit of software-driven infrastructure is that it can be easily instrumented to generate streaming telemetry data that can be fed into Big Data analytics engines. However, given the constant flood of a diverse array of telemetry data collected from many different sources across the network, operators need to move beyond dashboards and visualizations as the primary output of monitoring and analytics tools. Alarm fatigue, which is all too common in existing networks, is only going to get worse.
Machine-based intelligence that offloads human operators is critical to achieving the degree of automation needed to significantly reduce operating expenses. Machines will have the ability to detect gray failures and subtle anomalies, and to keep pace with constantly changing network conditions.
Machine intelligence encompasses advanced machine learning algorithms that can rapidly correlate information from multiple data sets in order to extract real-time insights for driving network and service orchestration. Cognitive AI techniques that emulate the decision-making processes of the human mind need to be applied for automating operator workflows.
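As a rough illustration of the kind of machine-based triage involved, the sketch below trains an unsupervised anomaly detector on synthetic link telemetry and flags a degraded reading. The feature set, numeric ranges, and the choice of scikit-learn's IsolationForest are all illustrative assumptions, not a reference design for any particular orchestration platform.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Synthetic "healthy" telemetry: latency_ms, packet_loss_pct, throughput_mbps
normal = rng.normal(loc=[20.0, 0.1, 900.0], scale=[3.0, 0.05, 50.0], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=7).fit(normal)

degraded = np.array([[180.0, 4.2, 120.0]])  # a reading from a failing link
print(detector.predict(degraded))  # [-1] marks the sample as anomalous

In production the model would be retrained continuously on streaming data, and its verdicts would feed closed-loop orchestration rather than a print statement.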
Filling in the Missing Puzzle Pieces
We're not there yet, but a lot of smart people are hard at work filling in the last few pieces of the 5G scalability puzzle. Both service providers and vendors are making progress applying machine learning and AI to many aspects of network and service operations. Yet much work remains to be done before operators will be able to support the delivery of a new generation of IoT and 5G applications at vast scale.
With the race to 5G well underway, it is imperative that service providers ramp up fast on machine learning and AI to establish partnerships with the leading vendors and integrators who have proven expertise in applying these new technologies in large-scale operational environments.
Image attribution: bigstockphotos.com
|
<urn:uuid:5bf5db28-057a-44e9-9077-382f3c06739b>
|
CC-MAIN-2022-40
|
https://www.guavus.com/ai-the-missing-piece-to-the-iot-and-5g-scalability-puzzle/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00648.warc.gz
|
en
| 0.913626 | 1,255 | 2.5625 | 3 |
Ars Technica, a leading technology publication based in San Francisco, conducted an experiment in which hackers attempted to crack a list of 16,449 hashed passwords. The striking result: the hackers cracked 14,800 of those passwords in less than an hour, recovering everything from short six-character passwords to 16-character ones with ease.
How did they do it?
This may be the first time such a methodical account of password cracking has been published openly, in a post on the Ars Technica site.
The hackers described their entire cracking process there. To crack the passwords, they drew on a list of online hash-cracking databases.
As the experts explain, hashing takes a user's plain-text password and passes it through a one-way mathematical function, generating a unique series of characters known as a hash. There are many kinds of hashes; common types include MD5 and SHA1, along with salted variants of each. There are also many sites where a hash can be cracked back to plain text.
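For illustration only, here is how a plain-text password maps to MD5 and SHA1 digests using Python's standard hashlib module (the input string is an arbitrary example):

>>> import hashlib
>>> hashlib.md5('password'.encode()).hexdigest()
'5f4dcc3b5aa765d61d8327deb882cf99'
>>> hashlib.sha1('password'.encode()).hexdigest()
'5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8'

Because the same input always produces the same digest, unsalted hashes like these can be reversed with precomputed lookup tables, which is exactly what online hash-cracking databases exploit.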
Some of the Passwords!
Some of the passwords cracked by the hackers would ordinarily be considered strong. After this demonstration, we can no longer call them that.

Some of them are:

k1araj0hns0n, Sh1a-labe0uf, Apr!l221973, Qb3sanc0n321, [email protected]%, wind3rmer32o32, tmdmmj17, BrandGeek2o14, Philippians4;13, Philippians4:6-7
|
<urn:uuid:e69c2428-02eb-4a9b-a426-c1338d71332b>
|
CC-MAIN-2022-40
|
https://www.cyberkendra.com/2013/05/easy-to-crack-strong-password.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00648.warc.gz
|
en
| 0.953894 | 374 | 2.71875 | 3 |
When developing a mobile app or testing for mobile compatibility, it can be difficult, time-consuming, and expensive to build a collection of all the mobile devices your testing team needs to test its software. After all, there are tons of mobile devices on the market today — the iPhone, Android phones, Windows phones, Google phones — and that doesn’t even account for different operating system versions! Though it can be tempting to skip all that with mobile emulators, we strongly advise against it. Here are several reasons mobile emulators are not a reliable option for use in mobile software testing.
Mobile Emulators Reduce Data Accuracy
For the greatest accuracy in software testing, make sure the testing environment closely matches the production environment. When you use an emulator instead of an actual device to test your mobile app, you are not working with real data.
An emulator is not a mobile device; it is a program you run on a computer to mimic a mobile device. However, emulators do not have all the capabilities of a real phone. For example: if your app uses the phone’s accelerometer, where is your emulator getting that information? The computer on which you’re running the emulator doesn’t have an accelerometer, but the emulator has to get that information from somewhere to properly test your app. To accommodate, the emulator will run the software (or a relevant part of its own software) to create a pretend environment and then feed the data from that environment into the emulator.
Essentially, you determine if your software works properly using fake environmental data run through a fake phone, then assuming the (fake) data will be accurate. Of course, you could test on an actual mobile device and get real data on a real device.
Mobile Emulators Add Extra Layers of Error
Software is complex and interacts with other software in equally complex ways. That’s why regression testing is so important; make one change to your software’s code (or reference separate, updated software) and you end up with a mess that was once a flawlessly functioning app. When you use an emulator, you add layers of complexity which increase your chance of conflicting bits of code.
It's easy to see how this snowballs in action. First, you have your mobile app. That's one layer of software with its own quirks. Then, let's say, your app is meant to reference the weather in your user's locality. Your app then needs to work with information from another source, like the weather widget on your user's phone. That's a second layer, increasing the chances of a bug. Then, thirdly, you add the emulator, which mimics both the phone itself and the weather data the "phone" is accessing. That's four layers of potential error! If even one of those layers malfunctions during testing (especially if the malfunctioning software is the emulator itself), you end up trying to work out a bug that might not even be there with a real phone.
Always Test on Mobile Devices Before Release
Ultimately, the user experience from an emulated mobile device does not compare to the actual end-user experience. Using an emulator to test your mobile app or compatibility requires guesses and assumptions, not on real performance data.
Some situations, however, require emulators. Maybe an in-house testing team doesn’t have the proper devices. Maybe the program requires environments you can’t artificially recreate for extended periods. Whatever the case, it’s important to ensure that the test environment is as close to the actual user experience as possible.
Today’s takeaway: always test mobile software on real devices prior to release. After all, if you only test on emulators, you only gain emulated data. And, if you release your mobile app to the App Store built on assumptions, your customers may find the problem before you do.
|
<urn:uuid:a6561241-b5f4-499b-91cd-b918c83fa2aa>
|
CC-MAIN-2022-40
|
https://www.ibeta.com/why-mobile-emulators-arent-great-for-mobile-testing/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00648.warc.gz
|
en
| 0.910362 | 812 | 2.734375 | 3 |
As we all know, 4G and LTE are designed to improve capacity, user data-rates, spectrum usage, and latency.
5G represents more than just an evolution of mobile broadband.
It will be a key enabler of the future digital world and the next generation of ubiquitous ultra-high broadband infrastructure that will support the transformation of processes in all economic sectors. It will also represent a step-change in being able to meet the growing scale and complexity of consumer market demands.
While 5G is still in its evolutionary stage, its development will clearly be influenced by a need to support three specific use-cases, generally enumerated as enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC), all of which will have an impact on emerging fields like autonomous vehicles, telemedicine and the Internet of Things.
Bringing these to life will require adaptation on both the radio and network side. For example, services may be centralized and in some cases distributed. This will depend both on the service function itself (some service functions are naturally centralized or distributed) and, from a use-case point of view, on the access technology and the type of performance required.
Having a technology that can apply functions independently of the underlying protocol gives service providers the flexibility to implement services almost anywhere in the network.
Mobile Edge Computing (MEC), which enables the edge of the network to run in an environment isolated from the rest of the network and creates access to local resources and data, is likely to have an impact here. Indeed, Research and Markets has identified it as an $80 billion market opportunity by 2021.
The optimization and acceleration of transport protocols will become even more important for networks that require low latency and the ability to reach high performance in a short amount of time. In this case, it is recommended to have a TCP optimization function running at different points of the network and, in particular, as close as possible to the end-user in terms of RTT/latency. This enables faster reaction to changes in network conditions, as well as to service and application requests.
Delving further into the detail, TCP optimization could become hierarchical and distributed, with different proxies talking to each other and creating "reliable" point-to-point intermediate connections. The purpose here is to enable faster retransmission in case of any network drop, irrespective of cause (congestion, IP traffic rerouting, temporary loss of connection on a radio or fixed link, etc.).
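To make the idea of reliable point-to-point intermediate connections concrete, the minimal Python sketch below relays traffic through an intermediate node that terminates the client's TCP connection and opens a separate upstream connection, so each leg retransmits its own losses locally. The listen port and upstream endpoint are hypothetical, and this toy relay stands in for, rather than reproduces, a commercial TCP optimization function.

import socket
import threading

def pipe(src, dst):
    # Copy bytes in one direction until the source side closes
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def relay(listen_port, upstream_host, upstream_port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        # Two independent TCP legs: a drop on either side is repaired
        # by that leg alone instead of triggering end-to-end recovery
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

relay(8080, "upstream.example.net", 80)  # hypothetical endpoints

Chaining several such relays yields the hierarchical, distributed arrangement described above, with each hop handling its own retransmissions.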
Another important element to consider is the deployability of policy enforcement and traffic steering functions in different parts of 5G networks. From an architectural perspective, the same concepts and capabilities as for TCP optimization apply here; in other words, the functions can be distributed to any point of the network and applied to any kind of traffic. This can include traffic steering, manipulating video, or working as a gateway function for IoT-based services, all of which can be orchestrated by F5 technologies, removing and re-adding existing tunneling protocols as needed.
F5 is starting to stand out from the crowd in this space due to its capability to manage, analyze and manipulate traffic from Layer 4 up to Layer 7, injecting, removing or changing content. This includes both application-layer traffic (such as HTTP, SSL, etc.) and network protocols (like GPRS Tunneling Protocol-encapsulated traffic for mobile network transport). By running a Virtual Network Function (VNF), it becomes possible to achieve high levels of distribution and, ultimately, the ability to better monetize, secure and optimize service providers' networks.
|
<urn:uuid:5d0de215-abd9-4240-94b7-1849c736106c>
|
CC-MAIN-2022-40
|
https://www.f5.com/company/blog/the-role-of-optimization-in-5g-networks
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00648.warc.gz
|
en
| 0.92835 | 716 | 2.65625 | 3 |
Carrots are about 10% carbs, consisting of starch, fiber, and simple sugars.
They are extremely low in fat and protein.
An excellent source of vitamin A in the form of beta carotene.
They are also a good source of several B vitamins, as well as vitamin K and potassium.
A great source of many plant compounds, especially carotenoids, such as beta carotene and lutein.
Eating carrots is linked to a reduced risk of cancer and heart disease, as well as improved eye health.
The vegetable may be a valuable component of an effective weight loss diet.
It may cause reactions in people allergic to pollen.
Additionally, carrots grown in contaminated soils may contain higher amounts of heavy metals, affecting their safety and quality.
|
<urn:uuid:d8f506db-8c90-4c35-ba54-f9913024102e>
|
CC-MAIN-2022-40
|
https://areflect.com/2019/09/21/todays-health-tip-benefits-of-carrots-4/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00648.warc.gz
|
en
| 0.968427 | 159 | 3.296875 | 3 |
Cybersecurity breaches and threats are a significant concern for businesses all around the world. Cybersecurity is the technology, technique, and practice concerned with safeguarding electronic data and the systems that support it from compromise and attacks.
As we have all become more and more reliant on computers, networks, and data storage systems, we risk losing or compromising our sensitive information, documentation and data to illegal cyber elements. The demand for experts with knowledge and experience in security implementation and management has never been higher, and it is constantly increasing.
One such in-demand infosec professional is the Certified Information Systems Security Professional (CISSP) expert. This article will provide you with the details on how to become a CISSP expert.
What is a CISSP Expert?
CISSP experts strengthen cutting-edge information security systems by protecting data from unauthorized access and infringement. Organizations entrust CISSP experts with defining, designing, managing and controlling their security architecture. CISSP experts with a lot of experience are frequently regarded as the best professionals for protecting sensitive data in an organization.
How to Become a CISSP Expert?
In terms of knowledge and skills, CISSP experts are at the top of the cybersecurity game. To become a CISSP expert, one must have a solid understanding of information systems, networks, and cybersecurity trends. A graduate degree in computer science, information technology, or a related field is desirable to be a CISSP expert. You can follow the below-mentioned steps to become a CISSP expert.
1. Understand the basics of cybersecurity: To become a CISSP expert, you must first become familiar with the current cybersecurity landscape and understand key tools for evaluating and managing security protocols in information processing systems. Gain an understanding of cybersecurity fundamentals, threat actor attacks, mitigation, security policies, secure architecture, wireless networks, network security controls, security testing, and more.
2. Gain the required experience: CISSP experts are not entry-level professionals. A CISSP expert must have at least five years of paid work experience in two or more of the CISSP CBK’s eight domains. The eight domains of CISSP certification are Security and Risk Management, Asset Security, Security Architecture and Engineering, Communication and Network Security, Identity and Access Management (IAM), Security Assessment and Testing, Security Operations, and Software Development Security.
Additionally, anyone who does not have the experience needed to become a CISSP expert but passes the CISSP exam can become an Associate of (ISC)2. The Associate of (ISC)2 will then have six years to earn the five years of required experience.
3. Get entry-level cyber security certifications: If you don’t have enough relevant job experience or a firm knowledge of cybersecurity concepts to become a CISSP expert, CompTIA offers entry-level A+, Network+, and Security+ certifications. With that foundation in place, you can apply for a security-related job and gain some much-needed IT experience.
Consider seeking the (ISC)2 Systems Security Certified Professional (SSCP) certificate if you’ve been working in IT security for a year or two. The SSCP is a precursor to the CISSP, covering many of the same issue categories, even though it is not an official prerequisite.
4. Get yourself certified: The most excellent way to demonstrate your expertise is to gain a professional badge. So, to become a CISSP expert, you need to clear the CISSP certification exam. The Certified Information Systems Security Professional accredited as CISSP is a worldwide recognized certification for IT security professionals. Obtaining this credential verifies an individual’s knowledge and abilities in the field of information security. It enhances one’s credibility and makes it easier for the candidate to land a better job with a higher pay-grade. A CISSP certification is also a requirement for many high-level security roles. Below are a few details about the CISSP certification exam.
Domains of CISSP
The eight domains covered by the CISSP CBK are: Security and Risk Management; Asset Security; Security Architecture and Engineering; Communication and Network Security; Identity and Access Management (IAM); Security Assessment and Testing; Security Operations; and Software Development Security.
CISSP Exam Details
|Exam Name|CISSP CAT|CISSP Linear|
|Exam Duration|4 hours|6 hours|
|Number of items|175|250|
|Exam Format|Multiple-choice and advanced innovative items|Multiple-choice and advanced innovative items|
|Passing Score|700 out of 1000 points|700 out of 1000 points|
|Language|English|French, German, Brazilian Portuguese, Spanish-Modern, Japanese, Simplified Chinese, Korean|
|Testing Center|(ISC)2 Authorized PPC and PVTC Select Pearson VUE Testing Centers|(ISC)2 Authorized PPC and PVTC Select Pearson VUE Testing Centers|
5. Get help from a professional: Enrolling in a training course is one of the most effective ways to prepare for the CISSP certification exam. Formal CISSP training gives you a well-organized overview of the latest technologies, threats, practices, regulations, and standards, and you will have the help of a professional on your CISSP expert journey.
6. Earn a CISSP endorsement: To become a CISSP expert, you must subscribe to the (ISC)2 Code of Ethics and complete an endorsement form after passing the CISSP certification exam. Another (ISC)2 certified expert who verifies your professional work experience must sign the endorsement form. Because passing the exam does not automatically award you certification status, you must submit the completed form within nine months of completing the exam to become completely certified.
CISSP with InfosecTrain
CISSP experts are the most well-known professionals in the field of information security. If you are sure that being a CISSP expert, one of the hottest IT job profiles, is the career you wish to pursue, you can check out and enroll in the CISSP certification training course at InfosecTrain. CISSP serves as a benchmark against which security executives are judged. Our CISSP Certification training course is intended for security professionals who want to gain a comprehensive understanding of the current cybersecurity and information system security services.
|
<urn:uuid:54100632-766c-4f8e-b9f5-e7db18e9ea07>
|
CC-MAIN-2022-40
|
https://www.infosectrain.com/blog/how-to-become-a-cissp-expert/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00648.warc.gz
|
en
| 0.912838 | 1,239 | 2.703125 | 3 |
Digital labor is a term for the automation, by computer applications, of tasks that were formerly performed by humans. Digital labor can be used for data entry, for warehouse operations performed by a robot, or in call centers to solve the problems that humans are having with a particular product or service.
One of the big aspects of the rise of AI is understanding the shift to digital labor. Bots, or fully featured digital assistants, can be viewed as a digital labor element. Today, nearly all contact centers are staffed by humans, but bot use is rising as bots get better at handling specific, repeatable use cases.
Prediction: By YE 2021, digital labor will become a key feature of intelligent contact center offerings. This will force enterprises to plan for the ratio of human labor to digital labor.
|
<urn:uuid:e23182b3-2abe-40f4-bef6-afc0bafe2911>
|
CC-MAIN-2022-40
|
https://aragonresearch.com/glossary-digital-labor/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00648.warc.gz
|
en
| 0.958382 | 176 | 2.921875 | 3 |
There are many different types of objects that we can work with in Python. Dates are one of those objects that are infamous in nearly every language. Date types are notoriously hard to wrangle but Python has quite a few ways to manipulate dates and time.
A date can be represented a number of different ways. If today were February 4th, 2018, we could represent the same date in any number of recognizable formats, for example:
- February 4th, 2018
- Feb 04, 2018
- 2/4/18
- 2018-02-04
That's a lot of ways to represent the same date! Let's see how we bend these dates to our will in Python to make the language do all of this hard work for us.
Python Module for Handling Dates
To get Python to manipulate how a date is formatted, we need to import the native datetime module. This module contains all of the methods we need to take care of a majority of the formatting needs we may have. We can import it with a simple import statement. We're using the from style here so that we can reference the datetime class directly, without the module prefix (datetime.datetime).
>>> from datetime import datetime
Once we do that, we now have access to a number of methods. First, let's assume we're working with some date we made up. For this example, I'm going to use 2/4/18 and to start out with I'm going to represent it as a simple string.
>>> strDate = '2/4/18'
We can't do much with a string, so we need to cast this string to a datetime object so Python can understand that the string is actually a date. One way to do that is to use the strptime method. This method on the datetime object allows us to pass in a date and time along with a number of different format operators that control how the string is interpreted.
Converting Strings into Dates
Taking our example date, perhaps I want to change it from 2/4/18 to Feb 04, 2018. To do that, I'd pass the original string as the first argument to the strptime() method. This method converts a string into a date object that Python can understand. The strptime() method allows you to pass in a simple string like 2/4/18 and a simple formatting string to define where the day, month and year are.
>>> objDate = datetime.strptime(strDate, '%m/%d/%y')
>>> objDate
datetime.datetime(2018, 2, 4, 0, 0)
Converting Dates into Strings
Now that Python understands this string is an actual date, we can either leave it as-is or convert it back to a string in a different format. In this case, I'd like to convert it back to Feb 04, 2018. To do that, I can use the strftime() method that does the opposite of what we just did. This method converts a datetime object back to a string.
>>> datetime.strftime(objDate,'%b %d, %Y')
'Feb 04, 2018'
We're now back at a simple string again, but this time it's been converted to a different format. How about just getting the year from the date?
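The %Y format operator does the trick:

>>> datetime.strftime(objDate, '%Y')
'2018'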
Maybe we just want the day but without the leading zero? That one requires a little help from the string method lstrip().
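Here, %d returns the zero-padded day ('04'), and lstrip() trims the leading zero:

>>> datetime.strftime(objDate, '%d').lstrip('0')
'4'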
Using a combination of the strptime() and strftime() methods, we can change up how a date is represented in a nearly infinite number of ways. The key is understanding each of the formatting operators and what they represent.
|
<urn:uuid:87e33f47-f0a6-44d9-aae9-cdc926567ae6>
|
CC-MAIN-2022-40
|
https://www.ipswitch.com/blog/date-formatting-in-python
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00648.warc.gz
|
en
| 0.893242 | 812 | 4.15625 | 4 |
Before we dive into the Deep Web and Dark Web, it’s important to understand the Surface Web and how internet search engines operate. Search engines utilize spiders that crawl and read content of websites to determine what information to show for specific search requests. A spider is essentially a special software robot that searches a website page returning to the search engine with information that is contained in it. However, if a site or page is not indexed (not allowed to be crawled), these spiders will not have access to them and therefore those sites/pages won’t show up in surface web search results.
Why Access the Deep Web and/or Dark Web?
Both the Deep Web and Dark Web are potential sources of a wealth of information when used mindfully and with knowledge of each. However, each serves a different purpose.
The Deep Web contains information varying from academic journals to databases to blog articles that aren’t published yet. The Deep Web can be accessed if you know the URL and have the authority to access it or know where and how to search. Some reasons search engines might not be able to access these sites include:
- Password access
- Robots blocked – a robots.txt file placed in the main directory of a website can tell spiders not to crawl the site (see the sample after this list)
- Hidden pages – no hyperlinks to take you to the page
- Form controlled entry – the site requires human-based action to turn up results (e.g., dropdown menus)
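As referenced above, a minimal robots.txt that tells all compliant spiders to stay out of an entire site looks like this:

User-agent: *
Disallow: /

Note that robots.txt is purely advisory; it keeps a page out of search results but does not secure it.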
The Dark Web is another story. It can only be accessed through Tor (The Onion Router) or I2P (the Invisible Internet Project), which use masked IP addresses to keep users and site owners anonymous. Tor is downloadable software that works by building encrypted connections through servers around the world, wrapping traffic in multiple layers of encryption; this "onion effect" is the source of its name. Only at the very end does the traffic emerge unencrypted.
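The layering can be sketched in a few lines of Python using the third-party cryptography package; each key below stands in for one relay in the circuit. This is a simplified illustration of the onion concept, not Tor's actual protocol.

from cryptography.fernet import Fernet

keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay
message = b"request for example.com"
for key in keys:                      # the sender wraps one layer per relay
    message = Fernet(key).encrypt(message)
for key in reversed(keys):            # each relay peels off exactly one layer
    message = Fernet(key).decrypt(message)
print(message)  # b'request for example.com'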
While many associate the Dark Web with the more nefarious deeds, such as human trafficking and drug sales, it can also be used for some very legitimate purposes. Dissidents who fear prosecution from their government or a particular group can use the Dark Web to anonymously search and post without fear of repercussion. Journalists also find it a safe haven when their sources want to remain private.
The Dark Web – Legality and Anonymity
Many wonder if merely entering the Dark Web could be considered a criminal offense. The answer is a resounding no: it is legal to surf the Dark Web. However, it's important to use caution when visiting sites or clicking links. The Dark Web is rife with sites offering hit men, firearms and forged papers. While searching online is not illegal in and of itself, the actions you take while on these sites could be illegal depending on the content you are viewing. If you are looking up child adoption, a link could take you to a site involving child pornography, a situation where the act of viewing is itself an illegal offense.
When it comes to the Dark Web, it is unwise to assume you are completely anonymous without taking additional precautions to prevent being traced. Leaked IP addresses and man-in-the-middle attacks (where a third party intercepts, and sometimes alters, communications between two parties who think they're communicating directly with each other) can also put users at risk of exposure. Use caution and be aware of the risks associated with using Tor:
- Exposing your computer to malware: operators of a node can inject malware into traffic passing through it, so users who download files through Tor risk exposing their devices to malware infections.
- Information theft: Traffic can be sniffed at the exit node, or the point where information leaves the encrypted network and becomes readable again. People operating the nodes can monitor the traffic and capture sensitive information.
- Attention of Law Enforcement: Using Tor may draw the attention of the NSA, FBI or other law enforcement agencies that specifically target Tor users.
The beneficial information on the Deep Web and Dark Web should not be overlooked. The Dark Web, however, should be approached with a level of caution due to potentially serious security and legal implications.
|
<urn:uuid:838553e4-48ba-4f6b-b092-754870a407be>
|
CC-MAIN-2022-40
|
https://www.cybintsolutions.com/what-is-the-dark-web/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00048.warc.gz
|
en
| 0.9284 | 836 | 2.96875 | 3 |
The intent of spam messages is to get a user to reply to an email, visit a web site, or call a phone number. Intent analysis involves researching email addresses, web links (URLs), and phone numbers embedded in email messages to determine whether they are associated with legitimate entities. Phishing emails are a common example.
Frequently, Intent Analysis is the defense layer that catches phishing attacks. The Barracuda Email Security Service applies the following forms of Intent Analysis to inbound mail, including real-time and multi-level intent analysis.
- Intent Analysis – Markers of intent, such as URLs, are extracted and compared against a database maintained by Barracuda Central.
- Real-Time Intent Analysis – For new domain names that may come into use, Real-Time Intent Analysis involves performing DNS lookups against known URL block lists.
- Multilevel Intent Analysis – Spammers increasingly hide or obfuscate their identity from mail-scanning techniques such as Intent Analysis by using free websites to redirect to known spammer websites. Multilevel Intent Analysis involves inspecting the results of web queries to URLs of well-known free websites for redirections to known spammer sites.
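To make the idea concrete, here is a deliberately simplified sketch of URL-based intent analysis. The block list is a hypothetical in-memory set standing in for a continuously updated reputation database such as Barracuda Central, which this sketch does not attempt to model:

```python
import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"spammer.example", "phish.example"}  # illustrative only

URL_RE = re.compile(r"https?://[^\s<>\"']+", re.IGNORECASE)

def extract_urls(body):
    """Pull candidate URLs (markers of intent) out of a message body."""
    return URL_RE.findall(body)

def is_suspicious(body):
    """Flag the message if any embedded URL points to a blocked domain."""
    for url in extract_urls(body):
        domain = (urlparse(url).hostname or "").lower()
        if domain in BLOCKED_DOMAINS:
            return True
    return False

print(is_suspicious("Your account is locked! Visit http://phish.example/fix"))  # True
```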
Intent Analysis can be enabled or disabled on the Inbound Settings > Anti-Phishing page. Domains found in the body of email messages can also be blocked by, or exempted from, Intent Analysis on that page. See also Anti-Fraud and Anti-Phishing Protection.
What is Encryption?
Encryption is a security control that alters information from a readable to random format to prevent unauthorized access.
Encryption mechanisms convert a human-readable plain text to incomprehensible ciphertext.
- Encryption is a process of scrambling data to prevent unauthorized parties from accessing or modifying information
- Encryption uses a cryptographic key that a sender and receiver use to encode and decode information
- Symmetric and Asymmetric encryption are the two main types of encryption
- Some of the benefits of encryption include improving privacy, enhancing security, protecting data integrity, and supporting compliance
The Encryption Process
Encryption uses a cryptographic key, a mathematical value that the sender and recipient share to encode and decode information.
The message sender (the data owner) must decide which cipher, or encryption algorithm, will best encode the message. The cipher generates a variable that the sender uses as a key to make the encoded message unique. The most widely used cipher is the Advanced Encryption Standard (AES).
A random number generator or a computer algorithm that works as a random number generator creates encryption keys.
Reliable encryption uses a complex key, making it difficult for third parties to crack and access readable data. When attackers intercept encrypted data, they have to guess both the cipher the sender used to encrypt the message and the encryption keys. The process is complicated and time consuming, which makes encryption a valuable security tool.
Symmetric and asymmetric encryption are the two types of encryption; a brief sketch of both appears after the list below.
- Symmetric Encryption – Symmetric encryption uses one key for both encryption and decryption. The symmetric key is sometimes referred to as a shared secret because the sender must share the private key with authorized entities. The most widely used symmetric-key cipher is AES
- Asymmetric Encryption – This form of encryption is also known as public-key encryption. Asymmetric encryption uses a key pair: the encryption (public) key is shared openly, while the decryption (private) key is kept secret. Asymmetric encryption is the foundational technology in Transport Layer Security (TLS). The Rivest-Shamir-Adleman (RSA) algorithm is currently the most widely used public-key cipher.
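Here is a minimal sketch of both models using the third-party Python cryptography package (pip install cryptography); Fernet is an AES-based symmetric recipe, and RSA with OAEP padding stands in for the asymmetric model:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"wire transfer details"

# Symmetric: one shared key both encrypts and decrypts.
shared_key = Fernet.generate_key()          # the "shared secret"
f = Fernet(shared_key)
assert f.decrypt(f.encrypt(message)) == message

# Asymmetric: encrypt with the public key, decrypt with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
```

Notice the operational difference: the Fernet key must somehow be shared secretly in advance, while the RSA public key can be published freely.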
Importance of Encryption
You can enhance information security by encrypting data at rest or in transit. Encryption offers the following benefits:
- Encryption offers privacy – converting plaintext to ciphertext prevents unauthorized parties from reading data. The security measure prevents attackers, internet service providers, and other agencies from intercepting and retrieving sensitive information
- Encryption enhances security – you can encrypt data to prevent breaches when sharing information over the Internet. Encryption ensures your data remains secure in case you lose your device.
- Encryption protects data integrity – encryption protects the integrity of data transmitted over public networks. A recipient receives untampered information
- Encryption provides authentication – you can use public-key encryption to verify that a website is genuine, because the site owner's private key corresponds to the public key listed in the TLS certificate
- Encryption supports compliance – industry and government regulations require businesses to encrypt sensitive information. Encryption helps meet requirements like HIPAA, PCI DSS, and GDPR.
Hackers attack encryption with brute force, systematically trying candidate keys until they find the right encryption and decryption key. Encryption strength grows with key size: longer encryption keys require more time and resources to crack, as the sketch below illustrates.
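A back-of-the-envelope calculation shows why: the key space doubles with every added bit. The attacker speed below is an arbitrary assumption chosen for scale, not a benchmark:

```python
GUESSES_PER_SECOND = 1e12          # hypothetical attacker: one trillion keys/s
SECONDS_PER_YEAR = 3.15e7

for bits in (56, 128, 256):
    keyspace = 2 ** bits
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.2e} years to exhaust the key space")

# 56-bit (DES-era) keys fall in under a day at this rate; 128-bit keys
# take roughly 1e19 years, far longer than the age of the universe.
```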
Hackers can also break encryption using side-channel attacks and cryptanalysis. These attacks target the implementation of the cipher to detect and exploit system design errors.
Downside – Hackers Use Encryption to Commit Cybercrime
Cybercriminals also use encryption to target victims. For instance, ransomware encrypts systems and devices until a target pays a ransom. Ransomware attacks feature an encryption and decryption key that attackers use to lock or open files.
What is a Virtual Private Network (VPN)?
A virtual private network (VPN) is a software or hardware device that creates a private network, giving you online privacy and anonymity in a public network.
A VPN masks your internet protocol (IP) address to make it difficult to trace your online activities.
Besides, a VPN establishes a secure and encrypted connection to secure your information and systems from frequent and sophisticated cyber threats.
- A VPN gives you privacy and anonymity by enabling a private network in the public internet
- The appliance masks your IP address, making your online activities difficult to trace
- A VPN establishes an encrypted connection to prevent cybercriminals from stealing confidential information
- You can use a VPN to bypass geographic restrictions on websites
- A VPN also protects you from being logged while torrenting
- Consider factors like speed, encryption features, server locations, user support, and logging policies while selecting a VPN
How Does a VPN Work?
A VPN creates a data tunnel between your local network or computer and a node in another location, which could be thousands of miles away. This strategy makes your network traffic appear as if it originates from a different location (the node's IP address).
A VPN uses encryption mechanisms to encrypt the data you send over Wi-Fi or any other network connection. Encrypting the information, including your online transactions, makes it unreadable to an intruder who connects to the same Wi-Fi network. A hacker eavesdropping on your network activities cannot decrypt your information and requests without the key.
Apart from the security benefits, a VPN offers privacy. Even your internet service provider cannot know your browsing history, since a VPN hides your online search history. The security system masks your actual IP address and associates all your online activities with the VPN server’s IP address.
Many VPN vendors offer several servers globally. You can select a different location where you want your activity to appear to originate from. Even search engines that track your search history will associate the information with the VPN IP address, not yours.
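The observable effect is easy to verify. The sketch below asks a public "what is my IP" echo service (the URL is one common choice, assumed here for illustration) which source address it sees before and after the VPN is connected:

```python
import requests

def apparent_ip():
    """Ask a public echo service which source IP it sees for us."""
    return requests.get("https://api.ipify.org", timeout=10).text

print("Before connecting the VPN:", apparent_ip())  # your ISP-assigned address
input("Connect your VPN client, then press Enter...")
print("After connecting the VPN :", apparent_ip())  # the VPN server's address
```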
A VPN can hide the following information to preserve your online privacy:
- Browsing History – without a VPN, your internet service provider and your web browser can track everything you do online. Besides, most of the sites you visit keep a history of your activities for a “personalized experience.” Third parties collect and sell this information for marketing purposes. Using a VPN prevents ISPs and websites from tracking your search history and tying it to your IP address
- IP Address and Location – ISPs, website owners, and hackers can discover your online activities and location by capturing your IP address. Fortunately, a VPN masks your address and replaces it with an IP address in a different area, allowing you to maintain privacy and browse anonymously
- Service Availability – some online services are not accessible in various regions. For instance, it isn’t easy to access local streaming services outside your country due to contractual terms and regulations in other areas. The service providers use your IP location to block your requests. In such an instance, you can connect to a VPN server with an IP address in your home country to gain access to local services while in other countries
- Your Devices and Personal Information – a VPN establishes an encrypted, secure connection to prevent hackers from gaining unauthorized access to information and connected computers
- Your Web Activity – are you a candidate for government surveillance? A VPN prevents third parties, including government agencies, from tracking your online activities. A VPN vendor that does not log your browsing history protects your internet freedom. Even your internet service provider cannot supply accurate records of your web activity to government agencies
Do I Need a VPN?
Connecting to the internet and transacting on public Wi-Fi networks exposes your confidential information, online transactions, and applications to cyberthreats.
Today, people are increasingly working remotely: accessing company networks at coffee shops, checking bank accounts at the mall, and working over home Wi-Fi connections that have weak security controls. Sharing sensitive information in this manner exposes it to risks such as a stranger eavesdropping from the same network.
Fortunately, a VPN provides encryption and anonymity to protect your online activities. You can install a VPN client to send email, shop online, access bank accounts, and work with corporate systems from any network while keeping your browsing actions private and secure.
Can a VPN Protect Against Identity Theft?
A VPN also protects you against identity theft.
Identity theft occurs when hackers steal your personal information and use it to commit further crimes in your name. Criminals in possession of your data can easily open new accounts, file tax returns, or rent or buy property in your name.
A VPN encrypts your online transactions and makes it challenging for hackers to access personal details.
Factors to Consider When Selecting a VPN
Selecting a VPN that conforms with your specific needs and business values is crucial. There are currently many VPN vendors, and it is challenging to choose the best fit. The list below presents vital factors to consider before selecting a VPN for business or personal use.
You can consider the following six factors when selecting a VPN:
- Encryption – ensure that the VPN solution offers dependable encryption features that protect information and systems from hackers
- Speed – a VPN encryption process can take a toll on the connection speed. You should purchase a VPN service that values speed as much as they value privacy and security
- Premium Services – since one size does not fit all, you can get a VPN from vendors that offer a more accommodating service plan for your specific business needs
- Logging Policies – if you are serious about privacy, you should avoid VPN services in areas where governments legally bind VPN providers to collect and make user logs accessible to various agencies
- Servers – the number of servers in different parts of the world is an essential factor to consider. More locations give you more IP addresses through which to hide your online activities. The VPN server's proximity to your site also plays a vital role in connection efficiency and failure recovery
- Support – it is imperative to purchase a VPN product from a provider that offers after-sale services and tech support. The best VPN vendor provides expert representatives to respond swiftly to any VPN errors.
There are always more than five thousand planes in the sky at any given minute, and most of these aircraft rely on software for their operations. A failure across the airline industry, or even within a single airline, could lead to a massive grounding of planes or worse!
Air traffic management has embraced digital technologies in airports and across the supply chain in order to improve efficiency.
Cybercriminals may take advantage of this to create an access point into these systems in order to steal data or cause damage. There is a growing need for all aviation stakeholders to come together and boost security efforts to ensure that their customers travel safely.
The airline industry has been taking the cybersecurity risks very seriously and is actively working to mitigate the possible risks.
How to keep safe from cyber threats in an airplane
Several actions are needed to ensure safety, both across the aviation industry and at the individual organization level.
Aviation companies should conduct independent cybersecurity audits. An effective audit will identify and document all of the necessary cybersecurity controls, and its findings will identify the issues that need to be addressed. These findings should then be prioritized, and steps should be taken to mitigate the risks related to them.
There should be a clear framework set in place by the industry with domain-specific steps that can be used to mitigate and manage cyber threats.
A good cybersecurity framework should be based on five functions: identify, protect, detect, respond, and recover.
A proper cyber risk management framework should also cover four basic elements: adequate infrastructure for monitoring and detection, well-defined processes and procedures, clearly identified roles and responsibilities, and built-in oversight with proper documentation.
There is great power and strength in teamwork. Aviation companies must collaborate and come together to keep the industry safe. The industry should also work with other industries to share best practices, strengthen IT systems, and create a security-minded culture.
Supply partners and all involved stakeholders must work together as a team to develop trust so that they are able to identify and mitigate cyber risks.
What is a SOC (Security Operations Center)?
A security operations center (SOC) is responsible for protecting an organization against cyber threats. SOC analysts perform round-the-clock monitoring of an organization’s network and investigate any potential security incidents. If a cyberattack is detected, the SOC analysts are responsible for taking any steps necessary to remediate it.
SIEM: An Invaluable Tool for a SOC Team
SOC analysts need a variety of tools to perform their role effectively. They need to have deep visibility into all of the systems under their protection and to be able to detect, prevent, and remediate a wide range of potential threats.
The complexity of the networks and security architectures that SOC analysts work with can be overwhelming. SOCs commonly receive tens or hundreds of thousands of security alerts in a single day. This is far more than most security teams are capable of effectively managing.
A security information and event management (SIEM) solution is intended to take some of the burden off of SOC analysts. SIEM solutions aggregate data from multiple sources and use data analytics to identify the most probable threats. This enables SOC analysts to focus their efforts on the events most likely to constitute a real attack against their systems.
Advantages of SIEM Systems
A SIEM can be an invaluable tool for a SOC team. Some of the primary benefits of SIEM solutions include:
- Log Aggregation: A SIEM solution will integrate with a wide variety of different endpoints and security solutions. It can automatically collect the log files and alert data that they generate, translate the data into a single format, and make the resulting datasets available to SOC analysts for incident detection and response and threat hunting activities.
- Increased Context: In isolation, most indications of a cyberattack can be easily dismissed as noise or benign abnormalities. Only by correlating multiple data points does a threat become detectable and identifiable. SIEMs’ data collection and analytics help to provide the context required to identify more subtle and sophisticated attacks against an organization’s network.
- Reduced Alert Volume: Many organizations use an array of security solutions, which creates a deluge of log and alert data. SIEM solutions can help to organize and correlate this data and identify the alerts most likely to be related to true threats. This enables SOC analysts to focus their efforts on a smaller, more curated set of alerts, which reduces the time wasted on false positive detections.
- Automated Threat Detection: Many SIEM solutions have built-in rules to help with the detection of suspicious activity. For example, a large number of failed login attempts to a user account may indicate a password guessing attack. These integrated detection rules can expedite threat detection and enable the use of automated responses to certain types of attacks (a toy version of such a rule appears after this list).
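As an illustration of rules-based detection, here is a toy version of the failed-login rule described above. The event field names are assumptions about the log schema, not a real SIEM API:

```python
from collections import defaultdict

THRESHOLD = 5          # failed attempts allowed...
WINDOW = 60.0          # ...within this many seconds

def detect_password_guessing(events):
    """events: iterable of dicts with 'user', 'timestamp' (epoch seconds),
    and 'outcome' keys, assumed sorted by timestamp."""
    failures = defaultdict(list)
    alerts = []
    for e in events:
        if e["outcome"] != "failure":
            continue
        # Keep only failures for this user inside the sliding window.
        recent = [t for t in failures[e["user"]] if e["timestamp"] - t < WINDOW]
        recent.append(e["timestamp"])
        failures[e["user"]] = recent
        if len(recent) > THRESHOLD:
            alerts.append((e["user"], e["timestamp"]))
    return alerts

sample = [{"user": "alice", "timestamp": float(t), "outcome": "failure"}
          for t in range(10)]
print(detect_password_guessing(sample))  # alerts once failures exceed 5 in 60 s
```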
Despite their many benefits, SIEMs are not perfect solutions to the challenges faced by SOC analysts. Some of the main limitations of SIEMs include:
- Configuration and Integration: A SIEM solution is designed to connect to a variety of endpoints and security solutions within an organization’s network. Before the SIEM can provide value to the organization, these connections need to be set up. This means that SOC analysts will likely spend a significant amount of time configuring and integrating a SIEM solution with their existing security architecture, which takes away from detecting and responding to active threats to the network.
- Rules-Based Detection: SIEM solutions are capable of automatically detecting some types of attacks based on the data that they ingest. However, these threat detection capabilities are largely rule-based. This means that, while a SIEM may be very good at identifying certain types of threats, it is likely to overlook attacks that are novel or do not match an established pattern.
- No Alert Validation: SIEM solutions collect data from an array of solutions across an organization’s network and use this data for threat detection. Based on the collected data and data analysis, SIEMs can generate alerts regarding potential threats. However, no validation of these alerts is performed, meaning that the SIEM’s alerts – while potentially higher-quality and more context-based than the data and alerts that it ingests – can still contain false positive detections.
Horizon: Working Together with SIEM Solutions
SIEMs are valuable tools, but they have their limitations. These limitations mean that SOC analysts lack the certainty that they require to do their jobs.
Check Point Horizon was developed to complement SIEM solutions and address some of these limitations. With 99.9% precision, Horizon provides SOC teams with visibility into the true threats to their network and systems without wasting valuable time and resources chasing false positives.
To see how Check Point Horizon achieves this unrivaled accuracy, check out this demo. Then, try out Horizon for yourself with a free trial.
FortiToken is a disconnected one-time password (OTP) generator. It is a small physical device with a button that, when pressed, displays a six-digit authentication code. This code is entered along with a user's username and password as two-factor authentication. The displayed code changes every 60 seconds (a generic sketch of this mechanism appears below), and when not in use the LCD screen is blank to extend the battery life.
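Fortinet's exact algorithm is an implementation detail of the product; the sketch below is a generic, RFC 6238-style time-based OTP using only the Python standard library, shown merely to illustrate how a rolling six-digit code can be derived from a shared secret and the current time:

```python
import hmac, hashlib, struct, time

def totp(secret, step=60, digits=6):
    counter = int(time.time()) // step               # same on token and server
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"per-token shared secret"                    # hypothetical provisioning seed
print(totp(seed))   # e.g. "492039"; a new code every 60 seconds
```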
You can attach a lanyard to the FortiToken and wear it around your neck, or store it with other electronic devices. Do not put the FortiToken on a key ring as the metal ring and other metal objects can damage it.
Any time information about the FortiToken is transmitted, it is encrypted. When the FortiGate receives the code that matches a particular FortiToken's serial number, it is delivered and stored encrypted.
The following illustrates the FortiToken two-factor authentication process:
- The user attempts to access a network resource.
- FortiOS matches the traffic to an authentication security policy and prompts the user for their username and password.
- The user enters their username and password.
- FortiOS verifies their credentials. If valid, it prompts the user for the FortiToken code.
- The user views the current code on their FortiToken. They enter the code at the prompt.
- FortiOS verifies the FortiToken code. If valid, it allows the user access to network resources.
If the FortiToken's clock has drifted, the following must take place for the FortiToken to resynchronize with FortiOS:
- FortiOS prompts the user to enter a second code to confirm.
- The user gets the next code from the FortiToken. They enter the code at the prompt.
- FortiOS uses both codes to update its clock to match the FortiToken.
If you attempt to add invalid FortiToken serial numbers, there is no error message. FortiOS does not add invalid serial numbers to the list.
Data Quality, Accuracy and Reliability
Big data is not immune to inaccuracies. For example, according to a recent report from Experian Data Quality, 75% of businesses believe their customer contact information is incorrect.
When organizations use big data that houses bad data as part of their strategy to strengthen customer relationships, it can lead to big problems. From small embarrassments to outright customer defection, overconfidence in the accuracy of data can lead to:
- Overall poor business decisions
- Predicting outcomes that never come to pass
- Not capitalizing on, or a misunderstanding of, customer purchase trends and habits
- Moving a customer relationship along at an improper pace
- Conveying a wrong or misguided message to a customer
- Decreased customer loyalty and trust that, in turn, leads to customer retention issues and revenue loss
- Wasted marketing efforts
- Inaccurately assessing various risks
Not only can big data hold wrong information, but it can also contain contradictions and duplicate itself. A database full of inaccurate data does not lend itself to providing the precise insight needed to support innovation and growth initiatives. But because of the massive volume of data involved, from so many sources, it would be a bit surprising if big data were 100% accurate 100% of the time.
Reasons for bad data
How does big data wind up in such bad shape? There are countless possible reasons, and multiple causes often combine to produce a specific error. While human error, criminal behavior, and collection errors stand as general reasons for data errors, here are some more targeted examples:
- Incorrect conclusions about customer interests
- Usage of biased sample populations
- Lack of proper big data governance processes that would identify data inconsistencies
- Evaluative or leading survey questions that skew true opinion, behavior or belief
- Usage of outdated or incomplete information
- Multiple data sources improperly linking data sets
- Cybercrime activity that alters or corrupts data
So while it’s no secret that big data can be inaccurate, it doesn’t mean that you shouldn’t do whatever you can to control the accuracy and reliability of your data. Eliminating or minimizing the various ways data inaccuracy festers within your network is key to combating this issue.
While many factors can contribute to the quality, accuracy and reliability of your data, here are a few common problem areas to consider:
A data silo is a warehouse of information under the control of a single department, closed off from outside visibility and isolated from the rest of an organization. It’s not unlike a farm silo. We can all see it from the road and we know it’s there, but those without a key have no idea what’s inside. Instead of grain or corn, however, a data silo houses business-critical information.
The issue with data silos is their isolation. They store data in disparate units that can't share information with each other. There is simply no integration on the back end, and therefore the data you've collected can't provide the meaningful, comprehensive insights that you should gain from it.
Essentially, data silos are catalysts for inefficiency and redundancy that cause resources to be misused and productivity to be reduced. They’re a breeding ground for inaccurate data that prevent you from seeing the big picture.
What impact do data silos have on your organization?
Basically, there are two results from data silos; the same data is stored by multiple teams, or teams store complementary, but separate, data. Neither situation yields positive results.
There is obviously cost associated with the storage of data, and paying extra to store the same data in multiple areas is not only inefficient, but it also soaks up valuable resources that could be better utilized in other areas of your business.
There’s also risk involved. There is the possibility that the “same” data collected in two different data silos can vary slightly. How would you decide which dataset is correct? Or more appropriately, how would you decide which dataset is the most accurate or up-to-date? If the wrong one is chosen, you risk relying on insight driven by outdated information.
Data Silos – An overwhelming challenge
In a 2016 survey, F5 Networks, Inc. asked organizations how many applications were in their portfolio. 54% of those respondents said they have as many as 200 on their networks. 23% said as many as 500 and 15% said as many as 1,000. Additionally, 9% said between 1,001 and 3,000. Forbes reported through a separate study by Netskope that the typical enterprise has more than 500 applications in place.
With those staggeringly high numbers, the thought of investigating a data problem and the process of checking each data silo to make sense of relevant information, is overwhelming at best.
In this very real scenario, issue resolution is dreadfully slow not only because each silo must be sifted through, but also because you must determine which fragments of information are relevant to the problem at hand.
How do you solve the data silo problem?
Adding new big data initiatives typically heightens isolation issues, thereby increasing data silos and the problems that come with them.
But adding agnostic big data architecture can enable access to data across your organizational silos and provide comprehensive visibility of that segmented information. This essentially breaks down the data silos and eliminates their negative impact, while providing you with the ability to effectively leverage all your data investments across any deployment platform or technology stack.
As you know, data isn’t always usable as it’s received. Preparing it so it can be used for whatever purpose, otherwise known as data cleaning or data cleansing, is normally a slow and difficult process.
There are some estimates that state poor-quality data costs the U.S. economy up to $3.1 trillion per year. That’s certainly a high number, but not necessarily a surprising one given that weak data quality can lead to incorrect results from big data analytics and can also lead to unwise decision making. Additionally, it can potentially open businesses up to issues with compliance because the regulatory requirements of some industries require data to be as accurate and current as possible.
Appropriate design and management of processes can help lessen the potential for poor data quality at the front end, but they can’t wipe it out. The solution is to make bad data usable through the removal or correction of errors and inconsistencies in a dataset. More specifically, the solution is data cleansing.
The Data Cleansing Challenge
Data cleansing is a tedious, time-consuming task that requires multiple complex steps. According to a survey by CrowdFlower, data scientists spend nearly 80% of their time preparing and managing data for analysis.
A detailed analysis of the data must be performed to uncover existing data errors or inconsistencies that ultimately need to be resolved. While this can be done manually, it typically requires the help of analytics tools and programs to streamline the process and make things more efficient.
Depending on the number and type of data sources, part of the data cleansing process may also include:
- Steps to format the data to gain a consistent structure
- Transforming bad data into better quality, usable data
- Evaluation and testing of formatting and transformation definitions and workflows
- Repetition of analysis, design and verification steps
To minimize the potential for working the same data twice, once the data is cleaned, it should be placed back into the original sources to replace its inaccurate, error-rich counterpart.
To be effective, the process of data cleansing must be repeated each time your data is accessed or anytime values change, making it far from a one-off task.
Best Practices to Clean and Preserve Your Data
While we’ve established that data cleansing is a labor-intensive process, there are some best practices you can use up front to help minimize the workload. Here are a few to consider:
Keep Your Data Updated
Set standards and policies for updating data and utilize technology to simplify this task, such as the use of parsing tools to scan incoming emails and automatically update contact information.
Validate Any Newly Captured Data
Set organizational standards and policies to verify all new data that is captured before it enters your database.
Reliable Data Entry
Implement policies to ensure all necessary data points are captured at the applicable time and ensure all employees are aware of these standards.
Duplicate Data Removal
Utilize tools to help remove any duplicate data generated by data silos or other data sources (a brief sketch of these practices appears below).
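Pulling these practices together, here is a compact sketch using the pandas library (pip install pandas); the column names and sample records are invented for illustration:

```python
import pandas as pd

raw = pd.DataFrame({
    "email": ["a@x.com", "A@X.COM ", None, "b@y.com"],
    "name":  ["Ann",     "Ann",     "Bob", "Bea"],
})

clean = raw.copy()
clean["email"] = clean["email"].str.strip().str.lower()   # consistent formatting
clean = clean.dropna(subset=["email"])                    # validate required field
clean = clean.drop_duplicates(subset=["email"])           # remove duplicate records
print(clean)   # two rows remain: a@x.com / Ann and b@y.com / Bea
```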
Every insight potentially has value, but the challenge of finding the right one at the right time within a huge (and growing) lump of data often proves to be quite difficult.
If uncovered, a few bits of information could provide the invaluable business intelligence you need to push past your competitors. But those bits often get lost amongst the irrelevant information that surrounds them. Knowing that the information you need to establish dominance within your industry is right at your fingertips, but you’re unable to grab it, can be frustrating and maddening.
Maksim Tsvetovat, author of the book “Social Network Analysis for Startups”, points out that in order to use big data, “There has to be a discernible signal in the noise that you can detect and sometimes, there just isn’t one. You approach (big data) carefully and behave like a scientist, which means if you fail at your hypothesis, you come up with a few other hypotheses and maybe one of them turns out to be correct.”
Leaning on the expertise of a seasoned data scientist can help you discover the source of the noise within your big data ecosystem more quickly, giving you the chance to gain the actionable insight you need to make better business decisions and capitalize on growth opportunities.
A Private Branch Exchange (PBX) interconnects devices and allows them to make internal calls without using the public switched telephone network. These internal calls are free of charge. When a call is placed to a subscriber at another location via the telephone network, the telephone system assigns one of the existing connections to the public switched telephone network to the respective device. This has the advantage of not requiring a separate connection to the public switched telephone network for each phone, so the existing lines can be used quite efficiently. The maximum number of parallel calls that can be made depends on the number of exchange lines.
From outside, the extensions on the telephone system can be called either via an extension suffix to the main number or via a central switchboard. Call centers often use the queue function of a telephone system, which holds a call in a queue until an agent connected to the telephone system is available and then transfers the call.
Many telephone systems are able to record a large volume of data and statistics to precisely analyze a company’s call volume. Some telephone systems allow the system administrator to assign different rights to different extensions, for instance for outward dialling. So expensive international calls, for example, can only be placed from specific extensions.
Telephone systems have evolved from mechanical devices in the early telephone age to highly complex, digital devices. Due to Voice-over-IP (VoIP) telephony the Private Branch Exchange is evolving more and more into a software-based solution which can be hosted on a local server or at a computer center in the cloud. Cloud-based telephone systems offer maximum flexibility, can be used regardless of location and do not require the company to provide hardware to operate the system.
In this chapter, you learn about the following topics:
- Fundamental concepts in network security, including identification of common vulnerabilities and threats, and mitigation strategies
- Implementation of a security architecture using a lifecycle approach, including the phases of the process, their dependencies, and the importance of a sound security policy
The open nature of the Internet makes it vital for businesses to pay attention to the security of their networks. As companies move more of their business functions to the public network, they need to take precautions to ensure that the data cannot be compromised and that the data is not accessible to anyone who is not authorized to see it.
Unauthorized network access by an outside hacker or a disgruntled employee can cause damage or destruction to proprietary data, negatively affect company productivity, and impede the capability to compete. The Computer Security Institute reported in its 2010/2011 CSI Computer Crime and Security Survey (available at http://gocsi.com/survey) that on an average day, 41.1 percent of respondents dealt with at least one security incident (see page 11 of the survey). Unauthorized network access can also harm relationships with customers and business partners, who might question the capability of a company to protect its confidential information. The definition of “data location” is being blurred by cloud computing services and other service trends. Individuals and corporations benefit from the elastic deployment of services in the cloud, available at all times from any device, but these dramatic changes in the business services industry exacerbate the risks in protecting data and the entities using it (individuals, businesses, governments, and so on). Security policies and architectures require sound principles and a lifecycle approach, including whether the data is in the server farm, mobile on the employee’s laptop, or stored in the cloud.
To start on our network security quest, this chapter examines the need for security, looks at what you are trying to protect, and examines the different trends for attacks and protection and the principles of secure network design. These concepts are important not only for succeeding with the IINS 640-554 exam, but they are also fundamental to all security endeavors on which you will embark.
Building Blocks of Information Security
Establishing and maintaining a secure computing environment is increasingly more difficult as networks become increasingly interconnected and data flows ever more freely. In the commercial world, connectivity is no longer optional, and the possible risks of connectivity do not outweigh the benefits. Therefore, it is very important to enable networks to support security services that provide adequate protection to companies that conduct business in a relatively open environment. This section explains the breadth of assumptions and challenges to establish and maintain a secure network environment.
Basic Security Assumptions
Several new assumptions have to be made about computer networks because of their evolution over the years:
- Modern networks are very large, very interconnected, and run both ubiquitous protocols (such as IP) and proprietary protocols. Therefore, they are often open to access, and a potential attacker can with relative ease attach to, or remotely access, such networks. Widespread IP internetworking increases the probability that more attacks will be carried out over large, heavily interconnected networks, such as the Internet.
- Computer systems and applications that are attached to these networks are becoming increasingly complex. In terms of security, it becomes more difficult to analyze, secure, and properly test the security of the computer systems and applications; it is even more so when virtualization is involved. When these systems and their applications are attached to large networks, the risk to computing dramatically increases.
Basic Security Requirements
To provide adequate protection of network resources, the procedures and technologies that you deploy need to guarantee three things, sometimes referred to as the CIA triad:
- Confidentiality: Providing confidentiality of data guarantees that only authorized users can view sensitive information.
- Integrity: Providing integrity of data guarantees that only authorized users can change sensitive information and provides a way to detect whether data has been tampered with during transmission; this might also guarantee the authenticity of data.
- Availability of systems and data: System and data availability provides uninterrupted access by authorized users to important computing resources and data.
When designing network security, a designer must be aware of the following:
- The threats (possible attacks) that could compromise security
- The associated risks of the threats (that is, how relevant those threats are for a particular system)
- The cost to implement the proper security countermeasures for a threat
- A cost versus benefit analysis to determine whether it is worthwhile to implement the security countermeasures (a worked example follows this list)
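As a worked example of that cost versus benefit question, the sketch below uses the classic quantitative risk formulas: single loss expectancy (SLE) equals asset value times exposure factor, and annualized loss expectancy (ALE) equals SLE times the annualized rate of occurrence (ARO). All figures are invented for scale:

```python
asset_value = 500_000      # value of the data set at risk ($)
exposure_factor = 0.30     # fraction of the asset lost per incident
aro = 0.5                  # expected incidents per year

sle = asset_value * exposure_factor          # $150,000 per incident
ale_before = sle * aro                       # $75,000 expected loss per year

countermeasure_cost = 25_000                 # annual cost of the control
aro_after = 0.1                              # residual likelihood with control
ale_after = sle * aro_after                  # $15,000 per year

net_benefit = ale_before - ale_after - countermeasure_cost
print(f"Net annual benefit of the control: ${net_benefit:,.0f}")  # $35,000
```

Under these assumptions the countermeasure is clearly worthwhile; if the residual ALE plus the control's cost exceeded the original ALE, it would not be.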
Data, Vulnerabilities, and Countermeasures
Although viruses, worms, and hackers monopolize the headlines about information security, risk management is the most important aspect of security architecture for administrators. A less exciting and glamorous area, risk management is based on specific principles and concepts that are related to asset protection and security management.
An asset is anything of value to an organization. By knowing which assets you are trying to protect, as well as their value, location, and exposure, you can more effectively determine the time, effort, and money to spend in securing those assets.
A vulnerability is a weakness in a system or its design that could be exploited by a threat. Vulnerabilities are sometimes found in the protocols themselves, as in the case of some security weaknesses in TCP/IP. Often, the vulnerabilities are in the operating systems and applications.
Written security policies might also be a source of vulnerabilities. This is the case when written policies are too lax or are not thorough enough in providing a specific approach or line of conduct to network administrators and users.
A threat is any potential danger to assets. A threat is realized when someone or something identifies a specific vulnerability and exploits it, creating exposure. If the vulnerability exists theoretically but has not yet been exploited, the threat is considered latent. The entity that takes advantage of the vulnerability is known as the threat agent or threat vector.
A risk is the likelihood that a particular threat using a specific attack will exploit a particular vulnerability of a system that results in an undesirable consequence. Although the roof of the data center might be vulnerable to being penetrated by a falling meteor, for example, the risk is minimal because the likelihood of that threat being realized is negligible.
An exploit happens when computer code is developed to take advantage of a vulnerability. For example, suppose that a vulnerability exists in a piece of software, but nobody knows about this vulnerability. Although the vulnerability exists theoretically, there is no exploit yet developed for it. Because there is no exploit, there really is no problem yet.
A countermeasure is a safeguard that mitigates a potential risk. A countermeasure mitigates risk either by eliminating or reducing the vulnerability or by reducing the likelihood that a threat agent will be able to exploit the risk.
To optimally allocate resources and secure assets, it is essential that some form of data classification exists. By identifying which data has the most worth, administrators can put their greatest effort toward securing that data. Without classification, data custodians find it almost impossible to adequately secure the data, and IT management finds it equally difficult to optimally allocate resources.
Sometimes information classification is a regulatory requirement (required by law), in which case there might be liability issues that relate to the proper care of data. By classifying data correctly, data custodians can apply the appropriate confidentiality, integrity, and availability controls to adequately secure the data, based on regulatory, liability, and ethical requirements. When an organization takes classification seriously, it illustrates to everyone that the company is taking information security seriously.
The methods and labels applied to data differ all around the world, but some patterns do emerge. The following is a common way to classify data that many government organizations, including the military, use:
- Unclassified: Data that has little or no confidentiality, integrity, or availability requirements and therefore little effort is made to secure it.
- Restricted: Data that if leaked could have undesirable effects on the organization. This classification is common among NATO (North Atlantic Treaty Organization) countries but is not used by all nations.
- Confidential: Data that must comply with confidentiality requirements. This is the lowest level of classified data in this scheme.
- Secret: Data for which you take significant effort to keep secure because its disclosure could lead to serious damage. The number of individuals who have access to this data is usually considerably fewer than the number of people who are authorized to access confidential data.
- Top secret: Data for which you make great effort and sometimes incur considerable cost to guarantee its secrecy since its disclosure could lead to exceptionally grave damage. Usually a small number of individuals have access to top-secret data, on condition that there is a need to know.
- Sensitive But Unclassified (SBU): A popular classification by government that designates data that could prove embarrassing if revealed, but no great security breach would occur. SBU is a broad category that also includes the For Official Use Only designation.
It is important to point out that there is no actual standard for private-sector classification. Furthermore, different countries tend to have different approaches and labels. Nevertheless, it can be instructive to examine a common, private sector classification scheme:
- Public: Companies often display public data in marketing literature or on publicly accessible websites.
- Sensitive: Data in this classification is similar to the SBU classification in the government model. Some embarrassment might occur if this data is revealed, but no serious security breach is involved.
- Private: Private data is important to an organization. You make an effort to maintain the secrecy and accuracy of this data.
- Confidential: Companies make the greatest effort to secure confidential data. Trade secrets and employee personnel files are examples of what a company would commonly classify as confidential.
Regardless of the classification labeling used, what is certain is that as the security classification of a document increases, the number of staff that should have access to that document should decrease, as illustrated in Figure 1-1.
Figure 1-1. Ratio: Staff Access to Information Security Classification
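The same inverse relationship can be expressed as a simple lookup, as in the minimal sketch below; the labels follow the private-sector scheme above, and the role names are illustrative only:

```python
# As the classification level rises, the set of roles cleared to read shrinks.
CLEARED_ROLES = {
    "public":       {"everyone"},
    "sensitive":    {"employee", "manager", "executive"},
    "private":      {"manager", "executive"},
    "confidential": {"executive"},
}

def may_read(role, classification):
    cleared = CLEARED_ROLES[classification]
    return "everyone" in cleared or role in cleared

print(may_read("employee", "sensitive"))      # True
print(may_read("employee", "confidential"))   # False
```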
Many factors go into the decision of how to classify certain data. These factors include the following:
- Value: Value is the number one criterion. Not all data has the same value. The home address and medical information of an employee is considerably more sensitive (valuable) than the name of the chief executive officer (CEO) and the main telephone number of the company.
- Age: For many types of data, its importance changes with time. For example, an army general will go to great lengths to restrict access to military secrets. But after the war is over, the information is gradually less and less useful and eventually is declassified.
- Useful life: Often data is valuable for only a set window of time, and after that window has expired, there is no need to keep it classified. An example of this type of data is confidential information about the products of a company. The useful life of the trade secrets of products typically expires when the company no longer sells the product.
- Personal association: Data of this type usually involves something of a personal nature. Much of the government data regarding employees is of this nature. Steps are usually taken to protect this data until the person is deceased.
For a classification system to work, there must be different roles that are fulfilled. The most common of these roles are as follows:
- Owner: The owner is the person who is ultimately responsible for the information, usually a senior-level manager who is in charge of a business unit. The owner classifies the data and usually selects custodians of the data and directs their actions. It is important that the owner periodically review the classified data because the owner is ultimately responsible for the data.
- Custodian: The custodian is usually a member of the IT staff who has the day-to-day responsibility for data maintenance. Because the owner of the data is not required to have technical knowledge, the owner decides the security controls but the custodian marks the data to enforce these security controls. To maintain the availability of the data, the custodian regularly backs up the data and ensures that the backup media is secure. Custodians also periodically review the security settings of the data as part of their maintenance responsibilities.
- User: Users bear no responsibility for the classification of data or even the maintenance of the classified data. However, users do bear responsibility for using the data in accordance with established operational procedures so that they maintain the security of the data while it is in their possession.
It is also important to understand the weaknesses in security countermeasures and operational procedures. This understanding results in more effective security architectures. When analyzing system vulnerabilities, it helps to categorize them in classes to better understand the reasons for their emergence. You can classify the main vulnerabilities of systems and assets using broad categories:
- Policy flaws
- Design errors
- Protocol weaknesses
- Software vulnerabilities
- Hostile code
- Human factor
This list mentions just a few of the vulnerability categories. For each of these categories, multiple vulnerabilities could be listed.
There are several industry efforts that are aimed at categorizing threats for the public domain. These are some well-known, publicly available catalogs that may be used as templates for vulnerability analysis:
- Common Vulnerabilities and Exposures (CVE): A dictionary of publicly known information security vulnerabilities and exposures. It can be found at http://cve.mitre.org/. The database provides common identifiers that enable data exchange between security products, providing a baseline index point for evaluating coverage of tools and services.
- National Vulnerability Database (NVD): The U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics. The database can be found at http://nvd.nist.gov.
- Common Vulnerability Scoring System (CVSS): A standard within the computer and networking fields for assessing and classifying security vulnerabilities. This standard is focused on rating a vulnerability compared to others, thus helping the administrator to set priorities. This standard was adopted by significant players in the industry such as McAfee, Qualys, Tenable, and Cisco. More information, including the database and calculator, can be found at http://www.first.org/cvss (a condensed sketch of the base-score arithmetic follows this list).
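To show what such a rating involves, here is a condensed sketch of the published CVSS v3.1 base-score arithmetic for one common metric combination (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H); the constants come from the public specification at first.org:

```python
import math

av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85      # network, low, none, none
c = i = a = 0.56                             # confidentiality/integrity/availability: High

iss = 1 - (1 - c) * (1 - i) * (1 - a)        # impact sub-score
impact = 6.42 * iss                          # scope unchanged
exploitability = 8.22 * av * ac * pr * ui

# Base score: sum capped at 10, rounded up to one decimal place.
base = math.ceil(min(impact + exploitability, 10) * 10) / 10 if impact > 0 else 0.0
print(base)   # 9.8 -- the familiar "critical" rating for this vector
```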
After assets (data) and vulnerabilities, threats are the most important component to understand. Threat classification and analysis, as part of the risk management architecture, will be described later in this chapter.
Once threat vectors are considered, organizations rely on various controls to accomplish in-depth defense as part of their security architecture. There are several ways to classify these security controls; one of them is based on the nature of the control itself. These controls fall into one of three categories:
- Administrative: Controls that are largely policies and procedures
- Technical: Controls that involve electronics, hardware, software, and so on
- Physical: Controls that are mostly mechanical
Later in this chapter, we will discuss models and frameworks from different organizations that can be used to implement network security best practices.
Administrative controls are largely policy and procedure driven. You will find many of the administrative controls that help with an enterprise’s information security in the human resources department. Some of these controls are as follows:
- Security-awareness training
- Security policies and standards
- Change controls and configuration controls
- Security audits and tests
- Good hiring practices
- Background checks of contractors and employees
For example, if an organization has strict hiring practices that require drug testing and background checks for all employees, the organization will likely hire fewer individuals of questionable character. With fewer people of questionable character working for the company, it is likely that there will be fewer problems with internal security issues. These controls do not single-handedly secure an enterprise, but they are an important part of an information security program.
Technical controls are extremely important to a good information security program, and proper configuration and maintenance of these controls will significantly improve information security. The following are examples of technical controls:
- Intrusion prevention systems (IPS)
- Virtual private network (VPN) concentrators and clients
- TACACS+ and RADIUS servers
- One-time password (OTP) solutions
- Smart cards
- Biometric authentication devices
- Network Admission Control (NAC) systems
- Routers with ACLs
While trying to secure an environment with good technical and administrative controls, it is also necessary that you lock the doors in the data center. This is an example of a physical control. Other examples of physical controls include the following:
- Intruder detection systems
- Security guards
- Uninterruptible power supplies (UPS)
- Fire-suppression systems
- Positive air-flow systems
When security professionals examine physical security requirements, life safety (protecting human life) should be their number one concern. Good planning is needed to balance life safety concerns against security concerns. For example, permanently barring a door to prevent unauthorized physical access might prevent individuals from escaping in the event of a fire. By the way, physical security is a field that Cisco entered a few years ago. More information on those products can be found at http://www.cisco.com/go/physicalsecurity.
Controls are also categorized by the type of control they are:
- Preventive: The control prevents access.
- Deterrent: The control deters access.
- Detective: The control detects access.
All three categories of controls can be any one of the three types of controls; for example, a preventive control can be administrative, physical, or technical.
Preventive controls exist to prevent compromise. This statement is true whether the control is administrative, technical, or physical. The ultimate purpose for these controls is to stop security breaches before they happen.
However, a good security design also prepares for failure, recognizing that prevention will not always work. Therefore, detective controls are also part of a comprehensive security program because they enable you to detect a security breach and to determine how the network was breached. With this knowledge, you should be able to better secure the data the next time.
With effective detective controls in place, the incident response can use the detective controls to figure out what went wrong, allowing you to immediately make changes to policies to eliminate a repeat of that same breach. Without detective controls, it is extremely difficult to determine what you need to change.
Deterrent controls are designed to scare away a certain percentage of adversaries to reduce the number of incidents. Cameras in bank lobbies are a good example of a deterrent control. The cameras most likely deter at least some potential bank robbers. The cameras also act as a detective control.
Need for Network Security
Business goals and risk analysis drive the need for network security. For a while, information security was influenced to some extent by fear, uncertainty, and doubt. Examples of these influences included the fear of a new worm outbreak, the uncertainty of providing web services, or doubts that a particular leading-edge security technology would fail. But we realized that regardless of the security implications, business needs had to come first.
If your business cannot function because of security concerns, you have a problem. The security system design must accommodate the goals of the business, not hinder them. Therefore, risk management involves answering two key questions:
- What does the cost-benefit analysis of your security system tell you?
- How will the latest attack techniques play out in your network environment?
Figure 1-2 illustrates the key factors you should consider when designing a secure network:
- Business needs: What does your organization want to do with the network?
- Risk analysis: What is the risk and cost balance?
- Security policy: What are the policies, standards, and guidelines that you need to address business needs and risks?
- Industry best practices: What are the reliable, well-understood, and recommended security best practices?
- Security operations: These operations include incident response, monitoring, maintenance, and auditing the system for compliance.
Figure 1-2. Factors Affecting the Design of a Secure Network
Risk management and security policies will be detailed later in this chapter.
When viewed from the perspective of motivation intersecting with opportunity, risk management can be driven not only by the techniques or sophistication of the attackers and threat vectors, but also by their motives. Research reveals that hackers are increasingly motivated by profit, where in the past they were motivated by notoriety and fame. In instances of attacks carried out for financial gains, hackers are not looking for attention, which makes their exploits harder to detect. Few signatures exist or will ever be written to capture these “custom” threats. In order to be successful in defending your environments, you must employ a new model to catch threats across the infrastructure.
Attackers are also motivated by government or industrial espionage. The Stuxnet worm, whose earliest versions appear to date to 2009, is an example. This worm differs from its malware “cousins” in that it has a specific, damaging goal: to traverse industrial control systems, such as supervisory control and data acquisition (SCADA) systems, so that it can reprogram the programmable logic controllers, possibly disrupting industrial operations.
This worm was not created to gather credit card numbers to sell off to the highest bidder, or to sell fake pharmaceuticals. This worm appears to have been created solely to invade public or private infrastructure. The cleverness of Stuxnet lies in its ability to traverse non-networked systems, which means that even systems unconnected to networks or the Internet are at risk.
Security experts have called Stuxnet “the smartest malware ever.” This worm breaks the malware mold because it is designed to disrupt industrial control systems in critical infrastructure. This ability should be a concern for every government.
Motivation can also be political or take the form of vigilantism. Anonymous is currently the best-known hacktivist group. As a recent example of its activities, in May 2012, Anonymous attacked the website of the Quebec government after the province passed a law imposing new requirements on the right of college and university students to protest.
The nature and sophistication of threats, as well as their pervasiveness and global nature, are trends to watch. Figure 1-3 shows how the threats that organizations face have evolved over the past few decades, and how the growth rate of vulnerabilities that are reported in operating systems and applications is rising. The number and variety of viruses and worms that have appeared over the past three years are daunting, and their rate of propagation is frightening. There have been unacceptable levels of business outages and expensive remediation projects that consume staff, time, and funds that were not originally budgeted for such tasks.
Figure 1-3. Shrinking Time Frame from Knowledge of Vulnerability to Release of Exploits
New exploits are designed to have global impact in minutes. Blended threats, which use multiple means of propagation, are more sophisticated than ever. The trends are becoming regional and global in nature. Early attacks affected single systems or one organization network, while attacks that are more recent are affecting entire regions. For example, attacks have expanded from individual denial of service (DoS) attacks from a single attacker against a single target, to large-scale distributed DoS (DDoS) attacks emanating from networks of compromised systems that are known as botnets.
Threats are also becoming persistent. After an attack starts, attacks may appear in waves as infected systems join the network. Because infections are so complex and have so many end users (employees, vendors, and contractors), multiple types of endpoints (company desktop, home, and server), and multiple types of access (wired, wireless, VPN, and dial-up), infections are difficult to eradicate.
More recent threat vectors are increasingly sophisticated, and the motivation of the attackers is reflected in their impact. Recent threat vectors include the following:
- Cognitive threats via social networks (likejacking): Social engineering takes a new meaning in the era of social networking. From phishing attacks that target social network accounts of high-profile individuals, to information exposure due to lack of policy, social networks have become a target of choice for malicious attackers.
- PDA and consumer electronics exploits: The operating systems on consumer devices (smartphones, PDAs, and so on) are a target of choice for high-volume attacks. The proliferation of applications for these operating systems, and the nature of the development and certification processes for those applications, compounds the problem.
- Widespread website compromises: Malicious attackers compromise popular websites, making the sites download malware to connecting users. Attackers typically are not interested in the data on the website, but use it as a springboard to infect the users of the site.
- Disruption of critical infrastructure: The Stuxnet malware, which exploits holes in Windows systems and targets a specific Siemens supervisory control and data acquisition (SCADA) program with sabotage, confirmed concerns about an increase in targeted attacks aimed at the power grid, nuclear plants, and other critical infrastructure.
- Virtualization exploits: Device and service virtualization add more complexity to the network. Attackers know this and are increasingly targeting virtual servers, virtual switches, and trust relationships at the hypervisor level.
- Memory scraping: Increasingly popular, this technique is aimed at fetching information directly from volatile memory. The attack tries to exploit operating systems and applications that leave traces of data in memory. Attacks are particularly aimed at encrypted information that may be processed as unencrypted in volatile memory.
- Hardware hacking: These attacks are aimed at exploiting the hardware architecture of specific devices, with consumer devices being increasingly popular. Attack methods include bus sniffing, altering firmware, and memory dumping to find crypto keys.
- IPv6-based attacks: These attacks could become more pervasive as the migration to IPv6 becomes widespread. Attackers are focusing initially on covert channels through various tunneling techniques, and on man-in-the-middle attacks that leverage IPv6 to exploit IPv4 in dual-stack deployments.
Trends Affecting Network Security
Other trends in business, technology, and innovation influence the need for new paradigms in information security. Mobility is one trend. Expect to see billions of new networked mobile devices moving into the enterprise worldwide over the next few years. Taking into consideration constant reductions and streamlining in IT budgets, organizations face serious challenges in supporting a growing number of mobile devices at a time when their resources are being reduced.
The second market transition is cloud computing and cloud services. Organizations of all kinds are taking advantage of offerings such as Software as a Service (SaaS) and Infrastructure as a Service (IaaS) to reduce costs and simplify the deployment of new services and applications.
These cloud services add challenges in visibility (how do you identify and mitigate threats that come to and from a trusted network?), control (who controls the physical assets, encryption keys, and so on?), and trust (do you trust cloud partners to ensure that critical application data is still protected when it is off the enterprise network?).
The third market transition is about changes to the workplace experience. Borders are blurring in the organization between consumers and workers and between the various functions within the organization. The borders between the company and its partners, customers, and suppliers, are also fading. As a result, the network is experiencing increasing demand to connect anyone, any device, anywhere, at any time.
These changes represent a challenge to security teams within the organization. These teams now need to manage noncontrolled consumer devices, such as a personal tablet, coming into the network, and provide seamless and context-aware services to users all over the world. The location of the data and services accessed by the users is almost irrelevant. The data could be internal to the organization or it could be in the cloud. This situation makes protecting data and services a challenging proposition.
Attacks are increasingly politically and financially motivated, driven by botnets, and aimed at critical infrastructure; for example:
- Botnets are used for spam, data theft, mail relays, or simply for denial-of-service attacks (ref: http://en.wikipedia.org/wiki/Botnet).
- Zeus botnets reached an estimated 3.6 million bots, infected workstations, or “zombies” (ref: http://www.networkworld.com/news/2009/072209-botnets.html).
- Stuxnet was aimed at industrial systems.
- Malware is downloaded inadvertently from online marketplaces.
One of the trends in threats is the exploitation of trust. Whether they are creating malware that can subvert industrial processes or tricking social network users into handing over login and password information, cybercriminals have a powerful weapon at their disposal: the exploitation of trust. Cybercriminals have become skilled at convincing users that their infected links and URLs are safe to click, and that they are someone the user knows and trusts. Hackers exploit the trust we have in TinyURLs and in security warning banners. With stolen security credentials, cybercriminals can freely interact with legitimate software and systems.
Nowhere is this tactic more widespread than within social networking, where cybercriminals continue to attract victims who are willing to share information with people they believe are known to them, with malware such as Koobface. One noticeable shift in social engineering is that criminals are spending more time figuring out how to assume someone’s identity, perhaps by generating emails from an individual’s computer or social networking account. A malware-laden email or scam sent by a “trusted person” is more likely to elicit a click-through response than the same message sent by a stranger.
Threats originating from countries outside of the United States are rapidly increasing. Global annual spam volumes actually dropped in 2010, the first time this has happened in the history of the Internet. However, spam now originates from increasingly varied locations and countries.
Money muling is the practice of hiring individuals as “mules,” recruited by handlers or “wranglers” to set up bank accounts, or even use their own bank accounts, to assist in the transfer of money from the account of a fraud victim to another location, usually overseas, via a wire transfer or automated clearing house (ACH) transaction. Money mule operations often involve individuals in multiple countries.
Web malware is definitely on the rise. The number of distinct domains that are compromised to download malware to connecting users is increasing dramatically. The most dangerous aspect of this type of attack is the fact that users do not need to do much to get infected. Many times, the combination of malware on the website and vulnerabilities on web browsers is enough to provoke infection just by connecting to the website. The more popular the site, the higher the volume of potential infection.
Recently there have been major shifts in the compliance landscape. Although enforcement of existing regulations has been weak in many jurisdictions worldwide, regulators and standards bodies are now tightening enforcement through expanded powers, higher penalties, and harsh enforcement actions. In the future it will be more difficult to hide failures in information security wherever organizations do business. Legislators are forcing transparency through the introduction of breach notification laws in Europe, Asia, and North America as data breach disclosure becomes a global principle.
As more regulations are introduced, there is a trend toward increasingly prescriptive rules. For example, recent amendments introduced in the United Kingdom in 2011 bring arguably more prescriptive information protection regulations to the Privacy and Electronic Communications Directive. Such laws are discussed in more detail later in this chapter. Any global enterprise that does business in the United Kingdom today will likely be covered by these regulations. Lately, regulators are also making it clear that enterprises are responsible for ensuring the protection of their data when it is being processed by a business partner, including cloud service providers. The new era of compliance creates formidable challenges for organizations worldwide.
For many organizations, stricter compliance could help focus management attention on security, but if managers take a “check-list approach” to compliance, it will detract from actually managing risk and may not improve security. The new compliance landscape will increase costs and risks. For example, it takes time and resources to substantiate compliance. Increased requirements for service providers give rise to more third-party risks.
With more transparency, there are now greater consequences for data breaches. For example, expect to see more litigation as customers and business partners seek compensation for compromised data. But the harshest judgments will likely come from the court of public opinion, with the potential to permanently damage an enterprise’s reputation.
The following are some of the U.S. and international regulations that many companies are subject to:
- Sarbanes-Oxley (SOX)
- Federal Information Security Management Act (FISMA)
- Gramm-Leach-Bliley Act (GLBA)
- Payment Card Industry Data Security Standard (PCI DSS)
- Health Insurance Portability and Accountability Act (HIPAA)
- Digital Millennium Copyright Act (DMCA)
- Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada
- European Union Data Protection Directive (EU 95/46/EC)
- Safe Harbor Act - European Union and United States
- International Convergence of Capital Measurement and Capital Standards (Basel II)
The challenge becomes to comply with these regulations and, at the same time, make that compliance translate into an effective security posture.
Adversaries, Methodologies, and Classes of Attack
Who are hackers? What motivates them? How do they conduct their attacks? How do they manage to breach the measures we have in place to ensure confidentiality, integrity, and availability? Which best practices can we adopt to defeat hackers? These are some of the questions we try to answer in this section.
People are social beings, and it is quite common for systems to be compromised through social engineering. Harm can be caused by people just trying to be “helpful.” For example, in an attempt to be helpful, people have been known to give their passwords over the phone to attackers who have a convincing manner and say they are troubleshooting a problem and need to test access using a real user password. End users must be trained, and reminded, that the ultimate security of a system depends on their behavior.
Of course, people often cause harm within organizations intentionally: most security incidents are caused by insiders. Thus, strong internal controls on security are required, and special organizational practices might need to be implemented.
An example of a special organizational practice that helps to provide security is the separation of duty, where critical tasks require two or more persons to complete them, thereby reducing the risk of insider threat. People are less likely to attack or misbehave if they are required to cooperate with others.
Unfortunately, users frequently consider security too difficult to understand. Software often does not make security options or decisions easy for end users. Also, users typically prefer “whatever” functionality to no functionality. Implementation of security measures should not create an internally generated DoS: if security is too stringent or too cumbersome for users, either they will not have access to all the resources needed to perform their work or their performance will be hindered by the security operations.
To defend against attacks on information and information systems, organizations must begin to define the threat by identifying potential adversaries. These adversaries can include the following:
- Nations or states
- Corporate competitors
- Disgruntled employees
- Government agencies, such as the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI)
Hackers comprise the most well-known outside threat to information systems. They are not necessarily geniuses, but they are persistent people who have taken a lot of time to learn their craft.
Many titles are assigned to hackers:
- Hackers: Hackers are computer enthusiasts who break into networks and systems to learn more about them. Many hackers mean no harm and do not expect financial gain. Unfortunately, hackers may unintentionally pass valuable information on to people who do intend to harm the system. Hackers are subdivided into the following categories:
- White hat (ethical hacker)
- Blue hat (bug tester)
- Gray hat (ethically questionable hacker)
- Black hat (unethical hacker)
- Crackers (criminal hackers): Crackers are hackers with a criminal intent to harm information systems. Crackers are generally working for financial gain and are sometimes called black hat hackers.
- Phreakers (phone breakers): Phreakers pride themselves on compromising telephone systems. Phreakers reroute and disconnect telephone lines, sell wiretaps, and steal long-distance services.
- Script kiddies: Script kiddies think of themselves as hackers, but have very low skill levels. They do not write their own code; instead, they run scripts written by other, more skilled attackers.
- Hacktivists: Hacktivists are individuals who have a political agenda in doing their work. When government websites are defaced, this is usually the work of a hacktivist.
The goal of any hacker is to compromise the intended target or application. Hackers begin with little or no information about the intended target, but by the end of their analysis, they have accessed the network and have begun to compromise their target. Their approach is usually careful and methodical, not rushed and reckless. The seven-step process that follows is a good representation of the methods that hackers use:
- Step 1. Perform footprint analysis (reconnaissance).
- Step 2. Enumerate applications and operating systems.
- Step 3. Manipulate users to gain access.
- Step 4. Escalate privileges.
- Step 5. Gather additional passwords and secrets.
- Step 6. Install back doors.
- Step 7. Leverage the compromised system.
To successfully hack into a system, hackers generally first want to know as much as they can about the system. Hackers can build a complete profile, or “footprint,” of the company security posture. Using a range of tools and techniques, an attacker can discover the company domain names, network blocks, IP addresses of systems, ports and services that are used, and many other details that pertain to the company security posture as it relates to the Internet, an intranet, remote access, and an extranet. By following some simple advice, network administrators can make footprinting more difficult.
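To make the footprinting step concrete, the following minimal sketch probes a handful of common hostnames with ordinary DNS lookups. The domain and hostname list are placeholders invented for illustration; real footprinting goes far beyond simple name resolution.

```python
# Hypothetical footprinting sketch: enumerate a few common hostnames for a
# domain using standard DNS lookups. The domain and host list are invented.
import socket

DOMAIN = "example.com"                      # assumed target domain
COMMON_HOSTS = ["www", "mail", "ftp", "vpn", "intranet"]

for prefix in COMMON_HOSTS:
    fqdn = f"{prefix}.{DOMAIN}"
    try:
        address = socket.gethostbyname(fqdn)  # forward DNS (A record) lookup
        print(f"{fqdn:<25} -> {address}")
    except socket.gaierror:
        # No DNS record: the name does not resolve publicly.
        print(f"{fqdn:<25} -> no public DNS record")
```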
After hackers have completed a profile, or footprint, of your organization, they use tools such as those in the list that follows to enumerate additional information about your systems and networks; a simple banner-grabbing sketch in the same spirit appears after the list. All these tools are readily available to download, and the security staff should know how these tools work. Additional tools (introduced later in the “Security Testing Techniques” section) can also be used to gather information and mount attacks.
- Netcat: Netcat is a featured networking utility that reads and writes data across network connections.
- Microsoft EPDump and Microsoft Remote Procedure Call (RPC) Dump: These tools provide information about Microsoft RPC services on a server.
- GetMAC: This application provides a quick way to find the MAC (Ethernet) layer address and binding order for a computer running Microsoft Windows locally or across a network.
- Software development kits (SDK): SDKs provide hackers with the basic tools that they need to learn more about systems.
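As a taste of what these utilities do, here is a minimal Netcat-style banner grab in Python. The address (from the 192.0.2.0/24 documentation range) and port are placeholders, and such probes should be run only against systems you are authorized to test.

```python
# Minimal banner-grabbing sketch in the spirit of Netcat: connect to a TCP
# port and read whatever the service volunteers about itself.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and return whatever banner it volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return "(no banner sent before timeout)"

# Many FTP, SMTP, and SSH servers identify themselves as soon as you connect.
print(grab_banner("192.0.2.10", 21))   # placeholder address and port
```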
Another common technique that hackers use is to manipulate users of an organization to gain access to that organization. There are countless cases of unsuspecting employees providing information to unauthorized people simply because the requesters appear innocent or to be in a position of authority. Hackers find names and telephone numbers on websites or domain registration records by footprinting. Hackers then directly contact these people by phone and convince them to reveal passwords. Hackers gather information without raising any concern or suspicion. This form of attack is called social engineering. One form of a social engineering attack is for the hacker to pose as a visitor to the company, a delivery person, a service technician, or some other person who might have a legitimate reason to be on the premises and, after gaining entrance, walk by cubicles and look under keyboards to see whether anyone has put a note there containing the current password.
The next thing the hacker typically does is review all the information that they have collected about the host, searching for usernames, passwords, and Registry keys that contain application or user passwords. This information can help hackers escalate their privileges on the host or network. If reviewing the information from the host does not reveal useful information, hackers may launch a Trojan horse attack in an attempt to escalate their privileges on the host. This type of attack usually means copying malicious code to the user system and giving it the same name as a frequently used piece of software.
After the hacker has obtained higher privileges, the next task is to gather additional passwords and other sensitive data. The targets now include such things as the local security accounts manager database or the Active Directory of a domain controller. Hackers use legitimate tools such as pwdump and lsadump applications to gather passwords from machines running Windows, which then can be cracked with the very popular Cain & Abel software tool. By cross-referencing username and password combinations, the hacker is able to obtain administrative access to all the computers in the network.
If hackers are detected trying to enter through the “front door,” or if they want to enter the system without being detected, they try to use “back doors” into the system. A back door is a method of bypassing normal authentication to secure remote access to a computer while attempting to remain undetected. The most common backdoor point is a listening port that provides remote access to the system for users (hackers) who do not have, or do not want to use, access or administrative privileges.
After hackers gain administrative access, they enjoy hacking other systems on the network. As each new system is hacked, the attacker performs the steps that were outlined previously to gather additional system and password information. Hackers try to scan and exploit a single system or a whole set of networks and usually automate the whole process.
In addition, hackers will cover their tracks either by deleting log entries or falsifying them.
In classifying security threats, it is common to find general categories that resemble the perspective of the attacker and the approaches that are used to exploit software. Attack patterns are a powerful mechanism to capture and communicate the perspective of the attacker. These patterns are descriptions of common methods for exploiting vulnerabilities. The patterns derive from the concept of design patterns that are applied in a destructive rather than constructive context and are generated from in-depth analysis of specific, real-world exploit examples. The following list illustrates examples of threat categories that are based on this criterion. Notice that some threats are not malicious attacks. Examples of nonmalicious threats include forces of nature such as hurricanes and earthquakes.
Later in this chapter, you learn about some of the general categories under which threats can be regrouped, such as:
- Enumeration and fingerprinting
- Spoofing and impersonation
- Overt and covert channels
- Blended threats and malware
- Exploitation of privilege and trust
- Password attacks
- Availability attacks
- Denial of service (DoS)
- Physical security attacks
- Forces of nature
To assist in enhancing security throughout the security lifecycle, there are many publicly available classification databases that provide a catalog of attack patterns and classification taxonomies. They are aimed at providing a consistent view and method for identifying, collecting, refining, and sharing attack patterns for specific communities of interest. The following are four of the most prominent databases:
- Common Attack Pattern Enumeration and Classification (CAPEC): Sponsored by the U.S. Department of Homeland Security as part of the software assurance strategic initiative of the National Cyber Security Division, the objective of this effort is to provide a publicly available catalog of attack patterns along with a comprehensive schema and classification taxonomy. More information can be found at http://capec.mitre.org.
- Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS): OWASP is a not-for-profit worldwide charitable organization focused on improving the security of application software. The primary objective of ASVS is to normalize the range in the coverage and level of rigor available in the market when it comes to performing web application security verification using a commercially workable open standard. More information can be found at https://www.owasp.org.
- Web Application Security Consortium Threat Classification (WASC TC): Sponsored by the WASC, this is a cooperative effort to clarify and organize the threats to the security of a website. The project is aimed at developing and promoting industry-standard terminology for describing these issues. Application developers, security professionals, software vendors, and compliance auditors have the ability to access a consistent language and definitions for web security-related issues. More information can be found at http://www.webappsec.org.
- Malware Attribute Enumeration and Characterization (MAEC): Created by MITRE, this effort is international in scope and free for public use. MAEC is a standardized language for encoding and communicating high-fidelity information about malware based on attributes such as behaviors, artifacts, and attack patterns. More information can be found at http://maec.mitre.org.
Enumeration and Fingerprinting with Ping Sweeps and Port Scans
Enumeration and fingerprinting are types of attacks that use legitimate tools for illegitimate purposes. Some of the tools, such as port-scan and ping-sweep applications, run a series of tests against hosts and devices to identify vulnerable services that need attention. IP addresses and port or banner data from both TCP and UDP ports are examined to gather information.
In an illegitimate situation, a port scan is a series of messages sent by someone attempting to break into a computer to learn which computer network services (each service is associated with a well-known port number) the computer provides. Port scanning can be automated to scan a range of TCP or UDP port numbers on a host to detect listening services. Port scanning, a favorite computer hacker approach, provides information to the hacker about where to probe for weaknesses. Essentially, a port scan consists of sending a message to each port, one at a time. The kind of response received indicates whether the port is being used and needs further probing.
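The sketch below shows this idea in its simplest form: a sequential TCP connect scan. The target address is a placeholder from the documentation range; real scanners are far faster and stealthier, but the send-a-message-to-each-port logic is the same.

```python
# Bare-bones TCP connect scan: try each port in turn and report which ones
# accept a connection. Scan only hosts you are authorized to test.
import socket

TARGET = "192.0.2.10"   # placeholder address

for port in range(20, 1025):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    # connect_ex() returns 0 when the TCP handshake succeeds (port open).
    if sock.connect_ex((TARGET, port)) == 0:
        print(f"Port {port}/tcp is open")
    sock.close()
```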
A ping sweep, also known as an Internet Control Message Protocol (ICMP) sweep, is a basic network-scanning technique that is used to determine which IP addresses map to live hosts (computers). A ping sweep consists of ICMP echo-requests (pings) sent to multiple hosts, whereas a single ping consists of ICMP echo-requests that are sent to one specific host computer. If a given address is live, that host returns an ICMP echo-reply. The goal of the ping sweep is to find hosts available on the network to probe for vulnerabilities. Ping sweeps are among the oldest and slowest methods that are used to scan a network.
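A ping sweep can be sketched just as simply. The version below shells out to the operating system's ping command once per address, which is exactly why the technique is slow; the subnet is a placeholder and the flags shown are the Linux form (Windows uses `-n 1`, for example).

```python
# Sequential ping sweep sketch: one ICMP echo-request per address in a /24.
import subprocess

SUBNET = "192.0.2."   # placeholder documentation range

for host in range(1, 255):
    address = SUBNET + str(host)
    # Send one echo-request with a one-second timeout; a zero exit code
    # means an echo-reply came back, i.e., the host is live.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode == 0:
        print(f"{address} is alive")
```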
IP Spoofing Attacks
The prime goal of an IP spoofing attack is to establish a connection that allows the attacker to gain root access to the host and to create a backdoor entry path into the target system.
IP spoofing is a technique used to gain unauthorized access to computers whereby the intruder sends messages to a computer with an IP address that indicates the message is coming from a trusted host. The attacker learns the IP address of a trusted host and modifies the packet headers so that it appears that the packets are coming from that trusted host.
At a high level, the concept of IP spoofing is easy to comprehend. Routers determine the best route between distant computers by examining the destination address, and ignore the source address. In a spoofing attack, an attacker outside your network pretends to be a trusted computer by using a trusted internal or external IP address.
If an attacker manages to change the routing tables to divert network packets to the spoofed IP address, the attacker can receive all the network packets addressed to the spoofed address and reply just as any trusted user can.
IP spoofing can also provide access to user accounts and passwords. For example, an attacker can emulate one of your internal users in ways that prove embarrassing for your organization. The attacker could send email messages to business partners that appear to have originated from someone within your organization. Such attacks are easier to perpetrate when an attacker has a user account and password, but they are also possible when attackers combine simple spoofing attacks with their knowledge of messaging protocols.
A rudimentary use of IP spoofing also involves bombarding a site with IP packets or ping requests, spoofing the source address with the registered public address of a third party. When the destination host receives the requests, it responds to what appears to be a legitimate request. If multiple hosts are attacked with spoofed requests, their collective replies to the third-party spoofed IP address create an unsupportable flood of packets, thus creating a DoS attack.
The basis of IP spoofing during a TCP communication lies in an inherent security weakness known as sequence prediction. Hackers can guess or predict the TCP sequence numbers that are used to construct a TCP packet without receiving any responses from the server. Their prediction allows them to spoof a trusted host on a local network. To mount an IP spoofing attack, the hacker listens to communications between two systems. The hacker sends packets to the target system with the source IP address of the trusted system, as shown in Figure 1-5.
Figure 1-5. Sequence Number Prediction
If the packets from the hacker have the sequence numbers that the target system is expecting, and if these packets arrive before the packets from the real, trusted system, the hacker becomes the trusted host.
To engage in IP spoofing, hackers must first use a variety of techniques to find an IP address of a trusted host and then modify their packet headers to appear as though packets are coming from that trusted host. Further, the attacker can engage other unsuspecting hosts to generate traffic that appears as though it too is coming from the trusted host, thus flooding the network.
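For illustration only, the packet-crafting half of such an attack can be sketched with the Scapy library, which is not part of this chapter's toolset. Every address, port, and sequence number below is invented, and sending raw packets requires root privileges and explicit authorization.

```python
# Conceptual sketch of the spoofing step: craft a TCP segment whose source
# address and sequence number claim to be the trusted host.
from scapy.all import IP, TCP, send   # pip install scapy; root needed to send

spoofed = IP(src="192.0.2.50", dst="192.0.2.99") / TCP(
    sport=1023,
    dport=513,            # rlogin, the classic target of this style of attack
    flags="A",
    seq=778933536,        # the attacker's *guess* at the expected sequence number
)
send(spoofed, verbose=False)
```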
Trust exploitation refers to an individual taking advantage of a trust relationship within a network.
As an example of trust exploitation, consider the network shown in Figure 1-6, where system A is in the demilitarized zone (DMZ) of a firewall. System B, located on the inside of the firewall, trusts System A. When a hacker on the outside network compromises System A in the DMZ, the attacker can leverage the trust relationship between the two systems to gain access to System B.
Figure 1-6. Trust Exploitation
A DMZ can be seen as a semi-secure segment of your network. A DMZ is typically used to give outside users access to corporate resources, because these users are not allowed to reach inside servers directly. However, a DMZ server might be allowed to reach inside resources directly. In a trust exploitation attack, a hacker could hack a DMZ server and use it as a springboard to reach the inside network.
Several trust models may exist in a network:
- Active Directory
- Linux and UNIX
- Network File System (NFS)
- Network Information Services Plus (NIS+)
Password attacks can be implemented using several methods, including brute-force attacks, Trojan horse programs, IP spoofing, keyloggers, packet sniffers, and dictionary attacks. Although packet sniffers and IP spoofing can yield user accounts and passwords, password attacks usually refer to repeated attempts to identify a user account, password, or both. These repeated attempts are called brute-force attacks.
To execute a brute-force attack, an attacker can use a program that runs across the network and attempts to log in to a shared resource, such as a server. When an attacker gains access to a resource, the attacker has the same access rights as the rightful user. If this account has sufficient privileges, the attacker can create a back door for future access, without concern for any status and password changes to the compromised user account.
Just as with packet sniffers and IP spoofing attacks, a brute-force password attack can provide access to accounts that attackers then use to modify critical network files and services. For example, an attacker compromises your network integrity by modifying your network routing tables. This trick reroutes all network packets to the attacker before transmitting them to their final destination. In such a case, an attacker can monitor all network traffic, effectively becoming a man in the middle.
Passwords present a security risk if they are stored as plain text. Thus, passwords must be encrypted in order to avoid risks. On most systems, passwords are processed through an encryption algorithm that generates a one-way hash on passwords. You cannot reverse a one-way hash back to its original text. Most systems do not decrypt the stored password during authentication; they store the one-way hash. During the login process, you supply an account and password, and the password encryption algorithm generates a one-way hash. The algorithm compares this hash to the hash stored on the system. If the hashes are the same, the algorithm assumes that the user supplied the proper password.
Remember that passing the password through an algorithm results in a password hash. The hash is not the encrypted password, but rather a result of the algorithm. The strength of the hash is such that the hash value can be re-created only by using the original user and password information, and that it is impossible to retrieve the original information from the hash. This strength makes hashes perfect for encoding passwords for storage. In granting authorization, the hashes, rather than the plain-text password, are calculated and compared.
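A toy Python sketch makes this compare-the-hashes flow concrete. Unsalted SHA-256 is used only to keep the example short; production systems should use salted, deliberately slow algorithms such as bcrypt or PBKDF2.

```python
# Toy illustration of one-way password verification: the stored hash is
# never reversed; the supplied password is hashed and the hashes compared.
import hashlib

def hash_password(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

stored_hash = hash_password("S3cr3t!")   # what the system keeps on disk

def login(supplied_password: str) -> bool:
    return hash_password(supplied_password) == stored_hash

print(login("S3cr3t!"))   # True
print(login("guess"))     # False
```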
Hackers use many tools and techniques to crack passwords:
- Word lists: These programs use lists of words, phrases, or other combinations of letters, numbers, and symbols that computer users often use as passwords. Hackers enter word after word at high speed (called a dictionary attack) until they find a match; a short sketch of this technique follows the list.
- Brute force: This approach relies on power and repetition. It compares every possible combination and permutation of characters until it finds a match. Brute force eventually cracks any password, but it might take a long, long time. Brute force is an extremely slow process because it uses every conceivable character combination.
- Hybrid crackers: Some password crackers mix the two techniques. This combines the best of both methods and is highly effective against poorly constructed passwords.
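The sketch below shows the word-list (dictionary) technique referenced in the first item. The captured hash and word list are invented, and unsalted SHA-256 is again used purely for brevity.

```python
# Dictionary attack sketch: hash each candidate word and compare it to a
# captured password hash. All values here are invented for illustration.
import hashlib

captured_hash = hashlib.sha256(b"dragon").hexdigest()   # pretend this was stolen
word_list = ["password", "letmein", "dragon", "qwerty"]

for candidate in word_list:
    if hashlib.sha256(candidate.encode()).hexdigest() == captured_hash:
        print(f"Match found: {candidate}")
        break
else:
    print("No match in word list")
```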
Password cracking attacks any application or service that accepts user authentication, including the following:
- NetBIOS over TCP (TCP 139)
- Direct host (TCP 445)
- FTP (TCP 21)
- Telnet (TCP 23)
- Simple Network Management Protocol (SNMP) (UDP 161)
- Point-to-Point Tunneling Protocol (PPTP) (TCP 1723)
- Terminal services (TCP 3389)
Confidentiality and Integrity Attacks
Confidentiality breaches can occur when an attacker attempts to obtain access to read sensitive data. These attacks can be extremely difficult to detect because the attacker can copy sensitive data without the knowledge of the owner and without leaving a trace.
A confidentiality breach can occur simply because of incorrect file protections. For instance, a sensitive file could mistakenly be given global read access. Unauthorized copying or examination of the file would probably be difficult to track without having some type of audit mechanism running that logs every file operation. If a user had no reason to suspect unwanted access, however, the audit file would probably never be examined.
In Figure 1-7, the attacker is able to compromise an exposed web server. Using this server as a beachhead, the attacker then gains full access to the database server from which customer data is downloaded. The attacker then uses information from the database, such as a username, password, and email address, to intercept and read sensitive email messages destined for a user in the branch office. This attack is difficult to detect because the attacker did not modify or delete any data. The data was only read and downloaded. Without some kind of auditing mechanism on the server, it is unlikely that this attack will be discovered.
Figure 1-7. Breach of Confidentiality
Attackers can use many methods to compromise confidentiality, the most common of which are as follows:
- Ping sweeps and port scanning: Searching a network host for open ports.
- Packet sniffing: Intercepting and logging traffic that passes over a digital network or part of a network.
- Emanations capturing: Capturing electrical transmissions from the equipment of an organization to deduce information regarding the organization.
- Overt channels: Listening on obvious and visible communications. Overt channels can be used for covert communication.
- Covert channels: Hiding information within a transmission channel that is based on encoding data using another set of events.
- Wiretapping: Monitoring the telephone or Internet conversations of a third party, often covertly.
- Social engineering: Using social skills or relationships to manipulate people inside the network to provide the information needed to access the network.
- Dumpster diving: Searching through company dumpsters or trash cans looking for information, such as phone books, organization charts, manuals, memos, charts, and other documentation that can provide a valuable source of information for hackers.
- Phishing: Attempting to criminally acquire sensitive information, such as usernames and passwords, by masquerading as trustworthy entities.
- Pharming: Redirecting the traffic of a website to another, rogue website.
Many of these methods are used to compromise more than confidentiality. They are often elements of attacks on integrity and availability.
A complex form of IP spoofing is the man-in-the-middle attack, in which the hacker monitors the traffic that crosses the network and inserts himself as a stealth intermediary between the sender and the receiver, as shown in Figure 1-8.
Figure 1-8. IP Source Routing Attack
Hackers use man-in-the-middle attacks to perform many security violations:
- Theft of information
- Hijacking of an ongoing session to gain access to your internal network resources
- Analysis of traffic to derive information about your network and its users
- Corruption of transmitted data
- Introduction of new information into network sessions
Attacks are blind or nonblind. A blind attack interferes with a connection from outside the communication path, where sequence and acknowledgment numbers cannot be observed. A nonblind attack interferes with connections that cross wiring the hacker can sniff. A good example of a blind attack can be found at http://wiki.cas.mcmaster.ca/index.php/The_Mitnick_attack.
TCP session hijacking is a common variant of the man-in-the-middle attack. The attacker sniffs to identify the client and server IP addresses and relative port numbers. The attacker modifies his or her packet headers to spoof TCP/IP packets from the client, and then waits to receive an ACK packet from the client communicating with the server. The ACK packet contains the sequence number of the next packet that the client is expecting. The attacker replies to the client using a modified packet with the source address of the server and the destination address of the client. This packet results in a reset that disconnects the legitimate client. The attacker takes over communications with the server by spoofing the expected sequence number from the ACK that was previously sent from the legitimate client to the server. (This could also be an attack against confidentiality.)
Another clever man-in-the-middle attack is for the hacker to successfully introduce himself as the DHCP server on the network, providing his own IP address as the default gateway during the DHCP offer.
Overt and Covert Channels
Overt and covert channels refer to the capability to hide information within or using other information:
- Overt channel: A transmission channel that is based on tunneling one protocol inside of another. It could be a clear-text transmission inserted inside another clear-text protocol header.
- Covert channel: A transmission channel that is based on encoding data using another set of events. The data is concealed.
There are numerous ways that Internet protocols and the data that is transferred over them can provide overt and covert channels. The bad news is that firewalls generally cannot detect these channels; therefore, attackers can use them to receive confidential information in an unauthorized manner.
With an overt channel, one protocol is tunneled within another to bypass the security policy; for example, Telnet over FTP, instant messaging over HTTP, and IP over Post Office Protocol version 3 (POP3). Another example of an overt channel is using watermarks in JPEG images to leak confidential information.
One common use of an overt channel is for instant messaging (IM). Most organization firewalls allow outbound HTTP but block IM. A user on the inside of the network can leak confidential information using IM over an HTTP session.
In Figure 1-9, the firewall allows outbound HTTP while a user on the inside of the network is leaking confidential information using instant messaging over HTTP.
Figure 1-9. Overt Channel
Steganography is another example of an overt channel. Steganography (from the Greek word steganos, meaning “covered” or “secret”) literally means covered or secret writing. The combination of CPU power and interest in privacy has led to the development of techniques for hiding messages in digital pictures and digitized audio.
For example, certain bits of a digital graphic can be used to hide messages. The key to knowing which bits are special is shared between two parties that want to communicate privately. The private message typically has so few bits relative to the total number of bits in the image that changing them is not visually noticeable. Without a direct comparison of the original and the processed image, it is practically impossible to tell that anything has been changed. Still, it might be detected by statistical analysis that detects non-randomness. This non-randomness in a file indicates that information is being passed inside of the file.
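To make the bit-twiddling concrete, the following sketch hides a message in the least significant bit of each byte of some carrier data. A plain bytearray stands in for decoded image pixels, so this illustrates the idea only; it is not a working image tool.

```python
# Least-significant-bit (LSB) hiding sketch over raw carrier bytes.
def hide(carrier: bytearray, message: bytes) -> bytearray:
    # Flatten the message into bits, most significant bit first.
    bits = [(byte >> shift) & 1 for byte in message for shift in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    for i, bit in enumerate(bits):
        carrier[i] = (carrier[i] & 0xFE) | bit   # overwrite only the LSB
    return carrier

def reveal(carrier: bytes, length: int) -> bytes:
    bits = [byte & 1 for byte in carrier[: length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

pixels = bytearray(range(256)) * 4               # fake "image" data
stego = hide(pixels, b"meet at dawn")
print(reveal(stego, len(b"meet at dawn")))       # b'meet at dawn'
```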
With a covert channel, information is encoded as another set of events. For example, an attacker could install a Trojan horse on a target host. The Trojan horse could be written to send binary information back to the server of the attacker. The client, infected with the Trojan horse, could return to the hacker’s server a ping status report in a binary format, where a 0 would represent a successful ping over a one-minute period, and a 1 would represent two successful pings over a one-minute period. The hacker could keep connectivity statistics for all the compromised clients he has around the world.
If ICMP is not permitted through a firewall, another tactic is to have the client visit the web page of the attacker. The Trojan horse software, now installed on the client, has a “call home” feature that automatically opens a connection to TCP port 80 at a specific IP address, the address of the hacker’s web server. All of this work is done so that the hacker can keep precise statistics of how many compromised workstations he possesses around the world. One visit per day would be represented by a 1, and no visits would be represented by a 0. As you might imagine, this technique is usually quite limited in bandwidth.
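The following toy model simulates that scheme: the data is carried purely by the pattern of events (a visit or silence per time slot), never by any payload, which is also why the bandwidth is so poor, on the order of one bit per slot. Everything here is simulated; no network traffic is generated.

```python
# Toy covert channel: encode bits as the presence or absence of an event.
message_bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Attacker side: turn bits into a schedule of "visit" / "silence" days.
schedule = ["visit" if bit else "silence" for bit in message_bits]
print("Day-by-day activity:", schedule)

# Observer side: the hacker's web server log shows only which days had a
# hit; decoding is just reading the event pattern back into bits.
decoded = [1 if event == "visit" else 0 for event in schedule]
assert decoded == message_bits
print("Recovered bits:", decoded)
```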
Phishing, Pharming, and Identity Theft
Identity theft continues to be a problem. In computing, phishing is an attempt to criminally acquire sensitive information, such as usernames, passwords, and credit card details, by masquerading as a trustworthy entity. Phishing is typically carried out by email or instant message (IM), although sometimes phone contact is attempted; the phisher often directs users to enter details at a website, as shown on the left in Figure 1-10. Phishing is an example of social engineering.
Figure 1-10. Phishing and Pharming Attacks
Pharming, also illustrated in Figure 1-10, is an attack aimed at redirecting the traffic of a website to another website. Pharming is conducted either by changing the hosts file on a victim computer or by exploiting a vulnerable Domain Name System (DNS) server. Pharming has become a major concern to businesses hosting e-commerce and online banking websites.
To protect against pharming, organizations implement “personalization” technologies, such as user-chosen images on the login page. Consider also supporting identified email initiatives such as DomainKeys Identified Mail (DKIM); these initiatives are beyond the scope of this book.
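On the defensive side, one simple check, sketched below and not taken from the text, is to scan the local hosts file for entries that pin well-known domains to unexpected addresses, since a modified hosts file is one of the two pharming vectors described above. The domain names are placeholders, and the path shown is the UNIX location (Windows uses C:\Windows\System32\drivers\etc\hosts).

```python
# Defensive sketch: flag hosts-file entries that hijack sensitive domains.
SENSITIVE = {"www.examplebank.com", "login.example.com"}   # placeholder names

with open("/etc/hosts") as hosts_file:
    for line in hosts_file:
        line = line.split("#")[0].strip()     # drop comments and blank lines
        if not line:
            continue
        address, *names = line.split()
        hijacked = SENSITIVE.intersection(names)
        if hijacked:
            print(f"Warning: {', '.join(hijacked)} pinned to {address}")
```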
DoS attacks attempt to compromise the availability of a network, host, or application. They are considered a major risk because they can easily interrupt a business process and cause significant loss. These attacks are relatively simple to conduct, even by an unskilled attacker.
DoS attacks are usually the consequence of one of the following:
- The failure of a host or application to handle an unexpected condition, such as maliciously formatted input data or an unexpected interaction of system components.
- The inability of a network, host, or application to handle an enormous quantity of data, which crashes the system or brings it to a halt. Even if the firewall protects the corporate web server sitting on the DMZ from receiving a large amount of data and thus from crashing, the link connecting the corporation with its service provider will be totally clogged, and this bandwidth starvation will itself be a DoS.
Hackers can use many types of attacks to compromise availability:
- SYN floods
- ICMP floods
- Electrical power
- Computer environment
Botnet is a term for a collection of software robots, or bots, that run autonomously and automatically. They run on groups of “zombie” computers controlled by crackers.
Although the term botnet can be used to refer to any group of bots, it is generally used to refer to a collection of compromised systems running worms, Trojan horses, or back doors, under a common command and control infrastructure. The originator of a botnet controls the group of computers remotely, usually through a means such as Internet Relay Chat (IRC).
Often, the command and control takes place via an IRC server or a specific channel on a public IRC network. A bot typically runs hidden. Generally, the attacker has compromised a large number of systems using various methods, such as exploits, buffer overflows, and so on. Newer bots automatically scan their environment and propagate using detected vulnerabilities and weak passwords. Sometimes a controller will hide an IRC server installation on an educational or corporate site, where high-speed connections can support a large number of other bots.
Several botnets have been found and removed from the Internet. The Dutch police found a 1.5-million node botnet (http://www.wisegeek.com/what-is-a-botnet.htm), and the Norwegian ISP Telenor disbanded a 10,000-node botnet. Large, coordinated international efforts to shut down botnets have also been initiated. Some estimates indicate that up to 25 percent of all personal computers are part of a botnet (http://everything.explained.at/Botnet/).
DoS and DDoS Attacks
DoS attacks are the most publicized form of attack. They are also among the most difficult to eliminate. A DoS attack on a server sends an extremely large volume of requests over a network or the Internet. These large volumes of requests cause the attacked server to slow down dramatically. Consequently, the attacked server becomes unavailable for legitimate access and use.
DoS attacks differ from most other attacks because DoS attacks do not try to gain access to your network or the information on your network. These attacks focus on making a service unavailable for normal use. Attackers typically accomplish this by exhausting some resource limitation on the network or within an operating system or application. These attacks typically require little effort to execute because they either take advantage of protocol weaknesses or use traffic normally allowed into a network. DoS attacks are among the most difficult to completely eliminate because of the way they use protocol weaknesses and accepted traffic to attack a network. Some hackers regard DoS attacks as trivial and in bad form because they require so little effort to execute. Still, because of their ease of implementation and potentially significant damage, DoS attacks deserve special attention from security administrators.
System administrators can install software fixes to limit the damage caused by all known DoS attacks. However, as with viruses, hackers constantly develop new DoS attacks.
A DDoS attack generates much higher levels of flooding traffic by using the combined bandwidth of multiple machines to target a single machine or network. The DDoS attack enlists a network of compromised machines that contain a remotely controlled agent, or zombie, attack program. A master control mechanism provides direction and control. When the zombies receive instructions from the master agent, they each begin generating malicious traffic aimed at the victim.
DDoS attacks are the “next generation” of DoS attacks on the Internet. This type of attack is not new. UDP and TCP SYN flooding, ICMP echo-request floods, and ICMP directed broadcasts (also known as Smurf attacks) are similar to DDoS attacks; however, the scope of the attack is new. Victims of DDoS attacks experience packet flooding from many different sources, possibly spoofed IP source addresses, which brings their network connectivity to a grinding halt. In the past, the typical DoS attack involved a single attempt to flood a target host with packets. With DDoS tools, an attacker can conduct the same attack using thousands of systems.
Figure 1-11 shows the process of a DDoS attack:
- The hacker uses a host to scan for systems to hack.
- After the hacker accesses handler systems, the hacker installs zombie software on them to scan, compromise, and infect agent systems.
- Remote control attack software is loaded on agent systems.
- The hacker issues instructions to the handlers on how to carry out the DDoS attack.
Figure 1-11. DDoS Attack
The actual breach and vulnerability exploit is often accomplished using a combination of malware that infects, propagates, and delivers its payload following different techniques associated with traditional malware. Known as blended threats, these attack mechanisms combine the characteristics of viruses, worms, Trojan horses, spyware, and other malware.
A blended threat will exploit a vulnerability such as a buffer overflow or lack of HTTP input validation. Such attacks can spread without human intervention by scanning for other hosts to infect, embedding code in HTML, or by spamming, to name a few methods.
Blended threats plant Trojans and back doors. They are often part of botnet attacks, which try to raise privilege levels, create network shares, and steal data.
Most blended attacks are considered “zero day,” meaning that they have not been previously identified. Blended attacks are ever-evolving and pretested by cybercriminals on common antivirus products before they are released. These threats easily breach firewalls and open channels, and they represent a challenge to detect and mitigate.
Principles of Secure Network Design
In planning an overall strategy for security architecture design, sound principles are needed to accomplish an effective security posture. The selective combination of these principles provides the fundamentals for threat mitigation within the context of a security policy and risk management.
- Defense in depth: This is an umbrella term that encompasses many of the other guidelines in this list. It is defined by architectures based on end-to-end security, using a layered approach. The objective is to create security domains and separate them by different types of security controls. The concept also defines redundancy of controls, where the failure of one layer is mitigated by the existence of other layers of controls.
- Compartmentalization: Creating security domains is crucial. Different assets with different values should reside in different security domains, be it physically or logically. Granular trust relationships between compartments would mitigate attacks that try to gain a foothold in lower-security domains to exploit high-value assets in higher-security domains.
- Least privilege: This principle applies a need-to-know approach to trust relationships between security domains. The idea, which originated in military and intelligence operations, is that if fewer people know about certain information, the risk of unauthorized access is diminished. In network security, this results in restrictive policies, where access to and from a security domain is allowed only for the required users, application, or network traffic. Everything else is denied by default.
- Weakest link: This is a fundamental concept—a security system is as effective as its weakest link. A layered approach to security, with weaker or less protected assets residing in separated security domains, mitigates the necessary existence of these weakest links. Humans are often considered to be the weakest link in information security architectures.
- Separation and rotation of duties: This is the concept of developing systems where more than one individual is required to complete a certain task. The principle is that this requirement can mitigate fraud and error. This applies to information security controls, and it applies to both technical controls and human procedures to manage those controls.
- Hierarchically trusted components and protection: This principle applies a hierarchical approach to the compartmentalization and least privilege ideas, aiming at providing a more structured approach to data classification and security controls. The concept assumes that the hierarchy will be easier to implement and manage, resulting in similarly manageable and compartmentalized security controls.
- Mediated access: This principle is based on centralizing security controls to protect groups of assets or security domains. In that sense, firewalls, proxies, and other security controls act on behalf of the assets they are designed to protect, and mediate the trust relationships between security domains. Special considerations should be in place to prevent the mediation component from becoming a single point of failure.
- Accountability and traceability: This concept implies the existence of risk and the ability to manage and mitigate it, and not necessarily avoid or remove it. Information security architectures should provide mechanisms to track activity of users, attackers, and even security administrators. They should include provisions for accountability and nonrepudiation. This principle translates into specific functions, such as security audits, event management and monitoring, forensics, and others.
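To make the least-privilege idea concrete, here is a minimal Python sketch of a default-deny policy check. It is illustrative only: the rule format and domain names are invented, not drawn from any particular firewall product.

    # Minimal sketch of a default-deny (least-privilege) policy check.
    # The rule format and domain names are invented for illustration.
    ALLOW_RULES = [
        {"src": "hr-domain", "dst": "payroll-db", "service": "sql"},
        {"src": "web-tier", "dst": "app-tier", "service": "https"},
    ]

    def is_allowed(src, dst, service):
        """Permit traffic only if an explicit rule matches; deny otherwise."""
        for rule in ALLOW_RULES:
            if (rule["src"], rule["dst"], rule["service"]) == (src, dst, service):
                return True
        return False  # default deny: anything not explicitly allowed is blocked

    print(is_allowed("web-tier", "app-tier", "https"))  # True
    print(is_allowed("web-tier", "payroll-db", "sql"))  # False

Real policy engines add direction, ports, and logging, but the default deny at the end of the rule walk is the essence of least privilege.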
Cisco has always been a proponent of defense in depth. This was made clear in 2000 when it released its Cisco SAFE Blueprint for enterprise (SAFE is not an acronym), where it laid out its vision for defense in depth.
Defense in Depth
Addressing the fact that a security system is only as strong as its weakest link is often difficult when designing a system’s security. The complexity of modern systems makes it hard to identify each individual weak link, let alone the weakest one. Thus, it is often most desirable to eliminate possible weaknesses by instituting several concurrent security methods.
Securing information and systems against all threats requires multiple, overlapping protection approaches that address the human, technological, and operational aspects of information technology. Using multiple, overlapping protection approaches ensures that the system is never unprotected from the failure or circumvention of any individual protection approach.
When a system is designed and implemented, its quality should always be questioned through design reviews and testing. Identification of various failure modes might help a designer evaluate the probability of element failure, and identify the links that are the most critical for the security of the whole system. Many systems have a security-based single point of failure, an element of functionality or protection that, if compromised, would cause the compromise of the whole system. It is desirable to eliminate or at least harden such single points of failure in a high-assurance system.
Defense in depth is a philosophy that provides layered security to a system by using multiple security mechanisms:
- Security mechanisms should back each other up and provide diversity and redundancy of protection.
- Security mechanisms should not depend on each other, so that their security does not depend on other factors outside their control.
- Using defense in depth, you can eliminate single points of failure and augment weak links in the system to provide stronger protection with multiple layers.
The defense-in-depth strategy recommends several principles:
- Defend in multiple places: Given that insiders or outsiders can attack a target from multiple points, an organization must deploy protection mechanisms at multiple locations to resist all classes of attacks. At a minimum, you should include three defensive focus areas:
- Defend the networks and infrastructure: Protect the local- and wide-area communications networks from attacks, such as DoS attacks. Provide confidentiality and integrity protection for data that is transmitted over the networks; for example, use encryption and traffic flow security measures to resist passive monitoring.
- Defend the enclave boundaries: Deploy firewalls and intrusion detection systems (IDS) or intrusion prevention systems (IPS) or both to resist active network attacks.
- Defend the computing environment: Provide access controls and host intrusion prevention systems (HIPS) on hosts and servers to resist insider, close-in, and distribution attacks.
- Build layered defenses: Even the best available information assurance products have inherent weaknesses. Therefore, it is only a matter of time before an adversary finds an exploitable vulnerability. An effective countermeasure is to deploy multiple defense mechanisms between the adversary and the target. Each of these mechanisms must present unique obstacles to the adversary. Further, each mechanism should include both protection and detection measures. These measures increase the risk of detection for adversaries while reducing their chances of success, or make successful penetrations unaffordable. One example of a layered defense is to have nested firewalls (each coupled with IDS or IPS) that are deployed at outer and inner network boundaries. The inner firewalls may support more granular access control and data filtering.
- Use robust components: Specify the security robustness (that is, strength and assurance) of each information assurance component as a function of the value of what it is protecting and the threat at the point of application. For example, it is often more effective and operationally suitable to deploy stronger mechanisms at the network boundaries than at the user desktop.
- Employ robust key management: Deploy robust encryption key management and public key infrastructures that support all the incorporated information assurance technologies and that are highly resistant to attack.
- Deploy an IDS or IPS: Deploy infrastructures to detect and prevent intrusions and to analyze and correlate the results and react accordingly. These infrastructures should help the operations staff answer the following questions:
- Am I under attack?
- Who is the source?
- What is the target?
- Who else is under attack?
- What are my options?
|
<urn:uuid:487cf07e-8295-4abd-a034-8dd0cc5115aa>
|
CC-MAIN-2022-40
|
https://www.ciscopress.com/articles/article.asp?p=1998559&seqNum=7
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00249.warc.gz
|
en
| 0.93291 | 16,350 | 2.875 | 3 |
I’m willing to bet that you or someone at your company has already been the victim of a phishing attack. With email surpassing in-person and telephone conversations, particularly now that so many people are working remotely, it has become the preferred attack vector for many criminal organizations. So, I feel pretty confident that the odds are in my favor for winning the bet. Let’s take a look at what phishing is, how it’s done, why it’s done, and what you can do to keep yourself, your employees, and your company safe.
In fact, email-related cyber-attacks have been on the rise year after year, with the United Nations reporting a 600% increase in malicious emails during the COVID-19 pandemic. Threat actors have taken notice that employees are more vulnerable to cyber-attacks while working from home, as some of the security controls implemented in the workplace are not available at the individual computer level.
What is phishing?
According to the US Cybersecurity & Infrastructure Security Agency (CISA), phishing is an attempt by an individual or group to solicit personal information from unsuspecting users by employing social engineering techniques. Phishing emails are crafted to appear as if they have been sent from a legitimate organization or known individual.
It’s scary how easy it is to fall victim to these attacks. There are two common phishing methods. The first entails the victim clicking on a link that takes them to a fraudulent website or landing page that looks legitimate. The attacker uses the page to steal usernames and passwords, personally identifiable information (PII), and credit card numbers, among other information. The second technique is to include a file attached to the email that installs malware on the victim’s computer when opened. Attackers then use this malware to gain remote access to the victim’s computer. They can then use such access to pivot to other systems in the network or to steal documents and other information from the compromised system.
What is the attacker’s goal?
They want control of your system. According to MITRE ATT&CK, a globally accessible knowledge base of adversary tactics and techniques, attackers use phishing to gain a foothold in an organization’s ecosystem and work their way up to administrator-level access. Once there, they can launch a ransomware attack, exfiltrate confidential data, and cause business disruption and financial loss. Security organizations such as Abacode utilize the framework to understand an attack kill chain and stop it as early as possible.
Here’s one scenario to illustrate how quickly this can happen. An attacker, impersonating a provider by using a similar email domain, sends a phishing email to the finance department requesting changes to the payment options on an existing account. If the attack is not detected and the provider’s payment options are changed, the next time the provider is paid, the money will be sent to the attacker’s bank account and not to the provider’s bank account.
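A simple technical complement to detection in this scenario is screening sender domains for near-misses of known provider domains. The following Python sketch uses a plain edit-distance comparison; the trusted-domain list is invented for illustration, and real mail gateways use far richer heuristics.

    # Sketch: flag sender domains suspiciously close to trusted ones.
    # The trusted domains are invented for illustration.
    TRUSTED = ["acme-supplies.com", "bigbank.com"]

    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def is_lookalike(domain):
        """Within one or two edits of a trusted domain, but not an exact match."""
        return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

    print(is_lookalike("acme-suppliies.com"))  # True: one inserted character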
Seven controls you can implement to mitigate the risk associated with phishing emails
There is no silver bullet to eliminate the cyber risk associated with phishing email attacks. However, with the implementation of mitigating controls as part of a comprehensive cybersecurity program, the risk of compromise due to phishing attacks can be greatly reduced.
The following seven controls are critical to protect your organization from phishing attacks:
- Email Security Gateway: The first line of defense recommended is an Email Security Gateway solution to monitor inbound and outbound emails, prevent unwanted emails from landing, scan emails for malware, and block emails from suspected sources. Top market solutions include Microsoft Advanced Threat Protection (ATP) for Office 365, Mimecast, Barracuda, and Proofpoint, among many others. These are not set-and-forget solutions; they require constant monitoring and administration to be effective.
- Cybersecurity Awareness Training: Even with the best email security gateway solution, some phishing emails are going to land, and the users are going to have to deal with them. Some experts in the industry say that humans are the weakest link when it comes to cybersecurity. Quite the contrary: I believe that humans are the biggest opportunity for organizations to improve their cybersecurity. Well-trained employees are able to properly identify and respond to phishing emails, reducing the risk of falling for the attacker’s covert demands.
- Regular Phishing Campaigns: Validating the knowledge acquired through formal cybersecurity awareness training is critical to identify team members that need additional training and attention. Additionally, regular phishing campaigns help keep employees alert and engaged in their role as security agents of the organization.
- 24/7 Cybersecurity Monitoring: As mentioned before, there’s no silver bullet to stop phishing emails. Consequently, the email platform, network and endpoints that are used to access email need to be monitored. If a phishing email lands and the user falls for the email by clicking the link, downloading the attachment and providing credentials, it is critical to identify the incident as early in the MITRE ATT&CK stages as possible. With a proper monitoring solution, phishing attacks could be detected based on the network traffic to the landing page, malware installation, communication out to a command and control site, logons from unusual locations and geographies, etc.
- Enhanced Financial Controls: Financial controls need to consider the scenario in which the email platform is compromised and email communication that looks legitimate is actually coming from threat actors. For instance, any email request to modify payment options, payroll, or any other transactional system should be validated through other means of communication, such as a telephone call to the number on record, not the one provided in the email. Similarly, ACH and wire transfer transactions should include financial controls with a properly established approval chain for transactions in excess of a set amount, for instance, $10,000. Lastly, bank accounts need to be monitored on a daily basis and all levels of security notifications should be enabled.
- Multi-Factor Authentication (MFA) Everything: Phishing emails take advantage of systems that do not have MFA implemented in most cases. Yes, there are techniques to compromise accounts that have MFA enabled, but those are very rare at the moment. Starting with the email platform, all systems accessible from the Internet should require MFA, in the form of a code sent via text message, an authentication mobile app, or a physical MFA token. (A sketch of the one-time-code math behind authenticator apps follows this list.)
- Email Policy: An email policy defines what is acceptable use for the email platform according to your business and security requirements. Without a properly established and disseminated email policy, team members will make assumptions about what acceptable use is, which in some cases could leave the organization exposed to cyber risk.
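To give a sense of the one-time codes behind authenticator apps, here is a minimal RFC 6238-style TOTP sketch using only the Python standard library. The secret below is a throwaway demo value, and production systems should rely on a vetted library rather than hand-rolled cryptographic code.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second step."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Throwaway demo secret; real secrets are provisioned per user.
    print(totp("JBSWY3DPEHPK3PXP"))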
Phishing attacks and other cybersecurity threats are only going to increase. It’s essential to stay one step ahead. That’s not easy when cybersecurity isn’t your business. It is our business, though, and we’d welcome a conversation on how we can help.
|
<urn:uuid:6df1e0ac-4be5-4998-a0f9-0c01c21cda60>
|
CC-MAIN-2022-40
|
https://abacode.com/how-to-protect-your-business-from-phishing-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00249.warc.gz
|
en
| 0.933461 | 1,433 | 2.921875 | 3 |
1 - Lesson 1: Organizing Content Using Tables and Charts
Topic A: Sort Table Data
Topic B: Control Cell Layout
Topic C: Perform Calculations in a Table
Topic D: Create a Chart
Topic E: Add an Excel Table to a Word Document
2 - Lesson 2: Customizing Formats Using Styles and Themes
Topic A: Create and Modify Text Styles
Topic B: Create Custom List or Table Styles
Topic C: Apply Document Themes
3 - Lesson 3: Inserting Content Using Quick Parts
Topic A: Insert Building Blocks
Topic B: Create and Modify Building Blocks
Topic C: Insert Fields Using Quick Parts
4 - Lesson 4: Using Templates to Automate Document Formatting
Topic A: Create a Document Using a Template
Topic B: Create and Modify a Template
Topic C: Manage Templates with the Template Organizer
5 - Lesson 5: Controlling the Flow of a Document
Topic A: Control Paragraph Flow
Topic B: Insert Section Breaks
Topic C: Insert Columns
Topic D: Link Text Boxes to Control Text Flow
6 - Lesson 6: Managing Long Documents
Topic A: Insert Blank and Cover Pages
Topic B: Insert an Index
Topic C: Insert a Table of Contents
Topic D: Insert an Ancillary Table
Topic E: Manage Outlines
Topic F: Create a Master Document
7 - Lesson 7: Using Mail Merge to Create Letters, Envelopes, and Labels
Topic A: Use Mail Merge
Topic B: Merge Data for Envelopes and Labels
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is designed for students who wish to use Microsoft Word to create and modify complex documents and use tools that allow them to customize those documents.
To ensure your success, you should have end-user skills with any current version of Windows®, including being able to start programs, switch between programs, locate saved files, close programs, and access websites using a web browser. In addition, you should be able to navigate and perform common tasks in Word, such as opening, viewing, editing, and saving documents; formatting text and paragraphs; formatting the overall appearance of a page; and creating lists and tables. To meet this prerequisite, you can take any one or more of the following Logical Operations courses:
Using Microsoft® Windows® 10 (Second Edition)
Microsoft® Word for Office 365™ (Desktop or Online): Part 1
|
<urn:uuid:22a6a7a7-f2c7-45a0-ac69-e2e9ed5c970f>
|
CC-MAIN-2022-40
|
https://charleston.newhorizons.com/training-and-certifications/course-outline/id/1035992681/c/microsoft-word-for-office-365-desktop-or-online-v1-1-part-2
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00249.warc.gz
|
en
| 0.741468 | 533 | 3.0625 | 3 |
- 1 Sales Order Definition
- 2 Difference Between Sales Order and Invoice
- 3 Difference Between Sales Order and Purchase Order
- 4 Steps in the Sales Order Entry Process
- 5 Strategies to improve the sales order process
Sales Order Definition
When a customer places an order for a specific product or a service, they generate a purchase order, which is sent to the seller. This purchase order includes the name of the product or service being purchased, its price and quantity.
The purchase order also includes details of the customer such as their address and payment mode. Once the seller receives the purchase order, they will generate a sales order.
Difference Between Sales Order and Invoice
A sales order is a confirmation that the supplier is capable of supplying the goods or services ordered in a purchase order by the buying party. It indicates that the seller has reviewed the purchase order and is confirming that the order will be fulfilled. An invoice, on the other hand, is a bill raised by the supplier in order to receive payment for the goods or services.
In other words, a sales order confirms an order, while an invoice requests payment for it.
Difference Between Sales Order and Purchase Order
The main difference between a sales order and a purchase order is that while the seller generates the sales order, the customer generates the purchase order. Both these documents usually include similar information.
Are you looking to automate your order processing?
Give iTech Data a spin to improve your customer service and speed up your order-to-cash cycle!
Steps in the Sales Order Entry Process
The entire sales order entry process involves various departments, right from sales and customer support, to warehouses and logistics partners.
The exact steps and technologies used in sales and purchase order processing can vary, but in general, the sales order entry process follows certain common steps.
Step 1: A customer places an order for a product. This order can be placed online, in person, or through the phone. The document generated is known as a purchase order.
Step 2: Details of the order and the customer are stored in the database. The order is then sent to the warehouse, where warehouse managers check whether the required products are in stock. A sales order is raised when the company confirms that it can fulfill the order from inventory.
Step 3: If inventory of that particular stock is low or unavailable, an order is placed with the supplier.
Step 4: The purchase order is passed on to the accounts team who tags it under accounts receivable or cash sale. An invoice of sale is raised based on the quantity of the order and sent to the customer.
Step 5: A logistics partner will transport the product to the delivery destination and a delivery partner will make the final delivery to the customer’s address.
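To illustrate how these hand-offs can be wired together in software, here is a deliberately simplified Python sketch of the inventory check, sales order, and invoice steps. All names, data structures, and the unit price are invented for illustration; a real order management system would persist these records and integrate with accounting and logistics systems.

    # Simplified sketch of automated order processing; all data is illustrative.
    inventory = {"SKU-100": 25}
    UNIT_PRICE = {"SKU-100": 19.99}

    def process_purchase_order(sku, qty, customer):
        """Confirm stock, raise a sales order, then raise an invoice of sale."""
        if inventory.get(sku, 0) < qty:
            return {"status": "backordered", "action": "order from supplier"}
        inventory[sku] -= qty
        sales_order = {"sku": sku, "qty": qty, "customer": customer}
        invoice = {"order": sales_order, "amount_due": qty * UNIT_PRICE[sku]}
        return {"status": "confirmed", "sales_order": sales_order, "invoice": invoice}

    print(process_purchase_order("SKU-100", 10, "ACME Corp"))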
Since there are so many parts to successful sales and purchase order processing, there is plenty of room for human error, inaccurate invoices, inefficient inventory management, and more.
The best way to make your sales order entry process more effective is to integrate it into a single dashboard and automate some of the key steps.
“The more touchpoints and stakeholders involved in the order management system the higher the probability of human error”
Strategies to improve the sales order process
There are several impactful ways in which you can make your sales order entry process more efficient. If you’re planning to outsource, you can reap the benefits of companies that are experts in sales order automation. But if you do it in-house, then you should know these strategies.
1. Use an order management system
An order management system is a single unified platform that helps manage inventory, automate the order-to-cash cycle, and facilitate better communication between teams. The platform automatically stores orders received, keeps track of inventory levels, and raises the required sales orders and corresponding invoices. Investing in an order management system can offer you greater ROI in the long run because you will need fewer internal resources to manage your orders and inventory.
Since customer data is stored on secure cloud servers, an OMS can also prevent data breaches.
2. Automate the entire process
A manual process is vulnerable to errors like inaccurate amounts being entered, incorrect inventory ordered, and delays in sales orders and invoices being raised. In addition, manual processes can affect your delivery times, leading to a negative customer experience.
An order management system can automate several key parts of this process. For example, it can automatically check the available inventory for a certain purchase order, generate a sales order and an invoice of sale and finally store customer details in a single database.
Overall, automated order processing will help businesses track the complete journey right from receiving the order to fulfilling it.
3. Invest in training your employees
An order management system is effective only if your employees are on board with using it.
Any change in internal processes will require a period of training and familiarization. If your employees aren’t comfortable with using the new software, they might revert to older processes or use the software incorrectly. This could result in gaps and delays in the sales order entry process.
To prevent this, it is essential that you provide your employees with complete training on how to best use the software and the advantages of doing so. It is equally important that you choose an order management software that caters to the unique needs of your organization. Only when employees find real benefit from using the sales and purchase order processing software will they adapt to it.
Automated digital solutions are indispensable for companies dealing with a huge volume of orders. Customers are expecting faster delivery times and in today’s highly-competitive markets, companies cannot afford to fall short. Adopting an automated order management system is vital to improve efficiency, profitability and customer satisfaction.
Reach out to our team today!
|
<urn:uuid:3260e8f7-0c68-42dc-90b0-6bccf8d051f5>
|
CC-MAIN-2022-40
|
https://itechdata.ai/sales-order-entry-process-definition-and-strategies/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00249.warc.gz
|
en
| 0.928658 | 1,267 | 2.859375 | 3 |
Personally identifiable information (PII) is any information which, either on its own or combined with other information, can be used to distinguish or trace an individual's identity.
Physical security helps prevent losses of information and technology in the physical environment. This interactive module identifies physical security vulnerabilities, like printers and trash cans, and the risks employees face when technology is left unattended in publicly accessible areas. Prevention tactics to combat each type of risk is also discussed.
Hackers use Advanced Persistent Threats (APTs) to access a network and stay undetected. Targets are typically large businesses and organizations.
|
<urn:uuid:5e43ad69-3546-4eda-b8e4-d95cd15d0654>
|
CC-MAIN-2022-40
|
https://www.infosecinstitute.com/content-library/dont-be-a-target-personal-information/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00249.warc.gz
|
en
| 0.869718 | 197 | 2.6875 | 3 |
Success doesn't lie in technical skills or education alone, although they certainly help. Programmers don't exist in a vacuum; they generally interact with teams. Successful programmers must also possess the following personal (soft) skills:
- Ability to solve problems. Let's face it: the world of software development is a magnet for Murphy's Law; if something can go wrong, rest assured that it will. A savvy programmer must be a solution-oriented, out-of-the-box thinker who frequently comes up with new and innovative ways of doing old (and new) things.
- Good communication skills. Communications are a must! That means communicating with humans; a mind-meld with the computer isn't enough. These skills should encompass all facets of communication, including both written and verbal contact.
- Self-discipline. While programmers may interact with groups, writing code is a solitary process that requires self-discipline. The world of software development is frequently fraught with deadlines and last-minute changes that translate into long days and lots of hard work.
|
<urn:uuid:bcee81a8-ad1f-4ddb-af41-247e871d2251>
|
CC-MAIN-2022-40
|
https://www.ciscopress.com/articles/article.asp?p=1655229&seqNum=8
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00449.warc.gz
|
en
| 0.946725 | 217 | 2.734375 | 3 |
Like a feudal lord, the IT admin is the ruler of their domain. Unlike medieval times, however, the IT admin’s domain is not one of fiefs and farmers, but one filled with users, systems, networks, databases, and more. These facets of the domain are all governed by “laws,” if you will, set forth by the sysadmin to dictate who is authorized to access what. The domain controller is a key tool that allows an IT admin to do so. Let’s explore the definition of domain controller.
What is a Domain Controller?
To keep with the medieval analogy, the domain controller is like the portcullis that allows access to the IT kingdom. In real terms, the domain controller is a server that manages user authentication and access to the domain’s resources to ensure network security. The term itself was especially prevalent during the early days of IT, when the workplace was Windows®-centric. Domain controllers were a key concept of Windows Server and Microsoft® Active Directory® (MAD or AD), the legacy on-prem directory service. By leveraging the domain controller, admins could create Group Policy Objects (GPOs) to dictate access to groups of users, among other functionalities. Given the dominance of AD in the modern workplace, domain controller became a commonly used term.
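At its core, the authentication service a domain controller provides can be exercised over LDAP. The sketch below uses the third-party Python ldap3 package to attempt a simple bind; the server address, user DN, and password are placeholders, and a successful bind means the directory accepted the credentials.

    # Sketch: authenticate a user against a directory service over LDAP.
    # Requires the third-party ldap3 package; host and DN are placeholders.
    from ldap3 import Server, Connection

    def authenticate(user_dn, password):
        server = Server("ldap://dc.example.com")
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()  # a successful bind means the credentials are valid
        conn.unbind()
        return ok

    print(authenticate("CN=Jane Doe,CN=Users,DC=example,DC=com", "hunter2"))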
In the cloud era, however, the IT landscape is evolving. Much like the castles and moats of yore, domain controllers are becoming obsolete. Many organizations are searching for more effective, cloud-based directory services, and AD—while still widely used—is losing its hold on the market. The domain controller as a tool is still a useful one, but given the fact that it’s a primarily Microsoft-based term, it’s fading just as AD is.
Many believe that Microsoft Azure® Active Directory will be the cloud replacement for the domain controller. According to Microsoft, however, it is not. In reality, Azure AD creates domains inside of Azure itself, and, given the prevalence of non-Windows web-based IT resources, this functionality is rather limited. So, in this now OS-agnostic and multi-location realm of IT infrastructure, what is the domain controller of the future?
JumpCloud® Directory-as-a-Service®: A Modern Definition of Domain Controller
JumpCloud® Directory-as-a-Service® is the first outright cloud directory service, and is a reimagination of Active Directory for the modern era. By leveraging Directory-as-a-Service, IT admins can once again take control over their domain, regardless of platform, protocol, or location. The JumpCloud Directory-as-a-Service (DaaS) agent-based solution ties users and their systems into one unified identity that can also authorize access to web-based applications and other cloud SaaS solutions. The cloud directory also integrates cloud RADIUS and LDAP-as-a-Service capabilities to help control access to WiFi and legacy applications. Directory-as-a-Service also features Policies, a cross platform GPO-like capability akin to the abilities of the original AD domain controller.
To learn more about JumpCloud Directory-as-a-Service and its take on the definition of domain controller, check out our knowledge base posts on Policies or our YouTube channel. You can also contact us to speak to a support expert directly. If JumpCloud seems like the definition of domain controller that your organization needs, consider signing up for Directory-as-a-Service today. It’s completely free, and includes ten users on the house.
|
<urn:uuid:537956d2-35ef-417f-9a01-ae7d5617284b>
|
CC-MAIN-2022-40
|
https://jumpcloud.com/blog/definition-of-domain-controller
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00449.warc.gz
|
en
| 0.913325 | 742 | 3.203125 | 3 |
If you’ve been following the news, trying to buy a new car, or looking to replace an old laptop, you may have run into reports of a chip shortage. These tiny pieces of technology are found in nearly every electronic device these days and are unfortunately in short supply right now. This is causing problems for the tech industry, but is also impacting the automotive industry, which is actually quite reliant on these small silicon wafers.
Prior to the 2020 pandemic, semiconductor chips were already in short supply, but the pandemic’s disruptions of the global supply chain have thrown everything into chaos. We’ll dive into why there is a chip shortage and what that means for you.
What’s causing the global chip shortage?
The global chip shortage has its origins a few years back, with the earliest signs of trouble appearing in 2018. That year, a pair of trade wars arose: one between the US and China, and one between Japan and Korea. As a result of these trade wars, the raw materials necessary to produce chips became more expensive, increasing chip manufacturers’ lead times and reducing chip supply.
The trade wars continued to escalate, eventually culminating in the US blacklisting the major Chinese chip manufacturer SMIC. The trade war between Japan and Korea led to Japan withholding shipments of vital chipmaking components, putting a squeeze on South Korean chip production capacity.
Further complicating matters is these economic squabbles rest atop geopolitical conflict – in the case of the US/China trade war the conflict revolves around Taiwan. China asserts that Taiwan is a part of China, while Taiwan is adamant about its sovereignty. While chip manufacturing isn’t at the center of this issue, it is embroiled within it as Taiwan produces about 60% of the world’s microchips. Taiwan’s largest chip foundry, TSMC, makes chips for clients including Apple, Nvidia, and Qualcomm.
On top of these troubles, demand for consumer electronics has surged – driven in large part by people buying consumer electronics to stave off boredom and to work from home during the post-pandemic lockdowns. The pandemic also pushed consumer demand for automobiles as people eschewed public transportation. Modern cars are heavily chip-reliant and this further stressed the struggling chip supply.
The Semiconductor Industry Association has said that global chip sales are expected to grow by 8.4% in 2021 compared to 2020. This represents an impressive bump in demand, particularly given that 2020 saw the global chip market grow by 5%.
Finally, the pandemic has led to massive and unprecedented shipping delays. This means both that chip makers are unable to get the raw materials they need and that companies purchasing those chips are unable to get the chips themselves. This last factor is hard to overstate – particularly given that most semiconductor companies are based overseas.
What industries are impacted by the chip shortage?
Many industries have been hit by the chip shortage, although hardest hit are the auto industry and the tech industry.
Automotive companies have been left high and dry, running out of chips that are essential for their vehicles to function. Toyota has had to cut its October production by 40%, while General Motors has seen sales fall 30% as it struggles to keep its plants open. GM has taken to nearly finishing construction of its best-selling trucks, and then installing the chips as they arrive.
Ford’s Q3 numbers slipped by 28% compared to last year, although it reported small but meaningful improvements in September. This improvement may come from Ford shifting its focus from selling dealer stock to encouraging customers to place orders for build-to-order cars.
Not all carmakers were hit so hard, with Elon Musk’s Tesla managing to increase sales despite the global shortage. While Tesla is similarly experiencing delays and shortages, they have managed to dodge the worst of the shortage’s damage by using substitute chips and rewriting some of its firmware.
The new car shortage, in combination with increased demand for private cars, has driven used car prices through the roof. Some used cars are even managing to sell for 20% more than their original MSRPs. The cumulative effect is that dealerships are sitting with limited inventory and anyone looking to buy a car (new or used) can expect to pay quite a premium.
Outside of automakers, chip-dependent tech companies that manufacture smartphones, computers, laptops, and game consoles are all feeling the pain of this bottleneck. One major indicator of how severe a shortage the world faces is that Apple has announced it is cutting back production of the new iPhone 13. As a titan of the industry that sells its products with high profit margins, Apple has the resources to pay more for chips. The fact that it is scaling back production means the chips simply aren’t available to be had.
Apple’s biggest competitor, Samsung, has seemed to weather the shortage relatively well – although they did see weaker than expected 3rd quarter sales. As a chip manufacturer themselves, they may have been buffered from the effects of the shortage by increased chip prices.
On the computer front, the chip shortage has had mixed effects. Intel reports that it is missing key components to finish many laptops, resulting in weaker sales, but desktop sales are up. Apple’s newest MacBook Pro is suffering from delays, although not as bad as some analysts had originally feared. These units are currently predicted to be available by the end of November or the first week of December, depending upon the model.
Finally, next-generation game consoles like the Sony PlayStation 5 and the Xbox Series X/S are going to prove difficult to find this holiday season. Sony at least isn’t taking the problem lying down and has entered into a multibillion-dollar partnership with TSMC to manufacture chips in Japan. Unfortunately the plant won’t be operational until 2024, so this is a long-term solution rather than one which will pay dividends this holiday season.
When will the microchip shortage go away?
Due to the multifactored nature of the microchip shortage, it’s difficult to say when the shortage will be resolved, but it certainly won’t be in the very near future.
The trade wars that precipitated the current shortage don’t show any signs of lifting, with Biden holding to Trump-era tariffs. However, the blacklist that impacted Chinese companies like SMIC and Huawei has proven to be of less significance than initially predicted. It turns out that the law was written in such a way that these companies can apply for export exemptions, and $61 billion worth of these exemptions were issued to Huawei alone between November 2020 and May 2021.
And while chip manufacturing is set to increase, most of the new capacity is still a ways off. TSMC has stated that it expects the chip shortage to persist until 2022, while Intel’s CEO, Pat Gelsinger, has voiced his opinion that the shortage will continue into 2023. AMD’s CEO, Lisa Su, is more optimistic than her competitor and has stated that she believes the shortage will begin to lift during the second half of 2022.
Further complicating any predictions about when the shortage will end is the ongoing supply chain disruptions. As long as the chip industry is based in Asia the US will remain dependent upon the fragile global shipping networks. For now, it seems that shipping issues will persist until the end of 2022. This highlights the need for the United States to boost American microchip production in order to protect its industry from future chip shortages.
One thing is clear: microchip demand is not going away, given consumer preferences for electric cars and more people working from home.
Modern combustion engine cars, like the Ford Focus, use an already impressive 300 chips – but electric cars increase this number by a factor of 10. The all-electric Ford Mach E requires 3000 computer chips, controlling everything from brake regeneration modules to window switches.
With more people than ever working from home and the specter of COVID looming, laptops and console gaming systems are going to continue enjoying a boost in sales. It is easy to see that reduced demand is not a realistic solution to the current chip shortage!
What can I do to avoid the frustration of the chip shortage?
The chip shortage will come to an end – perhaps as soon as the middle of next year. In the meantime there are a few ways you can avoid the frustration of limited supply and elevated costs of the current shortage.
If you’re in the market to buy a new car, you’re probably best off waiting. One thing to consider, however, is that if you have a car you are going to be trading in, or a car you aren’t using, the market for used cars is currently in your favor.
When it comes to laptops and personal computers, repairs are going to be more cost- and time-efficient than ever.
Finally, while it may be nice to upgrade your cellphone every year or two, this year it is probably best to hold onto your current phone a bit longer than normal.
Ultimately we’ve all got to be patient while semiconductor manufacturing comes back online and the global supply chain issues are resolved.
Whether you’re buying a car or replacing your laptop, the current chip shortage is going to make that take longer and cost more. If you are in a position to wait to replace either of these, that is going to save you some money – consider hanging onto what you’ve got until this shortage lifts. However, keep in mind that the chip shortage is not going to be solved overnight; expect it to linger well into 2022, and possibly even into 2023.
|
<urn:uuid:cec30af9-1476-4236-94cf-749a183afdce>
|
CC-MAIN-2022-40
|
https://bristeeritech.com/it-security-blog/why-is-there-a-chip-shortage/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00449.warc.gz
|
en
| 0.965225 | 1,974 | 2.796875 | 3 |
How do I plug in a phone?
Plug in your phone for the ability to make and receive calls over your network.
- Internet source is the physical connection to the internet. This can be shared with other network devices (phones, computers, etc.) and dispersed through an ethernet cable plugged into a wall jack, router, or switch.
- PoE stands for Power over Ethernet, a technology that allows ethernet cabling to carry both data and electricity to a device.
- Daisy chain refers to using the phone’s PC port (switchport) to share the same internet source with another device, such as a computer.
- Plug in the handset and headset cables.
- Plug the network cable from your internet source into the LAN port on the phone (possibly labeled as SW, NET, or Internet).
- If daisy-chaining a computer to the phone, plug a network cable into the PC port on the phone and then plug the other end into the computer. The phone should supply the connection to the computer (internet source > phone > computer).
- If the phone is not using PoE, plug in the power adapter.
|
<urn:uuid:56af76e5-2ef6-4c74-a03c-6d6ad9fed0e4>
|
CC-MAIN-2022-40
|
https://support.goto.com/connect/help/how-do-i-plug-in-a-phone-gotoconnect-plug-in-phone
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00449.warc.gz
|
en
| 0.85953 | 243 | 3.375 | 3 |
DevOps is the now-prevalent software development approach that unifies software development and software operations activities. This brings Continuous Integration (CI) into the foreground, as CI is an integral element of any DevOps practice. CI is based on merging code from all the members of the development team into a shared cloud or machine, where the entire software product is kept integrated at all times. One of the main benefits of CI is that it automates the integration process, which eliminates the disadvantages of manual integration. In particular, CI infrastructures and tools are not confined to gluing code from different developers; rather, they automate the build of an integrated product based on the code contributions of the various developers. In this way, a CI process provides up-to-date versions of the integrated product for testing, demonstration, and release purposes.
CI has proven benefits in terms of reducing debugging effort, accelerating integration and releases, as well as in terms of increasing developers’ productivity and helping them to focus on innovation rather than on bug fixing. These benefits of CI have given rise to several CI tools, which are usually integrated within a DevOps tooling infrastructure. The following paragraphs present five (5) of the most popular CI tools, while also discussing the factors that can drive the selection of the most appropriate one for a given software project.
Jenkins is one of the most popular CI tools, and has been widely used since the very early days of DevOps. It is cross-platform, open source, and Java-based. It comes with a GUI in addition to console commands and offers a very flexible plug-in mechanism, which boosts its extensibility. Jenkins is not confined to CI but supports continuous delivery (CD) as well. Continuous delivery is based on the modeling, orchestration, and visualization of delivery workflows, which are conveniently called pipelines. The pipelines are defined in a domain-specific language (DSL) and contained in special files ("Jenkinsfiles"). The notion of pipelines matches the DevOps concept, as pipelines combine both development and operations related actions. Moreover, through Jenkins, teams can distribute and store pipelines in various (shared) repositories. The tool offers a visual interface, which facilitates management of pipelines and tracking of the progress of their execution.
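Jenkins pipelines themselves are written in a Groovy-based DSL, but the underlying idea (named stages executed in order, halting on the first failure) can be sketched in a few lines of Python. The stage commands below are placeholders, not a real project's build steps.

    # Conceptual sketch of a CI pipeline: named stages run in order,
    # and the pipeline halts on the first failing stage.
    import subprocess

    PIPELINE = [
        ("checkout", ["git", "pull"]),
        ("build",    ["make", "all"]),
        ("test",     ["make", "test"]),
    ]

    def run_pipeline():
        for name, cmd in PIPELINE:
            print(f"== stage: {name} ==")
            if subprocess.run(cmd).returncode != 0:
                print(f"stage '{name}' failed; aborting pipeline")
                return False
        return True

    if __name__ == "__main__":
        run_pipeline()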
TeamCity is a tool by JetBrains, which is focused on distributed management of builds, execution, and monitoring of build processes, as well as the flexible integration of changes. It is open source and designed to serve the needs of multiple stakeholders, including developers, engineers, and software project managers. In addition to the (free) open-source version, the tool is available at a scalable licensing fee towards supporting large-scale enterprise projects.
Similar to Jenkins, TeamCity is also Java-based. Its main functionalities include building, checking, and running tests on a server whenever new changes are committed to the codebase, as well as during regular time intervals. This means that TeamCity ensures a clean codebase at all times. It also offers a dashboard that enables stakeholders to monitor progress and changes across many different projects.
TeamCity supports very complex, grid-like build pipelines, which are distributed among various build agents. It provides remote execution features, which allow developers to test their changes prior to committing them, i.e., committing only successful builds. Similar to Jenkins, it can be extended with various plugins, including pluggable build programs (e.g., Ant, Maven), pluggable build monitoring and notification services (e.g., Jabber), as well as integration with version control systems.
CircleCI is a CI tool that works closely with software repositories in GitHub or GitHub Enterprise. In particular, the tool provides notification services and alerts for every commit performed in a GitHub project that is associated with it. Notifications contain success or failure information and are provided through webhooks. Moreover, the notifications can be integrated with messaging and collaboration frameworks such as Slack, HipChat, and IRC. CircleCI can be configured to deploy code to various cloud environments, notably Amazon’s environments such as AWS CodeDeploy, AWS EC2 Container Service (ECS) and AWS S3. It can be also integrated with the Google Container Engine (GKE), Heroku and other cloud services.
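To give a flavor of how such webhook notifications are consumed, here is a minimal standard-library HTTP handler that accepts a JSON build-status payload. The 'status' and 'project' fields are assumptions made for illustration, since each CI service defines its own payload schema.

    # Minimal webhook receiver for CI build notifications (standard library only).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            print(f"build {payload.get('status')} for {payload.get('project')}")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()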
Apache Gump is the Python-based continuous integration tool of the Apache Software Foundation. It provides support for Apache Ant, Apache Maven, and other build tools. Being tied to the Apache ecosystem, Gump provides some unique features, such as its ability to build and compile software using the latest development versions of the Apache build projects and tools. In this way, Gump is able to automatically identify library inconsistencies, while remaining immune to changes made to the respective Apache projects. Gump can run on a developer's machine or in a DevOps data center. However, the project maintains its own dedicated server, where several Apache projects and their dependencies are built. Gump maintains project definitions and dependencies in XML, which is mapped into in-memory objects for processing.
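The XML-to-object mapping Gump performs can be pictured with a short standard-library sketch. The element and attribute names below are invented for illustration and are not Gump's actual schema.

    # Sketch of mapping XML project definitions into in-memory objects.
    import xml.etree.ElementTree as ET

    XML = """
    <workspace>
      <project name="demo-lib">
        <depend project="commons-io"/>
        <depend project="junit"/>
      </project>
    </workspace>
    """

    projects = {}
    for proj in ET.fromstring(XML).iter("project"):
        projects[proj.get("name")] = [d.get("project") for d in proj.iter("depend")]

    print(projects)  # {'demo-lib': ['commons-io', 'junit']}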
Bamboo is one more CI tool that supports integrated pipelines comprising automated builds, automated tests, and releases. Specifically, Bamboo supports building, testing and deployment functionalities. In terms of builds, it enables the setup of multi-stage build plans, which can include triggers associated with commits. As far as testing is concerned, Bamboo supports the execution of parallel automated tests that are run whenever there is a code change. It also supports automated deployment workflows, while dealing with aspects such as authorizations and authentication during deployment. Similar to other CI tools, Bamboo can be integrated with collaboration and bug tracking suites like Jira and Bitbucket. It is also extensible based on a rich collection of plugins, which are available in Bamboo’s CI marketplace.
The above list of CI tools, though non-exhaustive, outlines the main CI functionalities currently available to DevOps teams. While the presented tools share common functionalities, each one of them provides a set of unique selling propositions. Therefore, the task of selecting a CI tool can be particularly challenging. Relevant selections need to take into account both the technical characteristics and the cost of each solution, as well as how it matches the development and deployment requirements at hand. Among the main considerations are the nature of the CI/CD hosting environment (e.g., on-premise, cloud-based, hybrid) that should be supported, the need for integrating with other platforms and tools (e.g., GitHub, GitLab, Docker, Kubernetes), the target functional requirements (e.g., in terms of pipelines and workflows), as well as the community support that each tool provides. These considerations have to be weighed against the capabilities of the various tools, while taking into account budgetary considerations. No matter how the selection process works, there is no DevOps without an effective CI tool that meets the development and deployment needs.
|
<urn:uuid:a547ec96-7a51-402b-813b-9350a75e5612>
|
CC-MAIN-2022-40
|
https://www.itexchangeweb.com/blog/a-closer-look-at-five-tools-for-devops-and-continuous-integration/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00449.warc.gz
|
en
| 0.942001 | 1,722 | 2.734375 | 3 |
OpenAI is an artificial intelligence research organization on a crusade to build general self-learning AI. The group’s robotics division says Dactyl, the robotic hand it built last year, has learned to solve a Rubik’s cube.
Automated Rubik’s Cube solving robots are far from new. Guinness World Records has a dedicated category for them. But the OpenAI feat stands out because this AI-powered robotic hand figured out how to unscramble the cube without human-programmed instructions.
The achievement used a training technique developed by OpenAI that could lend itself to building more dexterous and more autonomous machines. This may change human-robot relations in a real way. Most robots in existence are limited to extremely specific tasks.
What OpenAI has done is a step towards training an AI to move and operate in the real world. OpenAI has already shown success training AIs in cyberspace using machine learning models.
The basic concept is to create a virtual environment for the system to operate in. The AI learns over time through trial and error. After a few million attempts, the neural networks can learn to play hide and seek or walk on two legs.
The real problems start when the AI is installed onto a robot. Simulations don’t currently account for all real-world variables. This means machine learning models often end up facing challenges with no training to resolve them.
OpenAI’s new training method helps train the AI, and therefore the robot hand, to deal with the unexpected. This is done by injecting uncertainty into the training simulations, for example by varying the strength of gravity between simulation learning sessions.
“One of the parameters we randomize is the size of the Rubik’s Cube,” OpenAI’s researchers elaborated in a blog post. “ADR begins with a fixed size of the Rubik’s Cube and gradually increases the randomization range as training progresses. We apply the same technique to all other parameters, such as the mass of the cube, the friction of the robot fingers, and the visual surface materials of the hand.”
A second machine learning model is used to control the process. This model makes the simulation progressively more difficult as the hand’s AI improves. “As the neural network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically,” explained the researchers.
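The automatic domain randomization (ADR) idea quoted above can be sketched in a few lines: sample each simulation parameter from a range, and widen the ranges once the learner clears a performance bar. All parameters, values, and thresholds below are invented for illustration.

    # Toy sketch of automatic domain randomization (ADR).
    import random

    ranges = {"cube_size_cm": [5.7, 5.7], "friction": [1.0, 1.0]}

    def sample_environment():
        """Draw one simulated environment from the current ranges."""
        return {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

    def widen_ranges(step=0.05):
        """Called when the policy passes a success threshold."""
        for lo_hi in ranges.values():
            lo_hi[0] -= step
            lo_hi[1] += step

    for epoch in range(3):
        env = sample_environment()
        success_rate = 0.9              # stand-in for a real evaluation
        if success_rate > 0.8:          # performance bar reached: harden training
            widen_ranges()
        print(epoch, env, ranges)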
The project has shown promising initial results. When the model was brought out into our corporeal world, the robotic hand managed to solve a Rubik’s cube while wearing a rubber glove, with a few fingers attached to each other, and even when the researchers tried to hit the cube with various objects.
OpenAI believes that the training technique may have applications in far more serious projects. They say the method could enable industrial robots, drones, and other autonomous machines to adapt their behavior when encountering an unforeseen hindrance. Versatility is an important step towards self-learning artificial general intelligence.
This Rubik’s Cube solver is one of the first major projects OpenAI has detailed since starting a $1 billion partnership with Microsoft a few months back. Under that July deal, Microsoft will fund the lab and provide cloud infrastructure in exchange for IP access.
This is a big achievement for AI, but it’s also worrisome. An AI is now being used to train an AI, and that AI is now out of cyberspace and on the hand of a robot. I’m left wondering whether OpenAI is creating the next stage of human evolution, and what will happen to us flesh-and-bone humans.
|
<urn:uuid:026befd4-2599-4f3c-8fbb-8683dd71f4f1>
|
CC-MAIN-2022-40
|
https://www.internetnewsflash.com/openais-ai-powered-robot-hand-learned-to-solve-a-rubiks-cube/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00449.warc.gz
|
en
| 0.921483 | 842 | 3.609375 | 4 |
In the not-so-distant future, smart cities will weave the Internet of Things (IoT) and interconnected devices into existing technology infrastructure to bring entire communities online. Singapore, for example, recently launched its Smart Nation program, deploying citywide sensors and monitors to collect data on everyday living. Using an online platform dubbed Virtual Singapore, the city-state plans to use the information to improve livability and enhance government services.
But like all things digital, smart city networks have the potential to be breached by malevolent intruders. In Ukraine, hackers targeted a power grid and took an entire city's substations offline, leaving thousands of residents without power. Cybercriminals can also disrupt emergency response systems. In Texas, hackers triggered all of Dallas' emergency sirens, eventually prompting government officials to shut down the city's security system.
Bringing cities online invites a new type of threat that most government agencies are unprepared for. From traffic lights to power grids, smart cities are full of entry points that could fall victim to hackers' exploits. As cities design their digital future, government agencies need to prioritize cybersecurity protocols to mitigate attacks that have the potential to cripple entire communities.
Smart Cities Must Strengthen Security Protocols
As the number of IoT devices grows, security sits atop the list of government policy concerns. But today, only one in three governing bodies says they are prepared to manage IoT security. While 12% of government respondents believe they have the resources to respond to cybercrimes, 47% say they are only well-equipped in some areas and ill-equipped in others. Some cities, such as Dallas, discover the hard way that they lack the skills needed to protect their residents in the wake of crises.
Part of the security challenge stems from a shortage of dedicated professionals. Although the majority of states have developed some type of cybersecurity response plan, 83% of government agencies say only 1% to 2% of their IT departments are security experts. The absence of dedicated security professionals is a problem for state CIOs, who struggle to keep up with evolving cybersecurity best practices. Government agencies should strongly consider offering certification or supplemental training courses to prime their in-house security teams for the challenge of protecting smart cities.
In addition to the knowledge gap, some cities are hamstrung by reliance on outdated technology used to manage industrial systems. Several years ago, Iranian hackers were able to infiltrate a water dam near New York City when city officials connected its control system to poorly protected office computer networks. Public sector groups attempting to digitize their city operations with aging infrastructure create security risks that can easily be exploited online. Governing bodies need to modernize each IT component within a smart city's ecosystem to keep hackers from accessing essential services.
Security Starts with a Resilient Infrastructure
As the number of IoT-connected devices swells and cyberattacks become more complex, municipalities, counties, and states considering smart solutions need to rethink their security approach.
Protection begins with building a resilient infrastructure. Before moving essential services online, a robust response protocol and disaster recovery plan for worst-case scenarios must be established. Including additional layers of security can help mitigate the fallout from a cyber attack on one system and ensure associated services continue to function. Steps like incorporating end-to-end encryption, using blockchain technology, or deploying decentralized applications are also strategies to consider using when securing essential municipal services.
Government leaders and urban planners should also consider utilizing artificial intelligence tools to help detect anomalies in city networks and accelerate government response times during a hack. Unlike human security workers, machines can process huge volumes of data quickly, reducing the time it takes to identify and negate threats. Once security measures are in place, public sector teams should routinely test for loopholes and broken patches that can be fixed immediately before intruders discover them first.
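As a concrete illustration of that idea, here is a minimal sketch, assuming the scikit-learn library; the two telemetry features (grid load percentage and packet error rate) are invented stand-ins rather than fields from any real deployment:

# Minimal anomaly-detection sketch for city sensor telemetry.
# Assumes scikit-learn; feature names are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Baseline telemetry under normal operation: [grid load %, packet error rate].
baseline = rng.normal(loc=[50.0, 0.02], scale=[5.0, 0.005], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

reading = np.array([[95.0, 0.4]])  # a sudden spike a human operator might miss
print(detector.predict(reading))   # -1 means anomaly, 1 means normal

Because the model is trained only on normal telemetry, it can surface readings that deviate from learned behavior without anyone having to enumerate attack signatures in advance.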
The complexity of the security challenge associated with smart cities may challenge even the most tech-savvy government staff, which may quickly find themselves in unfamiliar territory when it comes to systems integration. Moreover, pilot projects that were manageable with existing internal staff quickly become unmanageable when it's time to expand to full deployment. That's where the technology industry comes in.
Tech firms with expertise in integration, APIs, cloud computing, data, and security are essential to facilitating smart cities' growth. Expect concepts such as "smart cities-as-a-service" to gain traction as a means of providing efficient and effective end-to-end solutions. With economies of scale, standardization, and commoditization, smart city technologies will become more accessible and affordable over time. The smart cities-as-a-service approach brings it all together into what many city planners will view as an appealing option.
Smart cities are quickly becoming a reality that many governments are looking to embrace. But as cities become smarter, they also become more vulnerable. As communities continue to push for digital transformations, governing bodies will need to double down on cybersecurity to keep smart cities, and their residents, safe.
During the 1990s, investment in medical device research and development more than doubled. Fast-forward to today, and the United States reportedly boasts the largest medical device market in the world (valued at around $148 billion). This charge into the future is showing no signs of slowing down, as its value is expected to rise to $155 billion by 2017, fueled by the industry’s desire to find better solutions for diagnosing, treating, or managing medical issues.
As the healthcare industry moves forward, its professionals must remember that medical device security is critical - not only affecting an organization's safety, but also potentially a patient’s health. Let’s take a look at the continued security concerns that exist around medical devices and what manufacturers and healthcare institutions can do to prevent harmful attacks.
Balancing risk against reward is a dilemma the healthcare industry faces continuously. As new medical device ideas and innovations come to fruition, manufacturers and IT professionals are tasked with weighing the risks against the rewards.
For example, wirelessly connecting a pacemaker to a hospital’s network might provide a clear view into the performance of the device, but it could also open the door for potentially life-threatening hacks. The FDA is tasking manufacturers and hospitals alike to address this balance and ensure the rewards outweigh the risks prior to bringing IP-enabled devices to the market.
After medical devices are released onto the market, there’s a chance they could be hacked if proper security measures haven’t been put in place by the manufacturers or the hospitals that are using them. The wide range of endpoints, whether they be heart monitors or insulin pumps, transfer data directly to the networks and systems that cybercriminals frequently target. Cybercriminals can access these entries to steal sensitive patient data (like social security numbers) and even take control of the devices themselves, which poses an extreme hazard to patient safety.
For example, the University of Southern Alabama put medical device security vulnerabilities on full display a year ago, when researchers successfully hacked a wireless patient simulator's pacemaker and then killed the simulator. The research project was an eye-opening revelation of just how dangerous it can be to have medical devices connected to unsecured networks.
According to the FDA, medical device manufacturers are accountable for understanding the risks and hazards that could potentially be exposed in worst-case scenarios. These risks are now, more than ever, being tied to cybersecurity. Manufacturers need to have mitigations in place to protect against patient security breaches and ensure the devices will work as intended, even under extreme circumstances.
The FDA also suggests that manufacturers look into utilizing cybersecurity tools to rate their devices' vulnerabilities based on severity. Some things manufacturers should keep in mind when running tests include the complexity of the attack, the scope of the vulnerability, and its impact on integrity. The main point for manufacturers to understand, and put to work, is that simply creating devices around convenience and moving the healthcare industry further into the technological future is not enough. Medical devices also need to be designed with security as a top-of-mind concern.
Healthcare institutions that have older medical devices, or implemented new connected medical devices, need to be just as vigilant as the manufacturers themselves when it comes to security - and HIPAA will hold them accountable. Despite the growth in the number of connected medical devices, research shows that the vast majority of healthcare security professionals have not properly prepared their networked environments for the risks these devices pose. The reality is that an abysmal 9.6% of respondents from a recent IDC Health Insights survey said they have integrated medical devices into their security strategy.
At the most basic level, healthcare providers need to educate themselves and others within the organization about the evolution of threats, and acknowledge that traditional security measures are no longer enough. New techniques and advanced security solutions need to be considered in the effort to slow down cyber-attacks.
"We try to preach that a lot. Be aware of your surroundings. Understand what you are doing
and who you are seeing in front of you, in your email, and when you're online. Security is everybody's responsibility," emphasizes a security engineering manager interviewed for the IDC Health Insights cyberthreats report.
At a more advanced level, healthcare institutions should consider investing in both outside-in and inside-out protection technologies to establish an integrated approach.
Conventional firewalls alone simply don’t get today’s job done when it comes to protecting from the outside in. To fight today’s sophisticated threats, healthcare organizations must adopt an integrated security strategy that uses multiple technologies, and threat intelligence applied across the attack cycle and throughout the healthcare system. An advanced threat protection (ATP) framework can help prevent threats based on known threats, detect unknown threats, and put a halt to damage by responding in a timely matter to potentially harmful incidents. This approach combats threats from the network’s core to the endpoint user device, and even into the cloud, protecting valuable health data and intellectual property.
When looking for inside-out protection, especially for medical devices, healthcare institutions should consider investing in internal segmentation firewalls, or ISFWs. This new class of firewalls intelligently segments internal networks, making it more difficult for cybercriminals to access valuable health IT assets. While edge security might keep the outer layer secure, ISFWs keep the individual assets inside the network more secure. In addition, ISFWs ensure continuous operations, limit the risk of publicly accessible networks (very important in today’s healthcare industry), provide an additional layer of security, restrict the east-to-west movement of an attack should it get through the perimeter defenses, and can quickly isolate infected devices or network segments.
Read this white paper for more information about outside-in and inside-out protection strategies for healthcare.
Let’s get a conversation going on Twitter! Do you think today’s hospitals and healthcare institutions are successfully defending against attacks on medical devices?
In this blog, we will discuss how the IoT works.
IoT ARCHITECTURE LAYERS
There are four major layers.
Figure 1: IoT architecture layers
Sensor, Connectivity and Network Layer
Fig 2: Sensor, Connectivity and Network Layer
- This layer consists of RFID tags and sensors, which are an essential part of an IoT system and are responsible for collecting raw data. These form the essential "things" of an IoT system.
- Sensors and RFID tags are wireless devices that form Wireless Sensor Networks (WSNs).
- Sensors are active in nature, which means real-time information is collected and processed.
- This layer also provides the network connectivity (such as WAN, PAN, etc.) responsible for communicating the raw data to the next layer, the Gateway and Network Layer.
- The devices that make up a WSN have finite storage capacity, restricted communication bandwidth, and low processing speed.
- We have different sensors for different applications: a temperature sensor for collecting temperature data, a water quality sensor for examining water quality, a moisture sensor for measuring the moisture content of the atmosphere or soil, etc.
As per the figure below, at the bottom of this layer we have the tags (RFID tags or barcode readers); above them we have the sensors/actuators, and then the communication networks.
Gateway and Network Layer
From the figure below, at the bottom we have the gateway, which is composed of the embedded OS, signal processors and modulators, microcontrollers, etc. Above the gateway we have the gateway networks, such as LAN and WAN.
Fig 3: Gateway and Network Layer
- Gateways are responsible for routing the data coming from the Sensor, Connectivity and Network layer and pass it to the next layer which is the Management Service Layer.
- This layer requires a large storage capacity for storing the enormous amount of data collected by the sensors, RFID tags, etc. It also needs to deliver consistently reliable performance across public, private, and hybrid networks.
- Different IoT devices work on different kinds of network protocols, and all of these protocols need to be assimilated in a single layer. This layer is responsible for integrating the various network protocols (a minimal gateway sketch follows this list).
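To make that integration role concrete, here is a minimal gateway sketch in Python. It assumes the third-party paho-mqtt package (1.x client API) and a broker at localhost; the topic names and the payload shape are invented for this example:

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Raw payload assumed to look like: {"sensor": "temp-01", "value": 21.4}
    reading = json.loads(msg.payload)
    normalized = {
        "device": reading["sensor"],
        "metric": "temperature_c",
        "value": float(reading["value"]),
    }
    # Forward a protocol-neutral record up to the management service layer.
    client.publish("gateway/normalized", json.dumps(normalized))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)   # assumed broker address
client.subscribe("sensors/+/raw")
client.loop_forever()

In a real gateway the same normalization step would sit in front of whatever mix of ZigBee, 6LoWPAN, or proprietary protocols the sensor layer speaks.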
Management Service Layer
- This layer is used for managing IoT services. The Management Service Layer is responsible for secure analysis of IoT devices, analysis of information (stream analytics, data analytics), and device management.
- Data management is required to extract the necessary information from the enormous amount of raw data collected by the sensor devices to yield a valuable result of all the data collected. This action is performed in this layer.
- Also, certain situations require an immediate response. This layer helps provide one by abstracting data, extracting information, and managing the data flow.
- This layer is also responsible for data mining, text mining, service analytics etc.
From the figure below, we can see that the Management Service Layer has Operational Support Services (OSS), which include device modeling, device configuration and management, and many more. We also have the Billing Support System (BSS), which supports billing and reporting.
Also, from the figure, we can see that there are IoT/M2M Application Services, which include an Analytics Platform in which statistical, in-motion, predictive, and in-memory analysis, together with data and text mining and segmentation, are used for overall intelligent analysis.
Data, which is the most important part, includes governance, encryption, repositories, and quality management in order to maintain the quality of the data.
Security includes access controls, encryption, identity access management, etc. Then we have Business Rule Management (BRM), which includes rule definitions, modeling, simulation, and execution, and Business Process Management (BPM), which performs similar functions for processes.
Fig 4: Management Service Layer
Application Layer
The application layer forms the topmost layer of the IoT architecture and is responsible for effective utilization of the data collected. Various IoT applications include home automation, e-health, e-government, etc.
From the figure below, we can see that there are two types of applications: horizontal-market applications, which include fleet management, supply chain, etc., and sector-wise applications of IoT, such as energy, healthcare, and transportation.
Fig 5: Application Layer
Smart Environment Application Domains
Figure 6: Smart Environment Application Domains
WLAN stands for Wireless Local Area Network, which includes Wi-Fi, WAVE, IEEE 802.11 a/b/g/p/n/ac/ad, and so on. WPAN stands for Wireless Personal Area Network, which includes Bluetooth, ZigBee, 6LoWPAN, IEEE 802.15.4, UWB, and so on.
Figure 7: Smart Environment Application Domains: Service Domains and their Services classified

That’s all for this blog!!
Many websites such as forums, dating sites, job application portals, newsletters or social networks require a user registration. This registration generally requires an email address and a freely choosable pseudonym as username. Most Internet users assume that only the chosen pseudonym is publicly visible while the email address is treated confidential by the site operator. Depending on the type of website it is important that the existence of an account is kept confidential since knowing that an account exists may lead to certain conclusions about the account owner. As an example, if a person has registered in a forum about a specific disease, it is likely that this person is affected by this disease. The problem presented here allows unauthorized third parties to find out whether there is an account for a specific email address. In case of online job application portals, the existence of an account typically means that the account owner has applied for a job. This may allow an employer to find out that one of his employees has confidentially applied for a job at another company.
2. Description of the problem
The main problem is that if a user tries to register with an email address or a username, which is already registered at the site, the user gets a corresponding error message in the browser. An unauthorized third party may try to register with the email address or username of the potential account owner. If this results in an error message, the attacker may conclude that the email address or username is already registered.
The email address is linked to a specific person and most employers know at least one private email addresses of their employees. The problem can obviously only reveal the existence of an account on a site such as a job application portal and no more detailed information about the account (such as the corresponding pseudonym to a given email address or the application submitted). However, the bare fact that an employee has applied for another job may already have negative consequences for the existing employment.
For some sites such as job application portals the username can also be linked to a specific person (especially for rare names and/or small industries), since many applicants choose an easily predictable username such as "First Name.Last Name" or a pseudonym which is also known to the current employer and expect the data to be treated confidential by the company they apply to. If this predictable username is registered in the job application form of another company, an employer may conclude that his employee applies for a job there.
Some websites don't require a username and use the email address and password for logging in. Some other sites assign a randomly chosen username for every registered user. Most of these sites reveal the existence of an account for an email address as well when trying to register again with the email address.
3. Distribution of problem for job application portals
I have checked the job application portals of some big companies by trying to register with the same username or email address twice. 27 of the 30 companies in the DAX index (which contains the biggest stock companies in Germany) are affected by the problem. The remaining 3 companies either don't provide an online job application portal or only allow direct applications without an account registration. This leads to the conclusion that the vast majority of companies running an online job application portal are affected by the problem. Some international companies such as IBM or Intel are affected as well.
4. Other problematic online accounts
The problem not only affects job application portals but also many other websites such as online shops, forums, newsletters, social networks or dating sites. The mere existence of an account in a forum may lead to problematic conclusions about the owner of the account. An employer may for instance check whether a female applicant is registered in a forum about pregnancy with the email address used for the application instead of asking whether she is pregnant (which is illegal to ask in some countries) and not employ her if there is an account. The registration in a forum or a newsletter about a sensitive topic such as employment rights, homosexuality, certain political opinions/activities, diseases (e.g. HIV) or psychical problems should also not be revealed to everyone who knows the user's email address. Most users expect that a forum only reveals the chosen pseudonym to the public and that the email address is treated confidential. So it may be problematic if everyone, who knows the email address, can figure out that someone has an account in a forum.
5. Possible use of vulnerability by cyber criminals
The existence of an account in a forum or vendor support site about specific hardware or software components can reveal some information about the hardware/software used by the account owner. This may allow an attacker to specifically exploit vulnerabilities in those components in a targeted attack.
Cyber criminals could also exploit this problem to increase the effectivity of their attacks. For instance, a phisher may choose to only send his phishing mails to email addresses which are actually registered at a site. An attacker may also verify that an account exists before trying to break into the account by brute-forcing the password (or the security question for resetting the password).
6. Confirmation emails
Some sites send a confirmation email when someone tries to register with a given email address. These confirmation mails allow users to find out that someone has tried to register with their email address on a site.
Some sites reveal the existence of an email address/username before the registration form is actually submitted, e.g. using Ajax requests to the server. In this case, no confirmation email is sent to the owner of the email address. Some other sites reveal the existence of an email address when submitting the registration form even if there is another error such as an empty or weak password, a duplicate username or required form fields left blank. In these cases the sites don't send any confirmation email but still reveal the existence of an account to a given email address.
When a confirmation mail is sent, most users will just ignore it since they haven't registered on the site and not take into account that this email may be the result of someone trying to reveal the user's accounts. Even if a user knows about the problem, it may still be impossible to find out who is responsible for the attack.
7. Mitigation for website operators
Website operators can take some technical measures to mitigate the risk for their users. Depending on the nature of the site it may be necessary to abandon the possibility for users to choose a username, because a given username may have already been taken, which will make the registration fail and thus reveal the existence of a given username. For sites which already publicly reveal the chosen names as part of the site functionality (such as forums or dating sites) and most users choose a pseudonym for the registration, a freely choosable username is obviously unproblematic. For other sites such as job application portals where a confidential treatment of all user data is commonly expected and many users choose their real name as username, it is probably necessary to abandon the possibility to register with a freely choosable username. The site may either create a randomly generated username or just use the user's email address instead of a username for logging in.
The same problem also applies for email addresses. Most sites show an error message (or a hint to use the existing account) when trying to register with an already registered email address. This problem can be solved by requiring the user to verify the email address by clicking on a link sent to the user via email. If the email address is already registered, the site doesn't need to tell the client browser about the existing account. The site may then send a reminder about the existing account instead of a verification link via email. This makes sure that only the owner of the email address can find out whether there is an account for his email address.
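A minimal sketch of this flow might look like the following; the helper functions (is_registered, create_pending_account, send_email) are hypothetical placeholders rather than any real framework's API:

def register(email):
    if is_registered(email):                    # hypothetical lookup helper
        # Remind the owner by email instead of telling the browser anything.
        send_email(email, "You already have an account. Use password reset if needed.")
    else:
        token = create_pending_account(email)   # hypothetical persistence helper
        send_email(email, "Confirm your registration: https://example.com/verify?t=" + token)
    # Identical response either way: the browser gets no enumeration signal.
    return "If the address is valid, a confirmation email has been sent."

The key design choice is that the HTTP response is byte-for-byte identical in both branches, so only the mailbox owner learns which case actually occurred.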
The website operator should also make sure that the password reset functionality and changing the email address of an existing account (which an attacker can easily register for this purpose) doesn't reveal to the client browser whether a given email address is already registered at the site.
The measures proposed here may lead to some extra effort and losses of comfort (no freely choosable username, requirement to verify email address) and increased support expenditures. So there is an obvious trade-off between usability and privacy. For some sites such as job application portals, dating sites or forums about sensitive topics it is obvious that privacy should have priority over usability and the existence of an account shouldn't be revealed to unauthorized third parties.
8. Mitigation options for users
Users may also protect their privacy by using secret email addresses/aliases for registering sensitive accounts. You can easily register a new account at a freemail provider of your choice for this. However, registering too many email accounts may be problematic since it requires users to remember all email addresses/passwords and regularly check all the accounts for incoming mails. As an alternative, some email providers such as Hotmail allow setting up a limited number of alias addresses for one account, so that a user can check incoming emails to multiple addresses with one single email account. Many providers also allow appending a plus sign and a random string to an email address. You can for instance use john.doe+x7k2q@example.com instead of john.doe@example.com when doing a confidential application to a company. Since an attacker only knows the base address (john.doe@example.com) and can't guess the random string you appended, he can't check whether you have already registered an account. However, you should keep in mind that you will need the full email address used for the registration for doing a password reset. So it may be a good idea to write down the email alias you used for the registration.
If you have already registered at a sensitive site with a non-secret email address, you can still change your email address to an alias. Most sites allow changing the email address in the profile settings. However, some sites still block the registration of a new account with the same email address thus revealing the fact that there had been an account. You may also inform the website operator if a site you know/are using is affected by the problem and the existence of an account should be kept strictly confidential based on the nature of the website.
If you want to hide the existence of an account, you should also choose a username which is non-guessable even for someone who knows you. This is obvious for community sites such as forums or dating sites with publicly visible nicknames. However, even for sites such as job application portals where you expect your data to be treated confidential, you should still choose a non-guessable username.
“The groundwater in Barangay San Antonio, Basey, Samar will remain non-potable for the next 5 to 10 years.” This was Dr. Bayani Cardenas’ initial conclusion on a study he conducted in the area, as mentioned in his lecture at the National Institute for Geological Sciences last June 18.
Communities in the Philippines like Barangay San Antonio have the least access to piped clean drinking water. According to local officials, around 20 million Filipinos still rely on their own means for their water supply, including pumps for extraction of groundwater, a method that poses several problems to sanitation and structural safety. The World Health Organization states that unclean drinking water, combined with poor sanitation, is the second biggest killer of children, and that one in nine people around the world still lack access to safe water. This problem was aggravated by the onslaught of Typhoon Yolanda (international name: Haiyan), which brought storm surges that contaminated the groundwater in several communities, particularly in Eastern Visayas. These huge waves caused saltwater to seep into aquifers, some of which were the water supply of certain rural communities in Samar and adjacent provinces.
“A drop of salt water in a glass of drinking water can make it non-potable,” said Dr. Cardenas. He added that according to studies, humans can only drink water with at most 1.5-2% salt concentration in water. More than 2% salt concentration will make the water undrinkable.
During their research conducted in January 2014, merely two months after Typhoon Yolanda struck, Dr. Cardenas and his team of geologists and other scientists established a Flood Height Mapping Team, sampled groundwater from about 300 wells in the vicinity, and constructed their own wells. Their findings showed that after the flood subsided, groundwater in the area was contaminated with saltwater.
Dr. Cardenas and his team identified two kinds of aquifers in the area: the surficial aquifer and the deeper aquifer. The surficial aquifer is a shallow aquifer usually made up of beach sand about 10-15 feet deep. While the deeper aquifer, where groundwater is extracted, lies below the surficial aquifer.
Using the Electrical Resistivity Tomography, Dr. Cardenas and his team were able to develop a model illustrating how seawater infiltrates groundwater. The model explains that seawater moves through the water table like fingers slowly seeping through each layer. According to the model, it would take 5-10 years in order for seawater to be taken in and flushed out. However, the model holds true only to the surficial aquifer.
As with the case of deeper aquifers, the team found out that water pumps (colloquially bomba or poso) acted as injectors—taking the saltwater brought by the storm surge directly into the deeper aquifers, contaminating the groundwater.
The team returned 6 months after their first visit and found that the groundwater had become drinkable. This was due to the natural dilution of the groundwater within the deeper aquifer. Although the salinity level has been greatly reduced, there is still the issue of bacterial contamination within the groundwater in San Antonio.
“Since growing coastal populations will continue to rely on groundwater for their needs, strategies for reducing vulnerability to intense storm surge-caused groundwater contamination and mitigating its effects are needed,” researchers stated in the study.
With the extent of damage found in Barangay San Antonio, Dr. Cardenas reiterates that Typhoon Yolanda was one of the worst typhoons in history. However, this was not the first, nor the last—there have been at least 2 cases of typhoons as strong as Yolanda recorded in 1897, with storm surge heights of at least 7 meters. Dr. Cardenas concludes that ferocious typhoons like Yolanda, though rare events, will certainly happen in the future.
With super typhoons being a repeatable and possible occurrence, Filipinos ought to be more prepared in reducing vulnerability and preventing similar damages. The NOAH Lecture Series aims to enhance and enrich local knowledge on storm surges and other natural hazard events that have remarkable impact on the lives of many Filipinos. Project NOAH, a risk reduction tool for vulnerable communities, provides necessary weather information through the NOAH website and its mobile applications. The Project NOAH website makes use of weather tools, sensors, and hazard and risk assessment tools to provide near real-time weather data and hazard information—tools that are helpful in proper disaster preparation and long-term disaster planning and mitigation.
Source: Typhoon Ruby (Hagupit) Update (NOAH DOST)
Flash Points: Business Rules, Events, and Integrity
Intuitively, we know that certain business rules apply when certain events occur. But how exactly?
At the risk of stating the obvious, let me begin by clarifying that a business rule and an event are not the same thing. There should be no confusion about that. A business rule gives guidance; an event is something that happens.
How do business rules and events relate? Consider the business rule: A customer must be assigned to an agent if the customer has placed an order. Figure 1 presents a concept model diagram outlining the relevant terms and wordings for this business rule statement.
Figure 1. Terms and Wordings for the Agent-Assignment Business Rule
The business rule itself has been expressed in declarative manner using RuleSpeak®. This means, in part, that it does not indicate any particular process, procedure, or other means to enforce or apply it. It is simply a business rule — nothing more, nothing less.
Declarative also means that the business rule makes no reference to any event where it potentially could be violated or needs to be evaluated. The business rule does not say, for example, "When a customer places an order, then …."
This observation is extremely important for the following reason. "When a customer places an order" is not the only event when the business rule could potentially be violated. Actually, there is another event when this business rule could be violated: "When an agent leaves our company…." This other event could pose a violation of the business rule under the following circumstances: (a) The agent is assigned to a customer, and (b) that customer has placed at least one order.
In other words, the business rule could potentially be violated during two quite distinct kinds of events.
- When a customer places an order …
- When an agent leaves our company …
The first is rather obvious. The second is much less so. Both events are nonetheless important because either could produce a violation of the business rule.
This example is not atypical or unusual in any way. In general, every business rule (expressed in declarative form) produces two or more kinds of events where it could potentially be violated or needs to be evaluated. (I mean produces in the sense of can be analyzed to discover.)
We call these events flash points. Business rules do exist that are specific to an individual event, but they represent the exception not the general case.
Let's summarize what I've said so far:
- Business rules and events, while related, are not the same.
- Specifying business rules declaratively helps ensure no flash point is missed.
- Any given business rule, especially a behavioral business rule, needs to be evaluated for potentially multiple flash points.
Figures 2 and 3 provide additional examples to reinforce this last point.
Figure 2. Multiple Events for a Simple Business Rule
Figure 3. Multiple Events for a More Complex Business Rule
Why is that last point so important? The two or more events where a business rule needs to be evaluated are likely to occur within at least two, and possibly many, different processes, procedures, or use cases. They might also occur anywhere in ad hoc (unmodeled) business activity.
Yet for all these different processes, procedures, use cases, and other activity, there is only a single business rule. By specifying the business rule only once, and faithfully supporting all its flash points wherever they occur, you ensure consistency and integrity everywhere.
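A small illustrative sketch (not from any particular rule engine) shows the idea: the rule is specified once, and each flash point invokes the same check.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Customer:
    name: str
    agent: Optional[str] = None
    orders: list = field(default_factory=list)

def rule_violated(customer):
    # "A customer must be assigned to an agent if the customer has placed an order."
    return bool(customer.orders) and customer.agent is None

def on_order_placed(customer, order):      # flash point 1: customer places an order
    customer.orders.append(order)
    if rule_violated(customer):
        print(f"VIOLATION: {customer.name} has an order but no agent")

def on_agent_leaves(customers_of_agent):   # flash point 2: agent leaves the company
    for customer in customers_of_agent:
        customer.agent = None
        if rule_violated(customer):
            print(f"VIOLATION: reassign {customer.name} before the agent departs")

c = Customer("Acme Co")
on_order_placed(c, "order-1")              # fires: no agent assigned yet

Hard-coding the check inside each event handler separately is exactly what produces the inconsistencies described below; here both handlers delegate to the single declarative rule.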
Discovering and analyzing flash points for business rules often also proves a very useful activity in validating business rules. Important and sometimes surprising guidance issues (a.k.a. business policy questions) often crop up. This capability is one of many for validation and verification of business rules that business-rule-friendly tools can and should support.
This type of business-rule-centric event analysis is not a feature of any traditional requirements or IT methodology — and perhaps even most importantly, of any widely-used automated rule platform. Once you fully appreciate that crucial insight, you can begin to see why legacy systems so often produce such inconsistent results.
References
1. Merriam-Webster Unabridged Dictionary, "event," sense 1a.
2. RuleSpeak®: www.RuleSpeak.com
3. Ronald G. Ross, Business Rule Concepts: Getting to the Point of Knowledge (4th ed.), 2013, Chapter 8, pp. 99-102.
# # #
Processes are definable portions of a system or subsystem that consist of a number of individual elements, actions, or steps. A process is defined as "a set of interrelated resources and activities which transform inputs into outputs with the objective of adding value."
A brief description of common business functional processes include the following:
The human resources (HR) department is responsible for an analysis of the needs and training of the workforce, employee turnover analysis, absenteeism analysis and attitude surveys. In addition, the HR department may recruit, select and hire people for the organization.
As a support service, production engineering is the problem solving arm of the company. The engineering department should be proactive (always searching) in their problem solving activities. The planning of new equipment or processes is a must for this department.
Sales and Marketing
It is up to sales and marketing to develop effective plans to identify customers and markets for the company’s current products and services, and to identify wants and needs for new products. They should work with engineering in order to pass along customer ideas or desires. The development of a marketing plan helps guide the production plan.
The financial department, in many plants, includes the accounting department. The accounting function compiles monthly statements and the profit or loss statements for the company. A standard cost system should be in place so that data can be collected and measured against it. Budget forecasts, capital project requests and external funding can also be coordinated by the finance department.
In the manufacture of certain products, there are numerous legal ramifications. The theories based upon breach of warranty have a statutory basis in the Uniform Commercial Code. Product safety requirements and labelling laws not only protect the consumer, but also should reduce the liability risk to the company.
The manufacturing activity is associated with companies manufacturing a product or products. Manufacturing takes designs from engineering, schedules from planning and assembles and tests the company products. For a service organization, this function is replaced by the personnel performing the service.
Safety and Health
The safety and health department aids the company in complying with local, state, federal and industry regulations. The best known safety agencies include OSHA, state safety agencies, NFPA, etc. These agencies impact the establishment of a safety program, safety committee and special safety task-forces for the company.
Legal and Regulatory
A legal department (or attorneys on retainer) may be necessary to handle legal matters, especially in the very litigious society of today. A review of purchase agreements, land contracts, leases, rights-of-way, tax abatements, economic impact grants, etc. are examples.
Research and Development (R&D)
Research and development activities are critical for the future of the company. The customer is satisfied for a certain time span with the existing product or service, but eventually the customer will want a new and improved product. Interaction with the marketing function and customer is needed to generate new ideas and products.
The securing of the proper raw materials, at the right time, is a basic requirement of the purchasing department. They must find ways to reduce the number of corporate suppliers, without increasing the risk of shutting down lines due to a lack of product. The forming of alliances and partnerships among suppliers and customers should always be of concern.
IT or MIS
The information technology (IT) or management information systems (MIS) function is a key ingredient in the factory of the future. Many companies have already exploited information technology. Some of the benefits of IT or MIS include electronic data interchange with customers and suppliers, electronic e-mail for communications, bar coding for all products, data collection for analysis, use of personal computers (PCs), online order status and real time inventory.
Production Planning and Scheduling
Production planning and scheduling is a department that helps to coordinate the flow of materials throughout the plant. It tracks the levels of materials and inventory, schedules the product, tracks the product and informs customers and suppliers of progress.
The quality department has but one function in the corporation which is to coordinate the total quality effort of a company and direct the quality assurance activities.
An environmental department, separate from safety and health, is desirable. The proliferation of new regulations makes this a very volatile and difficult field. What was permissible in the past can be deemed a violation today. The impact of the Clean Air Act is presently a concern, as is effluent water. The Environmental Protection Agency (EPA) has jurisdiction over many of the emissions from a company.
A technology department is a luxury that many large companies can afford. This department is capable of scanning the magazines, journals, trade shows, conferences, patent applications and libraries looking for new products and technology. Such an arrangement offers a competitive advantage.
Servicing relates to either manufactured or sold products or the servicing of client accounts for non-manufacturing companies. Servicing is responsible for fixing problems with the product, and assuring that customers are satisfied.
SQL Injection (SQLi) Attacks: Definition, Examples, and Prevention
What Is SQL Injection (SQLi)?
A SQL injection is a common hacking technique which can compromise a database. By "injecting" an SQL command or code fragment into a legitimate data entry field (like a password field), attackers can use SQL to communicate directly with a database. This works because SQL does not differentiate between the control and data planes.
A successful exploit can trick the database into sharing restricted data, modify data, execute administration operations on the database (like shutting down a DBMS such as Db2), recover the content of a given file present on the DBMS file system, and even issue commands to the operating system.
SQLi is a type of code injection attack.
Protection Against SQLi Attacks
Here are some ways to protect against SQL injection attacks:
- Use parameterized queries, validate user-submitted input, and use stored procedures
- Avoid dynamic SQL
- Block known malicious input
- Sanitize inputs
Limiting the ways that queries are made to the database can close loopholes that attackers use. Stored procedures combat SQL injection attacks by limiting the types of statements that can affect the database.
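For illustration, here is a minimal sketch using Python's built-in sqlite3 module; the same pattern applies to other SQL client libraries:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: attacker input is spliced into the SQL text, so the payload
# joins the control plane and rewrites the query logic.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string-built query returned:", rows)      # every row leaks

# SAFE: a parameterized query keeps the input in the data plane; the payload
# is treated as a literal string and matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)     # []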
One approach is to enforce strict input validation by only accepting characters from a list of safe values (also known as whitelisting). Another approach rejects any input that matches a list of potentially malicious values (also called blacklisting).
Blocking everything except approved entries can be very effective, but is difficult to implement and requires continual maintenance. Attempting to block malicious inputs is generally seen as an ineffective technique because there are many ways to fool the filters looking for malicious code. For example, attackers can:
- Use upper and lowercase letters to bypass case-sensitive filters
- Use the escape character to bypass filters
- Use different types of encoding to avoid detection
These are just a few examples of the many methods used to try and bypass these types of defenses.
Detection of these attacks can be enhanced using decryption, because SQL injection attacks usually arrive over HTTPS on port 443 using encryption protocols such as TLS, and decrypted SQL traffic can then be inspected for injection-style patterns. For that reason, it's critical that security tools have decryption capabilities for all common encryption protocols, including TLS 1.3 and Kerberos.
Parkinson’s is a neurodegenerative brain disease that affects almost 5 million people worldwide and is second only to Alzheimer’s in global prevalence. Researchers have studied the causes, analysed the symptoms, and refined advanced genomics and proteomics techniques to create increasingly sophisticated cellular profiles of Parkinson’s disease pathology — all to little avail. Part of the problem, some have argued, is that the methodologies used to study the disease have barely changed since it was first described by Dr. James Parkinson in 1817.
However, The Michael J. Fox Foundation for Parkinson’s Research (MJFF) and Intel Corporation announced earlier today a collaboration to detect patterns in the progression of the disease by analysing the data collected from wearable technologies — essentially, monitoring devices that can track a patient’s symptoms throughout the day.
The devices are capable of tracking a whole range of symptoms, such as gait, hand tremors, and sleep patterns. Unlike the traditional method that encourages patients to manually log their symptoms — requiring an incredible amount of time and effort — the wearable technologies can collect 300 observations a second.
With this data, the team at Intel use their expertise to create algorithms that effectively analyze these datasets and identify important patterns. Having these algorithms at their disposal, medical researchers at the Michael J. Fox Foundation can then choose which algorithms to apply to a particular patient’s compilation of data — thereby tailoring the technology to individual cases, and exercising their medical expertise to extract meaningful information.
As the company describe:
“To analyze the volume of data, Intel developed a big data analytics platform that integrates a number of software components including Cloudera CDH* — an open-source software platform that collects, stores, and manages data. The data platform is deployed on a cloud infrastructure optimized on Intel(R) architecture, allowing scientists to focus on research rather than the underlying computing technologies. The platform supports an analytics application developed by Intel to process and detect changes in the data in real time. By detecting anomalies and changes in sensor and other data, the platform can provide researchers with a way to measure the progression of the disease objectively.”
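As a simplified illustration of the kind of signal processing such a platform might apply, the sketch below assumes 50 Hz accelerometer samples and checks spectral power in the roughly 4-6 Hz band where Parkinsonian resting tremor typically falls:

import numpy as np

fs = 50                                    # accelerometer sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: a 5 Hz tremor component buried in sensor noise.
signal = 0.5 * np.sin(2 * np.pi * 5 * t) + np.random.normal(0, 0.1, t.size)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
tremor_band = power[(freqs >= 4) & (freqs <= 6)].sum()
total = power[freqs > 0.5].sum()           # ignore the DC/drift region
print("tremor-band fraction:", tremor_band / total)  # a high value flags tremor

A production pipeline would obviously be far more sophisticated, but the principle is the same: at 300 observations a second, spectral features like this can be computed continuously rather than during a brief clinic visit.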
The hope is that such a platform will soon be able to store other types of data — such as patient, genome, and clinical trial data — as well as incorporate more advanced technologies, like machine learning and graph analytics. As Todd Sherer, PhD, CEO of The Michael J. Fox Foundation emphasizes, the potential for these technologies to revolutionize current approaches to Parkinson’s is staggering:
“Nearly 200 years after Parkinson’s disease was first described by Dr. James Parkinson in 1817, we are still subjectively measuring Parkinson’s disease largely the same way doctors did then…Data science and wearable computing hold the potential to transform our ability to capture and objectively measure patients’ actual experience of disease, with unprecedented implications for Parkinson’s drug development, diagnosis and treatment.”
Currently, the project has been tested on a trial group of 25 participants, and is being prepared for mass use. A smartphone app, with which patients can add notes to their records, is also in the process of development.
For some time, observability in IT operations has been associated with three data types that monitoring systems must ingest in order to be at least somewhat effective: logs, metrics, and traces. This limit to the type of data consumed is far from efficient when it comes to the true needs of a present-day IT operations practice.
With observability’s deep connection to causality, monitoring systems must be equipped to provide a causal analysis of the observed system. As we lean into true observability for IT operations practices, practitioners are charged with ensuring the wellbeing of enterprise systems to accurately observe those systems through causality analysis.
IT System State Changes Make Up Digital Business Processes
Industry visionaries have identified the need for a change in the goal of monitoring IT systems — moving from monitoring to observing — as apprehensions rise around the limitations of traditional monitoring technologies. While many of these industry leaders believe the key to observability lies in logs, metrics, and traces, harnessing this broad array of data does not automatically lead to a true understanding of the IT system state changes that make up digital business processes.
Derived from mathematical control theory, observability hinges on the idea that we want to determine the actual sequence of state changes that a mechanism, whether deterministic or stochastic, goes through during a given time period. The difficulty is that we do not always have direct access to the mechanism to observe the state changes and document the sequence. We have to tap data or signals produced by the system and its surrounding environment and infer the state-change sequence through well-defined procedures. This means we can observe a mechanism accurately only if it and its environment produce a data set for which pre-existing procedures allow a precise inference of the state-change sequence.
In the past, monitoring systems were not built for observability, but rather for the capturing, storing, and presenting of data generated by underlying IT systems. This meant that human operators were responsible for making conclusions around the IT system and providing analysis of the data set. While topology diagrams, CMDB data models, and other representations of the IT system aided in this process, they stood independent from the actual data ingested. At best, these models could lead to system modifications through context to data produced by the monitoring system.
Even with new technology that allows us to both ingest data and proactively identify patterns and anomalies in the data being produced, we still lack in true observability into the systems at hand. This is because the patterns and anomalies are derived from the data sets themselves, rather than insights from the system that generated that data. In short, these patterns and anomalies focus on correlation and departures from normalities, and not on causal relationships around the actual state changes within the system.
Causality Vs. Correlation
Let’s take some time to look at the difference between causality and correlational normality through an example: two events are captured by two data items — CPU utilization standing at 90 percent, and end-user response time for a given application clocking in at three seconds. When one occurs, so does the other.
The fact that when one event occurs, so does the other shows correlational normality. It does not mean that they have a causal relationship.
For the two events to have a causal relationship, it would also have to be shown that an intervention lowering the level of CPU usage to, let's say, 80 percent would in turn shorten the response time by two seconds. We show causality by exhibiting that an intervention influencing one event results in a change in the other event without a second intervention.
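A small simulation (with invented numbers) makes the distinction concrete: a hidden workload drives both metrics, so they correlate almost perfectly, yet intervening on CPU alone leaves response time untouched:

import numpy as np

rng = np.random.default_rng(0)
load = rng.uniform(0, 1, 10_000)                 # hidden common cause: workload
cpu = 90 * load + rng.normal(0, 2, 10_000)       # CPU %, driven by load
resp = 3 * load + rng.normal(0, 0.1, 10_000)     # response time (s), driven by load

print("correlation:", np.corrcoef(cpu, resp)[0, 1])   # ~0.99

# Intervention: cap CPU at 80% without changing the workload. Because load,
# not CPU, drives response time, the response distribution does not move.
cpu = np.minimum(cpu, 80)
resp_after = 3 * load + rng.normal(0, 0.1, 10_000)
print("mean response before:", resp.mean(), "after:", resp_after.mean())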
While most businesses will reject the idea of “conducting an experiment” on an IT system to prove causality, there is a way to gain insight into causality from the data generated by the system. In fact, system state changes are events that produce a causal relationship when a sequence of those events occurs. In turn, with the establishment of causality, we bring about a true understanding of system state changes. This is observability.
Formatting a zone file is easier than you think! You can access RFC 1035 for further information on how zone files are defined.
ZONE FILE FORMAT
Overall, in DNS Made Easy, you can import your zone files by following the BIND zone file format as shown below. If you have all the records you need from your current provider, you can use the zone below as a sample and replace the domain name with yours and its respective values and import your domain into DNS Made Easy. You may also choose your own TTL value.
$ORIGIN yourdomain.com.
www.yourdomain.com.        3600 IN A     192.0.2.10
app.yourdomain.com.        3600 IN A     192.0.2.20
mail.yourdomain.com.       3600 IN A     192.0.2.30
yourdomain.com.            1200 IN AAAA  ::1
app.yourdomain.com.        1200 IN AAAA  ::1
yourdomain.com.            1200 IN CAA   0 issue "letsencrypt.org"
yourdomain.com.            1200 IN MX    10 mail
yourdomain.com.            1200 IN TXT   "v=spf1 mx a ~all"
payroll.yourdomain.com.    1200 IN NS    yourexternalserver.com.
portal.yourdomain.com.     3600 IN CNAME www.yourdomain.net.
_sip._tls.yourdomain.com.  1200 IN SRV   100 10 5660 sipdir.online.lync.com.
EXAMPLE 2: A shorter version of your zone file
- You can also make this file shorter by only stating the hostname of your record since the $ORIGIN will make it so all the records have the domain name ( yourdomain.com) appended to the name.
- If no record name is included, then the default name will be the one mentioned within your $ORIGIN variable. For instance, the CAA record below refers to yourdomain.com, whereas the NS record refers to payroll.yourdomain.com.
$ORIGIN yourdomain.com.
www        3600 IN A     192.0.2.10
app        3600 IN A     192.0.2.20
mail       3600 IN A     192.0.2.30
           1200 IN AAAA  ::1
app        1200 IN AAAA  ::1
           1200 IN CAA   0 issue "letsencrypt.org"
           1200 IN MX    10 mail
           1200 IN TXT   "v=spf1 mx a ~all"
payroll    1200 IN NS    yourexternalserver.com.
portal     3600 IN CNAME www.yourdomain.net.
_sip._tls  1200 IN SRV   100 10 5660 sipdir.online.lync.com.
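If you want to sanity-check a zone before importing it, one option is a short script. The sketch below assumes the third-party dnspython package (pip install dnspython); check_origin=False is passed because this fragment has no SOA record:

import dns.rdatatype
import dns.zone

zone_text = """
$ORIGIN yourdomain.com.
www  3600 IN A   192.0.2.10
mail 3600 IN A   192.0.2.30
@    1200 IN MX  10 mail
@    1200 IN TXT "v=spf1 mx a ~all"
"""

# check_origin=False skips the SOA/NS-at-origin requirement for this fragment.
zone = dns.zone.from_text(zone_text, origin="yourdomain.com.", check_origin=False)
for name, ttl, rdata in zone.iterate_rdatas():
    print(name, ttl, dns.rdatatype.to_text(rdata.rdtype), rdata)

A parse error here will surface a malformed record before it ever reaches the import step.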
The Composition of Records in a zone file
A zone file is a collection of resource records with each record entry described in the following sequence:
Format: Host Label | TTL | Record Class | Record Type | Record Data
- Host Label – A host label helps to define the hostname of a record and whether the $ORIGIN hostname will be appended to the label. Fully qualified hostnames terminated by a period will not have the origin appended.
- TTL – The Time To Live (TTL) is the amount of time that a DNS record will be cached by an outside DNS server or resolver, in seconds.
- Record Class – DNS Made Easy only uses the IN classes of records.
- Record Type – The type of a record, such as CNAME, AAAA, or TXT.
- Record Data – The data within a DNS answer, such as an IP address, hostname, or other information. Different record types will contain different types of record data.
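As a rough illustration of this sequence, the following minimal Python sketch (a hypothetical helper, not part of DNS Made Easy's tooling) splits a fully qualified, whitespace-separated record line into the fields described above; it does not handle $ORIGIN expansion, omitted names, or quoted data containing spaces:

# Minimal sketch: parse one whitespace-separated zone-file record line
# of the form "<host> <ttl> <class> <type> <data...>".

def parse_record(line: str) -> dict:
    host, ttl, rclass, rtype, *data = line.split()
    return {
        "host": host,                           # host label
        "fully_qualified": host.endswith("."),  # trailing dot: origin not appended
        "ttl": int(ttl),                        # cache lifetime in seconds
        "class": rclass,                        # DNS Made Easy only uses "IN"
        "type": rtype,                          # e.g. A, AAAA, CNAME, MX, TXT
        "data": " ".join(data),                 # record data, type-dependent
    }

print(parse_record("www.yourdomain.com. 3600 IN A 126.96.36.199"))
# {'host': 'www.yourdomain.com.', 'fully_qualified': True, 'ttl': 3600,
#  'class': 'IN', 'type': 'A', 'data': '126.96.36.199'}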
Things to consider when importing your zone files to DNS Made Easy:
When you import a zone file into DNS Made Easy, we will automatically create an SOA (Start of Authority) record for your domain. DNS Made Easy will also include the assigned name servers for the domain being imported. These are the same name servers that must be delegated at the domain's registrar so that your records can propagate throughout the Internet.
HTTP redirection records must be added manually, as these are not technically DNS records but part of an application-level protocol for distributed, collaborative, hypermedia information systems (RFC 2616).
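For context, an HTTP redirect is produced by a web server at the application layer, which is why it cannot be expressed as a DNS record. A minimal, hypothetical Python handler illustrates the idea:

# Illustration only: HTTP redirection happens at the application layer,
# not in DNS. The web server answers with a 301 and a Location header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)  # permanent redirect, an HTTP-level answer
        self.send_header("Location", "https://www.yourdomain.net/")
        self.end_headers()

if __name__ == "__main__":
    # Serves every GET with a 301; press Ctrl+C to stop.
    HTTPServer(("127.0.0.1", 8080), RedirectHandler).serve_forever()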
5G Use Cases: Healthcare
April 23, 2018
Perhaps less talked about, but undeniably crucial and exciting, is 5G's place within the healthcare industry.
There are a number of ways in which 5G network technologies will benefit the healthcare sector, from lifespan improvements for the public at large to increased accessibility for those currently unable to reach basic services.
This article will explore some of the use-cases for 5G in the healthcare sector, including:
- Vital tracking wearables
- Secure remote consultations
- Assisted and fully-robotic surgery
- 2D/3D scanning
Vital tracking wearables
It is now commonplace for people to own connected leisure wearables, such as fitness trackers, that actively capture, process and report real-time data for the user to access and assess. With the introduction of 5G, however, similar trackers will be able to take advantage of much faster connection speeds and ultimately transform the standard doctor/patient relationship around the world.
Clinical tracking devices and low-energy, low bit-rate sensors will allow doctors to remotely monitor and analyse patients' vitals and activity, and even potentially confirm whether a patient has taken their medication, without the need to travel to a surgery or hospital, freeing up vital time and resources for use elsewhere. These devices are known as IoMT (Internet of Medical Things) devices.
The data captured by IoMT devices will support fully predictive analytics, greatly reducing the time it takes to detect a health issue and significantly increasing the accuracy of doctors' diagnoses.
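As a purely illustrative sketch (the frame layout and field choices are invented for this example, not any clinical standard), the following Python snippet packs a single vitals reading into the kind of compact binary frame a low bit-rate IoMT sensor might stream:

import struct
import time

# Invented frame layout: uint32 timestamp, uint16 heart rate in bpm,
# uint16 SpO2 in hundredths of a percent.
FRAME = struct.Struct("!IHH")  # network byte order, 8 bytes per reading

def encode_reading(heart_rate_bpm: int, spo2_pct: float) -> bytes:
    return FRAME.pack(int(time.time()), heart_rate_bpm, int(spo2_pct * 100))

def decode_reading(frame: bytes):
    ts, hr, spo2 = FRAME.unpack(frame)
    return ts, hr, spo2 / 100

frame = encode_reading(72, 97.5)
print(len(frame), decode_reading(frame))  # 8 bytes per sample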
Secure remote consultations
With 5G services, trips to your local doctor's surgery could become a thing of the past. The greatly improved data rates that 5G brings will allow 3D ultra-HD live video streaming, letting you connect with your GP in real time and even access VR (virtual reality) services simultaneously, helping doctors explain care and patients understand treatment.
These services are invaluable for those patients who are less able to access physical locations because of health issues or the rural location of their home.
Aside from patient care, the same remote streaming consultations could be used to deliver professional training to healthcare professionals around the world who would otherwise be confined to textbooks or basic online learning resources. Using VR, medical students would be able to virtually follow a real-life example of patient care.
Assisted and fully robotic surgery
It may sound like something out of a science-fiction movie, but automated, assisted and fully robotic surgery has already begun in operating theatres around the world and will improve significantly with the introduction of 5G. This includes haptic devices such as glove-like controllers that let a surgeon remotely move and 'feel' the patient they are operating on from a different location.
Such robotic devices could be used for extremely long operations in which a human surgeon might become fatigued or lose concentration, or as assistants that guide and assess the patient while the surgeon operates.
2D/3D scanning
Potentially one of the oldest and most important diagnostic techniques is scanning a patient's body for clues to their condition. With 5G, scans could be completed in a fraction of the time they currently take and made available for review more quickly, and to more healthcare professionals, in near real time. Scanning could also feed into the video and VR capabilities that 5G brings, allowing multiple professionals to assess a patient and ultimately producing more accurate diagnoses.
It would also be able to assist in training, again for trainee professionals to better understand a patient and their symptoms.
The ultra-reliable services that 5G technologies bring to mobile communication will no doubt be life-changing for many people around the world, particularly in the field of healthcare. Both patients and professionals will benefit from the improvements to treatment and learning services.
unCaptcha: Defeating reCaptcha's Audio Challenge
A captcha challenge is a website's first line of defense against automated attacks: it requires visitors to prove they are human.
Google's reCaptcha, introduced in 2014 and used across a significant number of sites, relies on an advanced risk analysis engine and offers both audio and image challenges. Here, security researchers targeted the audio captcha.
Security researchers from the University of Maryland (UM) presented unCaptcha, a low-resource, fully automated attack on Google's 2017 reCaptcha audio captcha with a high success rate.
"We have evaluated unCaptcha using over 450 reCaptcha challenges from live websites, and showed that it can solve them with 85.15% accuracy in 5.42 seconds, on average: less time than it takes to even play the audio challenge," the researchers said.
Attacking captchas is normally assumed to require substantial resources, and even then success rates are low. Here, the researchers demonstrated a low-resource attack with a high success rate.
How unCaptcha works
unCaptcha is completely automated. It obtains the audio challenge, splits it into segments for per-sound-bite analysis, and uploads the segments to online speech recognition services such as IBM, Google Cloud, Google Speech Recognition, Sphinx, Wit-AI and Bing Speech Recognition.
Once the results are collected, it assembles and submits the captcha solution. It is even capable of locating the "I'm not a robot" checkbox and clicking it. The researchers also published the unCaptcha code publicly on GitHub.
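In sketch form, the ensemble step looks like the following Python snippet. This is a simplified illustration, not the researchers' published code; the transcriber functions are hypothetical placeholders standing in for the real speech-recognition clients:

# Simplified illustration of unCaptcha's ensemble step: each audio segment
# is transcribed by several speech-to-text services, and the digit that
# most services agree on wins.
from collections import Counter
from typing import Callable, List

Transcriber = Callable[[bytes], str]  # audio segment in, predicted digit out

def solve_audio_captcha(segments: List[bytes],
                        transcribers: List[Transcriber]) -> str:
    digits = []
    for segment in segments:
        votes = Counter()
        for transcribe in transcribers:
            guess = transcribe(segment)          # e.g. "7"
            if guess.isdigit():
                votes[guess] += 1
        digits.append(votes.most_common(1)[0][0] if votes else "?")
    return "".join(digits)

# Stub transcribers standing in for real services:
stub_services = [lambda s: "7", lambda s: "7", lambda s: "1"]
print(solve_audio_captcha([b"segment-audio"], stub_services))  # -> "7"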
As mitigations, they suggested broadening the vocabulary of sound bites beyond just digits and adding background noise to make segmentation more difficult.
The full research paper, titled "unCaptcha: A Low-Resource Defeat of reCaptcha's Audio Challenge", is available for download.
10 Gigabit Ethernet Technical Brief
Since the ratification of the IEEE 802.3ae 10 Gigabit Ethernet standard in June 2002, the penetration of 10 Gigabit Ethernet has rapidly increased. The June 2006 ratification of IEEE 802.3an 10GBASE-T (10 Gigabit Ethernet over UTP) has further extended this trend. As more and more bandwidth-intensive and latency-sensitive applications are deployed in enterprise, data center and service provider networks, 10 Gigabit Ethernet has gained market acceptance due to the plug-and-play simplicity of the Ethernet standard and its inexpensive wiring options. 10 Gigabit Ethernet is the natural migration path for forward-thinking enterprise, data center and service provider architects.
10 Gigabit Ethernet IEEE Standard Interfaces
In June of 2002 the IEEE 802.3ae committee ratified the 10 Gigabit Ethernet standard and, along with the general specification, defined a number of fiber optic interfaces, as shown in Figure 1.
These standard interfaces attempted to satisfy a number of different objectives including support for Multimode Fiber (MMF), Single-Mode Fiber (SMF) and SONET compatibility. Of the seven interfaces that were standardized in 2002, only 10GBASE-LR, 10GBASE-ER and to a lesser extent 10GBASE-SR have gained broad market acceptance. In order to meet market demands, 10 Gigabit Ethernet standardization efforts are evolving. There are several new 10 Gigabit Ethernet interfaces that have been proposed and are in various stages of development and standardization.
10Gtek offers a brief comment below on each interface and its level of market acceptance:
10GBASE-SR—The relatively short distances (30–80 meters on traditional MMF) that can be achieved with this interface limit its use to within a data center. A typical use for 10GBASE-SR in a data center is to interconnect two Ethernet switches or to link an end-device (e.g. a 10 Gigabit Ethernet server or storage device) to an Ethernet switch.
10GBASE-LR —The 10GBASE-LR is one of the most popular 10 Gigabit Ethernet interfaces. It is used for nearly every 10 Gigabit Ethernet need and will likely remain an interface of choice for the foreseeable future.
10GBASE-LX4 —The 10GBASE-LX4 uses 4 lasers in parallel as opposed to one single serial laser source. This technical characteristic has unfortunately limited the viability of the 10GBASE-LX4 because it is simply too costly and complex to produce 10GBASE-LX4 optics in volume. As a result, most optical vendors have chosen not to develop the 10GBASE-LX4 optics for the newer MSAs such as XFP and hence there is limited availability of this interface.
10GBASE-ER — Almost all 10GBASE-ER 10 Gigabit Ethernet ports are used by Ethernet service providers for inter-POP connectivity. Due to its relatively high cost, this interface is only used when a 10 Gigabit Ethernet signal must be carried farther than the roughly 10 kilometers that 10GBASE-LR supports (10GBASE-ER reaches up to 40 kilometers).
10GBASE-ZR—To accommodate the growing demand for 10 Gigabit Ethernet over distances longer than 10GBASE-ER offers, optics vendors supply 10GBASE-ZR modules reaching roughly 80 kilometers. Note that ZR is a de facto vendor specification based upon IEEE 802.3ae's framework rather than a ratified IEEE interface.
WAN Interfaces (SW, LW, EW) —The sole purpose of the WAN PHY was to achieve 10 Gigabit Ethernet and SONET OC-192/STM-64 compatibility. By virtue of this purpose, this interface will only be of interest to and is primarily used by service providers.
10GBASE-CX4 (IEEE 802.3ak)—10GBASE-CX4 is the copper 10 Gigabit Ethernet standard, but instead of running over twisted pair it is specified to run over twinaxial cable (the same cable used for InfiniBand) with 24-gauge wire. This cable is fairly rigid and considerably more costly than Category 5 or 6 UTP. The primary applications for 10GBASE-CX4 are as a standards-compliant stacking interface (e.g. a stacking interface between two individual fixed-configuration switches) and potentially for attaching an end-device (e.g. a server or NAS) to an Ethernet switch in a data center. The latter application has a 15 meter distance limitation.
SFP+ Direct Attach — SFP+ Direct Attach is the successor technology to 10GBASE-CX4. As the name implies, it uses the SFP+ MSA and, by pairing inexpensive copper twinaxial cable with SFP+ connectors on both ends, provides 10 Gigabit Ethernet connectivity between devices with SFP+ interfaces. SFP+ Direct Attach has a 10 meter distance limitation, so its target application is interconnecting top-of-rack switches with application servers and storage devices within a rack.
10GBASE-T (IEEE 802.3an)
10GBASE-T is a copper 10 Gigabit Ethernet standard that runs over twisted pair cabling (e.g. UTP). One can think of it as the 10 Gigabit Ethernet equivalent of the 1000BASE-T standard.
The IEEE formed a 10GBASE-T working group in November 2003 and ratified the final standard in June 2006.
The principal drivers behind 10GBASE-T are low cost and utilization of widely deployed and well understood twisted pair cabling. The primary application for 10GBASE-T will be for end-device (e.g server or NAS) attachment to an Ethernet switch in a data center.
The key advantages for 10GBASE-T are as follows:
? No change to the Ethernet frame format or the minimum and maximum frame sizes
? A full duplex only standard supporting star-wired LANs with point-to-point links and structured cabling topologies
? Support for auto-negotiation
? Support for coexistence with 802.3af “Power over Ethernet” (PoE)
? Support for the following link distances:
- Category 6 UTP – up to 55 meters
- Augmented Category 6 (Category 6a) UTP – up to 100 meters
10GBASE-LRM (IEEE 802.3aq: Long Reach on FDDI-grade Multimode Fiber)
A study group was formed in November 2003 by the IEEE to investigate the standardization of a 10 Gigabit Ethernet optical interface supporting a 300 meter distance on "FDDI grade" (62.5/125 µm) multimode fiber.
The objectives for this standard are as follows:
? Leverage existing 10 Gigabit Ethernet technology (10GBASE-R PCS)
? Support the fiber media selected from IEC 6
? Achieve at least 220 meters on installed 500 MHz·km multimode fiber, and 300 meters on higher-bandwidth multimode fiber
? Pricing less than or equal to that of 10GBASE-LR. This standard requires a 1310 nm laser, hence the price comparison to 10GBASE-LR
10 Gigabit Ethernet Pluggable Optics
Pluggable optics are very common for Gigabit Ethernet with the ubiquitous Small Form factor Pluggable (SFP) or "mini-GBIC". 10 Gigabit Ethernet is no different, and several separate Multi Source Agreements (MSAs) have been specified to enable 10 Gigabit Ethernet pluggable optics. Following is a brief rundown of the major MSAs.
XENPAK — XENPAK was the most mature and widely deployed of the various 10 Gigabit Ethernet MSAs in the early days of 10 Gigabit Ethernet, since it was the only MSA that could support all of the standard 10 Gigabit Ethernet interfaces. The XENPAK MSA was the most popular MSA for Ethernet switches.
X2 — X2 is a smaller version of XENPAK that is targeted at the same market as XENPAK.
XFP — XFP can best be described as a small form factor pluggable for 10 Gigabit Ethernet; it is to a XENPAK what an SFP is to a GBIC. XFP was the least mature of the various MSAs at its introduction and is designed for next-generation optical transceivers. Its main advantages over the other MSAs are its size, which allows for higher 10 Gigabit Ethernet port densities, and its lower power consumption; XFP has become one of the most commonly used 10 Gigabit Ethernet MSAs.
SFP+ — SFP+ is the newest pluggable optics technology, targeted at high-density 10 Gigabit Ethernet implementations, especially data center networks. With the smallest form factor and lowest power consumption of all the MSAs, SFP+ enables much higher port density for Ethernet switches and 10 Gigabit Ethernet adapters. SFP+ supports both optical transceivers and SFP+ passive copper cables, providing flexible and cost-effective 10 Gigabit Ethernet solutions, particularly over shorter distances.
Note: A 10 Gigabit Ethernet MSA is simply an agreement on the physical, thermal and mechanical characteristics of a pluggable transceiver and does not in any way dictate the internal 10 Gigabit Ethernet optical or copper interface (e.g. 10GBASE-LR vs. 10GBASE-CX4) inside the pluggable. In other words, a single point-to-point link can have different MSA formats (e.g. XENPAK vs. XPAK) on either end and will work just fine as long as the 10 Gigabit Ethernet optical or copper interface (e.g. 10GBASE-SR) inside each pluggable is identical.
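To make that note concrete, here is a tiny, purely illustrative check in Python: a link works when the PHY interface inside each pluggable matches, regardless of the MSA form factor on either end.

# Hypothetical illustration of the MSA note above.
def link_works(end_a: dict, end_b: dict) -> bool:
    # The MSA form factor ("msa") is irrelevant; only the PHY must match.
    return end_a["phy"] == end_b["phy"]

a = {"msa": "XENPAK", "phy": "10GBASE-SR"}
b = {"msa": "X2",     "phy": "10GBASE-SR"}
c = {"msa": "XFP",    "phy": "10GBASE-LR"}

print(link_works(a, b))  # True: same PHY, different MSAs
print(link_works(a, c))  # False: SR and LR optics will not interoperate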
Wireless Authentication with WPA2
Further development produced WPA2, which is equivalent to the 802.11i standard. Where WPA relied on the TKIP encryption algorithm, WPA2 provides AES encryption, 802.1X authentication and dynamic key management. For enterprises, WPA2 includes a connection to a Remote Authentication Dial-In User Service (RADIUS) server.
In wireless networks, user authentication is managed by the Extensible Authentication Protocol (EAP). In an enterprise WLAN, the authentication process is as follows (a toy sketch of the resulting port behavior appears after the steps):
- The authentication process creates a virtual port for each WLAN client at the access point.
- The AP blocks all data frames except for 802.1x-based traffic.
- 802.1x frames carry EAP authentication packets via the AP to an Authentication, Authorization, and Accounting (AAA) server running the RADIUS protocol.
- If the EAP authentication is successful, the server sends an EAP success message to the AP, which then allows the client to send data through the virtual port.
- Before opening the virtual port, data-link encryption between the WLAN client and the AP is established to ensure that no other WLAN client can access the port established for a given authenticated client.
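The controlled-port behavior described above can be modeled in a few lines of Python. This is a toy sketch of the concept only, not a real 802.1X implementation:

class VirtualPort:
    """Toy model of an 802.1X controlled port on an access point."""

    def __init__(self):
        self.authorized = False

    def handle_frame(self, frame_type: str) -> bool:
        """Return True if the frame is forwarded, False if dropped."""
        if frame_type == "eapol":
            return True           # 802.1X/EAP traffic always reaches the AP
        return self.authorized    # data frames blocked until EAP success

    def on_radius_result(self, success: bool):
        # The AP opens the port only after the EAP success message;
        # link-layer encryption keys are established before user data flows.
        self.authorized = success

port = VirtualPort()
print(port.handle_frame("data"))   # False: blocked before authentication
port.on_radius_result(True)        # EAP success relayed from the AAA server
print(port.handle_frame("data"))   # True: forwarded through the virtual port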
Additional security measures you can take are filtering clients based on their MAC addresses and not broadcasting the SSID of your WLAN, but don't rely on these measures without WPA2: on their own they are not enough to consider your wireless network secure.