Healthcare is a fast-paced industry that necessitates ongoing IT education.
As new information is learned and techniques are developed, healthcare professionals must receive proper training to remain relevant. A lack of education can result in unsafe working environments, dissatisfied employees, and higher business costs.
As more healthcare businesses move to a completely paperless system, a greater need for IT support exists.
Historically, patient information was filed in a paper system under lock and key, and it was considered safe and confidential. Now that sensitive information is accessed by patients and professionals using a variety of technologies, the need for security and training has only increased.
Technology has also been shown to reduce healthcare costs. A study from the University of Michigan found that the switch from paper to electronic records reduced outpatient care costs by 3 percent, an estimated savings of $5.14 per patient each month.
Challenges faced by healthcare professionals
The healthcare industry faces several notable challenges that can be addressed with proper training. Among these are:
- Privacy and security breaches
- Inadequate documentation
- Patient safety
Privacy and security
Privacy and security are crucial in an industry where so much sensitive information is accessed every day. Ongoing IT support ensures proper measures are taken to keep this data confidential.
According to recently published privacy and security statistics, 84 percent of individuals feel confident their medical records are safe from unauthorized viewing.
Conversely, 64 percent are concerned about the electronic exchange of health information. Regular training is recommended to educate staff on best practices and specific requirements for ensuring privacy and security standards are met.
Inadequate documentation can lead to the misrepresentation and mishandling of information.
If procedures aren’t documented properly, staff members may perform them incorrectly. Information could also become lost.
Ongoing education will provide a starting point for updating all documentation best practices so they remain current. Staff will also have the correct information when following important procedures.
Patient safety is another challenge that can be overcome with ongoing IT support and proper education.
Doctors employ the latest technology when performing medical procedures. These tools are designed to keep patients safe, but IT support will ensure the right technologies are available and working properly.
Training all relevant staff is an equally important part of this process.
Benefits of staff training and education
Properly training staff requires a significant amount of time and effort, but it should be an integral part of every healthcare organization.
The key is to create effective training programs that target important areas.
Ongoing training in the healthcare industry increases employee retention, bolsters staff morale, and improves overall patient satisfaction. This leads to a more profitable healthcare practice.
Overcome a lack of training
When overcoming a lack of healthcare training, you must first identify existing problems. Start with the areas that need the most attention.
Is your technology up-to-date and does your staff have proper access to it?
It can be difficult to find the training gaps, but recognizing them is the first step to implementing the right solutions. Determine the recurring issues that exist and formulate a plan of action.
Once you’ve identified the issues you need to tackle, decide which are the most critical and prioritize them. This allows you to find the most cost-effective means for acquiring technology or providing training.
Cost-effective training ideas
Education initiatives can be costly, especially when you are training a large number of employees on an ongoing basis. Some lower-cost options include:
- Hospital programs
- Classroom-based training
- Online training (where applicable)
The link between ongoing IT support and education
Ongoing IT support ensures all technology is current and all security standards are met.
This gives staff the technology needed to take advantage of the latest training methods and techniques so they may serve patients more effectively.
In 2010, a major disaster took place in the Gulf of Mexico, when an oil spill of unthinkable proportions led to irreparable environmental and ecological damage. Known as the Deepwater Horizon oil spill, it is the largest marine oil spill as well as the largest environmental disaster in US history, which is still haunting the world.
2018 witnessed yet another spill; this time it was not oil, but data, which has been termed the new oil of the industrialized internet economy. Yes, I am talking about the Facebook and Cambridge Analytica data scandal, in which the privacy of tens of millions of citizens was violated and exploited for private financial gain, without the consent or knowledge of the affected people.
With the rise of digital unicorns such as Amazon, Google, and Facebook, data is increasingly viewed as the new oil of the digital world. Just as oil was the vital and essential commodity that powered the manufacturing era, data is the indispensable fuel powering the connected, digital economy.
Oil has been a tightly regulated commodity for more than a century, it consolidated power centres and created new power centres. What will happen to data, the new oil of this world? Will it also play a role in defining power centres rather than just remaining in data centres?
How is data different from oil?
While both oil and data are the key economic drivers in their respective eras, there are some vital differences between them, that we need to recognize and understand:
| Oil | Data |
| --- | --- |
| Oil is a tangible commodity | Data is intangible |
| Oil is a fungible (substitutable) commodity | Data is non-fungible |
| Oil doesn’t produce more oil | Data can generate more data |
| Oil is a physical commodity | Data is an experience good, i.e. its value is realized only after experiencing it, like a movie or a book |
| Oil is nameless and faceless (anonymous) | Data is named and identified |
As you can see from the above comparison, oil is a physical, tangible, and fungible commodity, whereas data is often perceived as an intangible, non-fungible, experience good. While there is no distinction between one barrel of oil and another, there is a world of difference between the data associated with one individual and another. In other words, while oil is nameless and faceless, data is a named and identified commodity. The other key difference is that while oil doesn’t produce more oil, data can generate more data.
It is important for us to understand these critical differences, so that we adopt the right approach towards data regulation.
Where is this data breach leading us?
Let’s consider what has happened in the current Cambridge Analytica and Facebook scandal:
- As part of its policy to transform itself into a popular platform for social applications, Facebook provided access to third-party app developers.
- To encourage wider usage, Facebook enabled people to log into apps and share information about their friends.
- One such third-party developer was able to gain unauthorized access to hundreds of thousands of user accounts, which were subsequently shared with Cambridge Analytica.
- Even though the complete story of what transpired is not yet known, it is reasonable to state that Cambridge Analytica used its unauthorized access to user data for behavioral influencing, in this case specifically influencing how people vote.
What’s evident from this fiasco is that, in the current context of widespread collection of data from millions of unsuspecting people, we need to clearly distinguish and differentiate between:
- deriving insights and influencing behavior
- what is moral from what is ethical
- privacy and individual rights
Is Cambridge Analytica just one of the businesses that got caught, while there are many more such unscrupulous companies that are exploiting public data for private gain? My own view is that there are many ambitious baby corns (startups created explicitly to be acquired by Unicorns), that are taking advantage of the lax regulatory environment, and exploiting public data, often without consent or knowledge of the affected people.
Is history repeating itself?
I have an eerie feeling that the current digital landscape, and the largely unregulated atmosphere in which some of the players are operating, is like the wild west of the late 19th century. Some of these new unicorns and baby corns are playing fast and loose, very much like the wildcatters of the oil industry.
There is another key difference between these two eras – the wildcatters of the oil era took huge risks with either their own capital or investor capital, whereas the modern-day wildcatters exemplified by the likes of Cambridge Analytica are putting at risk data capital that they don’t own.
Many startups and baby corns, fuelled by an abundance of venture funding are increasingly resorting to a strategy of identifying new data-driven business models that are dependent upon exploiting public data. From what we have seen so far, there is very little hesitation in crossing the thin line between legal and illegal as well as what is ethical and moral. Let’s consider some examples:
- Using advanced data analytics to mine user data and derive unique insights into buying patterns, preferences, and a host of other relevant data points that could influence consumer behavior. Unicorns such as Netflix and Amazon have pioneered this into a science and achieved tremendous financial success.
- Contextual advertising, which is increasingly becoming the primary revenue stream for many businesses that have significant user volume. Google and Facebook can be considered prime examples, even though both have multiple revenue streams.
Even though there is no unanimity, the practices cited above are largely considered not only legal, but also ethical to a large extent.
Policy makers recognized a century ago that there is a need to balance genuine entrepreneurship with the need for regulation. This realization gave rise to an era of stringent regulation, which laid the foundation for consumer safety, consumer rights, worker protection, and investor rights across diverse domains including oil, banking, airlines, and a host of other industries.
In a similar manner, what we are currently witnessing is to some extent the uncharted territory of a data-driven world that is evolving at exponential speed. While recognizing the genuine need for innovation, we must be proactive in regulating the usage of data, strike the right balance between privacy and individual rights, and clearly define what is legal, ethical, and moral.
In the mad race to become the next digital unicorn, there is a real danger that, in the absence of appropriate regulation, we will create an environment conducive for unregulated monopolies, which could unwittingly give rise to the next Standard Oil.
Every action has a consequence
Mark Zuckerberg wants Facebook to be the Internet. He may have wished to make the world a better place for all; as a side effect, his net worth might also grow. Facebook is not just a social networking application anymore. It is a platform where commerce can take place, knowledge resides, analytics can be built, and, thanks to the exploitation of data, human behavior can be influenced.
There are many apps on Facebook today whose revenue model is based on data rather than on offering something unique to consumers. A consumer purchases merchandise and leaves a data trail along with the monetary transaction, which unwittingly becomes an opportunity for another business transaction. Funny, isn’t it? Do we call this being opportunistic, or exploitation?
Is Facebook alone in indulging in this kind of data exploitation? LinkedIn, which connects the professionals of this world, offers deeper insights into their profiles for recruiters and salespeople. Most of us even pay a hefty subscription fee for this access and these insights. Effectively, users are offered free usage of the platform so that LinkedIn can profit by selling their information to businesses. Twitter provides targeted feeds that businesses can buy based on demographic data. The difference is that Facebook allows third-party apps to be built on its platform, thereby creating wider exposure of its user information, while the other platforms do it themselves.
What happens if the user base moves towards private networks, access is restricted to moderation, commerce is limited to the members alone, and targeted ads are forbidden?
Being public is a sign of transparency, but using the same information to exploit those who generate the information will drive people away from being public and transparent.
What if people choose to follow Elon Musk’s lead and quit Facebook en masse?
Who wins and who loses?
Quantum Computer Limits Today & Cryptography Will Keep Bitcoin Safe for Years According to Systems Analyst & Head of Institute of Information Technology Professionals, South Africa
(IT.Online-co.za) The complex ECDSA algorithm used to generate cryptographic signatures, along with the limited fault tolerance of quantum computers, means the codes protecting bitcoin are likely to remain completely secure for years to come.
This is according to John Singh, a senior systems analyst and head of the Institute of Information Technology Professionals South Africa (IITPSA) Blockchain Special Interest Group (SIG) in Kwazulu Natal.
He notes that the Schnorr digital signature scheme invented by Claus-Peter Schnorr, a German cryptographer and academic, was now being mooted in the bitcoin community and could enhance the already-secure system, if adopted.
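Part of the Schnorr scheme's appeal is its algebraic simplicity. The sketch below implements the sign/verify mechanics over a deliberately tiny multiplicative group so the arithmetic is easy to follow; the group parameters, keys, nonce, and message are all illustrative stand-ins, and real deployments use large standardized elliptic-curve groups (for example secp256k1, as proposed for Bitcoin in BIP 340) with carefully generated nonces.

```python
import hashlib

# Toy Schnorr signature over the order-q subgroup of Z_p*, where
# p = 2q + 1 = 2039 is a safe prime and g = 4 generates the subgroup.
# These parameters are far too small for any real use.
p, q, g = 2039, 1019, 4

def _h(r: int, msg: bytes) -> int:
    # Fiat-Shamir challenge: hash the commitment together with the message.
    data = r.to_bytes(4, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x: int, k: int, msg: bytes):
    # x: private key; k: per-signature nonce (must be secret and unique).
    r = pow(g, k, p)        # commitment
    e = _h(r, msg)          # challenge
    s = (k - x * e) % q     # response
    return (s, e)

def verify(y: int, msg: bytes, sig) -> bool:
    # y = g^x mod p is the public key.
    s, e = sig
    r_v = (pow(g, s, p) * pow(y, e, p)) % p  # equals g^k for a valid signature
    return _h(r_v, msg) == e

x = 421                     # illustrative private key
y = pow(g, x, p)            # corresponding public key
sig = sign(x, k=777, msg=b"pay 1 BTC to Alice")
```

The identity that makes verification work is g^s * y^e = g^(k - xe) * g^(xe) = g^k, so recomputing the challenge from the reconstructed commitment matches the one in the signature only when the signer knew x.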
Singh believes it is almost impossible to crack the cryptographic keys used to protect bitcoin, even using quantum computing. “You’d need at least 1 000 qubits to crack cryptographic keys, which is a few years off. Plus, quantum computers don’t have the fault tolerance to run long enough to crack these codes,” he says.
Singh says that, while Bitcoin is gaining momentum and has grown significantly, the technology remained hard to understand and the tools around it did not make it easy to participate.
Few things have revolutionized the business landscape quite like virtualization. What began as groundbreaking new technology available only to the largest corporations and the most technologically savvy employees is now within reach of even the smallest businesses with only a passing knowledge of electronics. At its simplest form, virtualization is the process of creating virtual machines and placing them on a piece of hardware. This can happen in various forms where companies can virtualize storage devices, servers, networks, and desktops. There are many reasons to utilize virtualization, but one that is perhaps the most cited is the promise of cost savings. The more small businesses look into it, the more they’ll see the way virtualization can save them money.
One of the main points behind virtualization is taking physical machines and turning them into virtual ones. In other words, it reduces the pieces of hardware a company needs to purchase. For reasons that can be easily seen, this can end up saving companies a lot of money. If a company is acquiring a large number of new customers and needs a new server to help their business operations, instead of going out and buying the needed equipment, the company can use virtualization to create a virtual server, saving on the cost of the new hardware. Virtualization essentially allows companies to add more to their organization without actually adding any new equipment. This consolidation also means companies can use older pieces of equipment for a longer span of time.
Reduced Energy Costs
Using fewer pieces of equipment may save money simply based on fewer purchases, but another way to save is through reduced energy costs. Simply put, if you have fewer machines, you’ll need less energy to run them. Things like servers, storage units, and desktops can use a lot of energy, so consolidating these pieces of equipment can end up saving companies a lot of money over time. One company was able to save 77% on power costs simply by running virtual machines instead of buying new servers. If that weren’t incentive enough, it also costs less to cool the equipment. Hardware like servers not only require energy to run, they need energy to keep cool or risk overheating. A server room filled with dozens of servers can end up using a lot of energy just to keep everything at the optimal temperature. Reduce the number of machines by half or more, and the energy required to cool them would be drastically reduced, meaning a more manageable energy bill and a more environmentally friendly business.
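To put the savings in perspective, a back-of-the-envelope estimate is easy to script. The figures below (server wattage, electricity rate, cooling overhead, and machine counts) are illustrative assumptions, not data from the study cited above.

```python
def annual_energy_cost(machines: int, avg_watts: float,
                       usd_per_kwh: float, pue: float = 1.6) -> float:
    """Rough yearly electricity cost for always-on hardware.

    pue (power usage effectiveness) folds cooling and other facility
    overhead into the estimate; 1.6 is a common rule-of-thumb value.
    """
    kwh_per_year = machines * avg_watts / 1000 * 24 * 365
    return kwh_per_year * pue * usd_per_kwh

# Hypothetical consolidation: 20 lightly loaded physical servers
# replaced by 4 beefier virtualization hosts.
before = annual_energy_cost(machines=20, avg_watts=400, usd_per_kwh=0.12)
after = annual_energy_cost(machines=4, avg_watts=600, usd_per_kwh=0.12)
savings_pct = (before - after) / before * 100
```

With these made-up numbers the consolidation cuts the annual power bill by roughly 70 percent, which is in the same ballpark as the 77 percent figure quoted above.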
Consolidating multiple machines into one piece of hardware not only keeps energy costs low, it allows for easier and cheaper maintenance. IT personnel have a lot on their plate these days, so anything that makes their jobs easier is a worthwhile investment. Virtualization can take all these different pieces of hardware that need to constantly be maintained and monitored and centralizes their operation, making those tasks much easier to manage. With fewer machines to worry about, fewer IT staff will be needed, and any emergencies that need a rapid response will be tended to much more quickly. There will also be better support, and upgrades and patches are easier to apply, meaning businesses will be on top of the latest developments, downtime will be reduced, and there will be more cost savings.
One of the costs that few business leaders want to think about is what is accrued when a security breach hits an organization. Securing all a company’s systems and networks can be an immense challenge, but it’s one that’s made easier through virtualization. Overall security can be greatly improved because IT workers have fewer machines and a smaller infrastructure to manage and monitor. Any threats that are detected can be dealt with quickly and efficiently. Virtualization can also isolate machines and networks from each other, meaning if any system does get infected, the chances of that infection spreading to the rest of the infrastructure is minimal. In terms of security, while the cost savings may not be up front like in the other examples, preventing security breaches is a necessity for those companies wanting save money in the long term.
With virtualization technology now a realistic option for small businesses, the chance to save on costs is certainly an enticing one. Adopting a virtualization strategy is not necessarily an easy task, but most companies will say it’s worth it in the end. To have a more flexible, productive, and efficient business, virtualization is a worthy strategy to pursue, and cost savings are just one part of why it is so successful.
Rick Delgado- I’ve been blessed to have a successful career and have recently taken a step back to pursue my passion of freelance writing. I love to write about new technologies and keeping ourselves secure in a changing digital landscape. I occasionally write articles for several companies, including Dell.
(Image credit: Antonio D’Souza)
Cybersecurity attracts an enormous amount of attention due to cyberattacks that are publicized daily. As more devices are connected to the Internet, they become attractive targets for criminals; therefore, the attack surface increases exponentially. The adoption and integration of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices has led to an increasingly interconnected mesh of cyber-physical systems (CPS), which expands the attack surface and obscures the once clear functions of cybersecurity and physical security.
As with any technological advancement, connected devices have become a common target for cyber criminals hoping to steal valuable data or even cause potential destruction of property. Connectivity comes at a cost, and protecting connected devices through effective cybersecurity now goes hand-in-hand with physical security, creating a convergence of the physical and digital domains that reflects the increasingly interconnected state of the world in which we function.
No matter what industry vertical an organization operates within, it is important to understand why cybersecurity is needed to support physical security. It should come as little surprise to those in the security industry that devices ranging from doorbells to artificial hearts to surveillance cameras can be targeted by cyber criminals. It is no longer strictly a technology concern, but people, processes, and technology are all at risk if proper precautions are not implemented.
In today’s security landscape, very few businesses are running without CPS in place. However, as IoT technology evolves and more systems move into the cloud, companies need to re-examine their strategies constantly. Traditionally, physical security measures such as access control, security personnel, and surveillance were treated as standalone functions, with little regard for how data and IT systems are innately linked to physical security. When applications and systems are increasingly mobile or cloud-based, it is almost impossible to achieve compliance for sensitive data and identity protection without an integrated physical and cybersecurity strategy. Systems and devices can provide threat actors with additional attack vectors to connect to networks, infect other devices, and exfiltrate data. Today, organizations must consider physical security as a primary pillar of cybersecurity.
Examples of incidents involving cyber and physical can be categorized three ways:
| Scenario | Example |
| --- | --- |
| Cyberattack on physical systems | In March of 2021, more than 150,000 cloud-based Verkada physical security cameras were hacked. This incident provided the hackers with access to thousands of cameras through a broad cross-section of industries, from hospitals, schools, and corporate offices to police stations and jails. Not only were the hackers able to see into a variety of facilities, but they also accessed certain private data. For example, they saved video footage, taken from the home of a Verkada employee, of inmates in detention facilities. They had insight into who used access cards to enter certain hospital rooms. The hackers gained access to Verkada via a username and password for an administrator account that was publicly exposed on the Internet. |
| Physical systems used in cyberattacks | Mirai was one of the most infamous botnet attacks of 2016 and was the first significant botnet to infect insecure IoT devices. The Mirai botnet resulted in a massive distributed denial of service (DDoS) attack that left much of the Internet inaccessible on the east coast of the United States. |
| Physical security of cyber systems | On April 21, 2017, Lifespan Corporation filed a breach report with OCR regarding the theft of a laptop when an employee's car was broken into. The laptop was unencrypted and contained electronic protected health information, including patients' names, medical record numbers, demographic data, and medication information. The laptop was never recovered. |
Cyber-Physical Systems (CPS) are collections of physical and computer components that are integrated to operate a process safely and efficiently. CPS have been integrated into critical infrastructures (smart grid, industry, supply chain, healthcare, military, agriculture, etc.), making them attractive targets for attacks with economic, criminal, military, espionage, political, or terrorist motives. Thus, any CPS vulnerability can be targeted to conduct dangerous attacks against such systems. Cyber and physical assets represent a significant amount of risk to physical security and cybersecurity – each can be targeted, separately or simultaneously, resulting in compromised systems and/or infrastructure. Different security aspects can be targeted, including confidentiality, integrity, and availability. To enable the broad adoption and deployment of CPS and to leverage their benefits, it is essential to secure these systems from any possible attack, internal and/or external, passive or active.
A cyber and physical security convergence strategy uses measures to restrict access to certain spaces, along with cybersecurity practices to secure the connected network and limit access to sensitive data. Physical and IT security convergence addresses the interconnected nature of these components and treats them as one rather than as separate business entities. A successful cyber or physical attack on connected industrial control systems (ICS) and networks can disrupt operations or deny critical services to society. For example:
If an adversary has physical access to a space or network, all information systems and data are considered “fair game” and are vulnerable to compromise and theft. Systems and devices may be left behind and unattended outside the view of security cameras; screens may still be unlocked with access to files, network shares, and other resources; and sensitive or confidential data may still be open in plain view on the screen and can be captured, stolen, modified, and/or deleted. Any exfiltrated information on the screen or in paper form regarding calendar schedules and plans, operational details, personal information, contact lists, details from presentations, etc. could be used in phishing, impersonation, and other cyberattacks used to spread disinformation to incite future conflict.
The CPS architecture consists of three main layers: perception, transmission, and application. Elsevier Public Health Emergency Collection published an article titled “Cyber-Physical Systems Security: Limitations, Issues, and Future Trends” that provides a detailed description of the three layers. The following graphic was referenced in Elsevier’s article and provides a high-level overview of the three CPS layers, including the objective, threat, target, and security measures applicable to each layer:
Image source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7340599/
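As a rough aide-mémoire, the three-layer split can be captured in a small data structure. The example threats and controls listed per layer are drawn from general CPS literature rather than transcribed from the cited graphic, so treat them as illustrative rather than authoritative.

```python
# Illustrative summary of the three CPS layers described above.
CPS_LAYERS = {
    "perception": {
        "role": "sensors and actuators interacting with the physical world",
        "example_threats": ["physical tampering", "sensor spoofing"],
        "example_controls": ["tamper-evident enclosures", "device authentication"],
    },
    "transmission": {
        "role": "networks moving data between devices and applications",
        "example_threats": ["man-in-the-middle interception", "denial of service"],
        "example_controls": ["encrypted transport (TLS)", "network segmentation"],
    },
    "application": {
        "role": "software and services that store and act on the data",
        "example_threats": ["malware", "unauthorized data access"],
        "example_controls": ["least-privilege access", "patch management"],
    },
}

def controls_for(layer: str):
    """Look up the example mitigations for one layer."""
    return CPS_LAYERS[layer]["example_controls"]
```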
While physical security measures are important for preventing unwanted access, a physical and cybersecurity convergence strategy should also cover network devices, applications, and software that power smart, cloud-based devices and security systems as well as the people who manage, monitor, and make business decisions for these functions.
Whether you are responsible for your organization's physical security or its cybersecurity, you still need to apply the same core principles.
There are many benefits of cyber-physical security convergence:
By opening the lines of communication between your facilities team and information security team, you are more likely to identify instances where one team has a mitigating control in place that eliminates the need for the other team to invest in implementing a different control. Additionally, the two groups can collaborate to prevent unexpected costs with a new security investment project.
When an incident occurs, you are more likely to have a quicker and more effective response if the two teams work in conjunction with one another. Streamlined alerts and effective incident response are only obtainable once your physical security measures and cybersecurity measures are entirely in sync.
With data related to a cyber risk flowing to your facilities team, you are in a better position to take the appropriate steps to manage that risk from a physical perspective and vice versa.
Cybersecurity Helps Build a Physical Security Framework:
Cybersecurity supports the development of a framework for any physical security measures the organization decides to implement. In many ways, the type of cybersecurity measures that a company seeks to implement will determine which kind of physical security barriers and deterrents should be utilized. However, cybersecurity systems have their limitations, which is why physical security should still exist to pick up the slack and further strengthen business security.
There are many steps enterprise security leaders can take to achieve convergence. Here are a few items to consider:
Security should not be important to only one level of the organization - it needs to be important to everyone. Bringing conversations to a level that everyone can understand is critical for everyone to buy in and understand what is expected of them. Incorporating all members of an organization into conversations about security can assist in the understanding of how to approach cybersecurity and physical security to benefit the organization.
Even though physical and cybersecurity are inherently connected, many organizations still treat these security functions as separate systems. In the past, this was justified because the technology to integrate physical and cybersecurity was not yet available. However, now the problem comes down to governance, making it a priority to create a single body for security policies and bringing physical security and cybersecurity teams together to build strength in your organization. An integrated security architecture offers a foundation for connecting the physical and cyber worlds through intelligence sharing, visibility, control, and automation. As we use more technology in our daily lives, the more there is a need for CPS to help protect your organization from accidental and potentially malicious misuse of these systems and resources and help ensure their intended missions are not disrupted or compromised.
When analyzing malware, what you see on disk is oftentimes not an accurate representation of what’s actually happening in memory.
Today’s malware hides itself in clever ways and likes to bend the rules that most computer programs follow. Whatever form it takes, there is always something special that sets a malicious program apart from a normal one.
As a malware reverse engineer, my job is not only to discover what that special “thing” is, but to understand what the malicious program is really doing, and to do my best to explain how and why.
The purpose of this article is to provide our readers with an understanding of how malware operates within memory. Since you need to know some basic memory terms before trying to understand how they relate to malware, we’re going to explain some concepts first.
Let’s start from the beginning: running a program. When a program is first executed, a copy of it is loaded into memory, and it then becomes a process. This program lives in its own process virtual address space (VAS) along with software libraries the program needs to execute correctly. There are other things that live in a process VAS, like heap and stack memory, but we don’t need to go that in-depth right now.
The procedure of loading the required software libraries at start-up is called dynamic linking and is commonly used in all Windows programs, including malware. The libraries that are linked are called Dynamic-Link Libraries or DLLs for short. DLLs contain functions to be used by programs and are located in OS system directories.
When using a debugger—a tool used to step through a process one instruction at a time—we can see a live picture of our process memory and better understand what’s going on. This tool is used by software developers to find bugs or errors in their code (hence the name), but can also be a powerful tool for malware analysis. OllyDbg is a great user-mode debugger that’s very popular, so much so that many spin-off versions have been created, like “Shadow” and “DeRoX”.
If you’re new to using debuggers, I would recommend OllyDbg as it’s very user-friendly and easy to learn. I’m not going to explain how to debug or use this particular debugger, so I’d recommend using the Internet to find tutorials that will assist you in learning how to debug and understand Assembly Language.
You will need to load a program into a debugger to view its memory. I’m going to use a malware executable called new-sirefef.exe, which is a variant of the popular ZeroAccess Trojan. This malware is a rootkit, a special type of malware that can subvert the Operating System itself, and therefore is more difficult to detect and remove. Below are some file properties of new-sirefef.exe.
Once we've loaded new-sirefef.exe into OllyDbg, we can use the memory map tool to observe our executable and dependent libraries in memory. Notice how they are distributed into pieces within the process VAS. It looks like our new-sirefef.exe program is in the virtual memory range 0x00400000-0x00443FFF.
Each one of the files seen in the memory map is divided into pieces called memory sections or memory segments. This occurs because these files are Windows Portable Executable (PE) files and therefore adhere to the PE file format. For the sake of brevity, I won’t go into detail explaining the PE file specification, but you can find more information on the format from Microsoft. It’s best to become intimately familiar with this file format if you want to analyze Windows files.
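To make the PE format a little more concrete, here is a minimal sketch (not from the original article) of the very first step of PE parsing, using only Python's standard library: reading the DOS header's `e_lfanew` field to locate the PE signature. The function and its name are illustrative inventions.

```python
import struct

def pe_header_offset(data: bytes) -> int:
    """Return the file offset of the PE signature ('PE\\0\\0').

    Every PE file begins with a DOS header: the magic 'MZ' at offset 0,
    and a 4-byte little-endian field, e_lfanew, at offset 0x3C that
    points to the PE signature.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ magic)")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("e_lfanew does not point at a PE signature")
    return e_lfanew
```

From the PE signature, a full parser would continue through the file header and optional header to the data directories, one of which points at the import table discussed next.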
Imported Functions

Now, since there are hundreds of DLL files in Windows, our process needs a way to know which DLLs must be loaded into memory at start-up. The required DLL functions are located in the PE header, specifically the Import Address Table (IAT). The Import Address Table is simply a list of functions that the program must have to execute as expected. It also gives the analyst an idea of what the program does, although this can easily be faked, as we’ll see later.
Below is an image of the IAT of new-sirefef.exe as seen in IDA Pro. IDA Pro is a disassembler and debugger, and is arguably the most popular one in existence. You can see from the image that at least four DLL files will be loaded into memory at start-up: advapi32.dll, gdi32.dll, kernel32.dll, and user32.dll. However, more DLL files will be loaded, as some DLL functions are dependent on other DLL functions.
During the course of process execution, the program flow will change frequently from new-sirefef.exe to a DLL when calling functions inside of these DLLs. Since the required DLL files have already been loaded into memory at process start-up, however, this is a pretty smooth transition.
Malware in Memory

Malware like new-sirefef.exe ultimately works the same way a normal program does once it hits memory, but it typically alters itself using various methods before it runs like a normal program should (in some cases, it may never run like a normal program). This code alteration is usually achieved using a software packer, a type of program used to obfuscate and/or compress the original program to inhibit analysis and reverse engineering. A software packer usually has a ‘stub’ program that runs at start-up and unpacks the original program in memory. Packers are also sometimes called cryptors, protectors, etc.
Packing helps to deter static analysis, which is analyzing the malware without running it. This is in contrast to dynamic analysis, or analyzing the malware while it’s running in live memory. If you want to perform static analysis on a packed program, you’re going to have to acquire an unpacked version first. The process of unpacking malware varies from file to file, and can take some time if performed manually.
When we talk about packing, there’s really a countless number of ways to do it; in fact, it sometimes becomes difficult to keep up with them all. That’s why there are programs that exist to help you in this process. My personal favorite is Exeinfo PE, by A.S.L. (I’m sure you’ve seen this tool mentioned before in my previous writings; it’s a personal favorite). I like this program because it not only does a great job at detecting packers and cryptors, but also provides tips for unpacking, which is a great feature for a beginner. There are plenty of others, like PEiD, RDG packer detector, or DiE, but some of these tools aren’t as accurate and/or are no longer supported.
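Signature scanners like the tools above identify known packer stubs by pattern matching. A common complementary heuristic (a general technique, not something this article's tools are claimed to use) is byte entropy: compressed or encrypted data looks nearly random, so packed sections tend to score close to the 8 bits-per-byte maximum. A stdlib-only sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data,
    approaching 8.0 for compressed or encrypted data."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Run against each PE section, sections scoring above roughly 7 bits per byte are often packed or encrypted, which is one quick way to triage a sample before deeper analysis.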
Also, now that you understand how the IAT works, know that the IAT for new-sirefef.exe is far from complete, as more functions will “unpack” as the process executes. While the functions initially present in the IAT actually do get called, most of them are just “fillers” that are there to throw you off as an analyst. We are missing a key technique that is heavily employed by malware to retrieve the important functions, and that is runtime linking.
During runtime linking, library functions are retrieved as the process executes, and thus the list of imported functions grows. Two functions from kernel32, LoadLibrary and GetProcAddress can be used to retrieve any function located in any library on a system. Notice how they’re both in the IAT for new-sirefef.exe. Thus, if you see a program that has an IAT with only these two functions, it’s a pretty good indicator there’s something to hide (usually malicious).
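On Windows, runtime linking means calling LoadLibrary and then GetProcAddress. As a rough, hedged analogue of that load-then-look-up pattern, here is a sketch using Python's ctypes on a Unix-like system; the library name resolution is platform-dependent and is an assumption of this example, not a detail from the article.

```python
import ctypes
import ctypes.util

# Step 1: load a library at runtime (the analogue of LoadLibrary).
# find_library's result varies by platform (e.g. "libm.so.6" on Linux);
# if it returns None, CDLL(None) falls back to symbols already loaded
# into the current process.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Step 2: look up a function by name (the analogue of GetProcAddress).
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]

print(cos(0.0))  # -> 1.0
```

This two-step pattern is exactly why an IAT containing little besides LoadLibrary and GetProcAddress is suspicious: with those two calls a program can resolve everything else at runtime, out of sight of static analysis.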
Our new-sirefef.exe process uses runtime linking to locate more functions for its IAT. One of these functions is a very important kernel32 function: VirtualAlloc. The VirtualAlloc function creates a new section of virtual memory in the process VAS. The base address of this new memory section in new-sirefef.exe is 0x003B0000.
During execution, encrypted code has been moved from the .rsrc (resource) segment of new-sirefef.exe and placed into our new virtual memory. The code is then decrypted and executed within our new memory at 0x003B0000.
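The native version of this trick allocates memory with VirtualAlloc, copies the encrypted bytes out of the resource section, decrypts them in place, and transfers execution there. As a toy, high-level illustration of that unpack-in-memory idea only: the payload, the single-byte XOR key, and the use of Python's exec are all inventions for this sketch, not details of new-sirefef.exe.

```python
# A toy unpacking stub: the payload only ever exists decrypted in
# memory, never on disk -- the same property that makes packed
# malware hard to analyze statically.

KEY = 0x5A  # invented single-byte XOR key, for illustration only

def pack(source: str, key: int = KEY) -> bytes:
    """Encrypt Python source with a repeating single-byte XOR key."""
    return bytes(b ^ key for b in source.encode())

def run_packed(blob: bytes, key: int = KEY) -> dict:
    """Decrypt into a fresh buffer and execute it -- the stub's job."""
    decrypted = bytes(b ^ key for b in blob)  # decrypt in memory
    namespace: dict = {}
    exec(decrypted.decode(), namespace)       # hand control to the payload
    return namespace

blob = pack("result = 6 * 7")   # stands in for bytes stored in .rsrc
ns = run_packed(blob)
print(ns["result"])             # -> 42
```

Just as in the real case, inspecting `blob` on disk tells you nothing about the payload; only the decrypted copy in memory is meaningful.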
This is a very common technique used by malware, and especially packed programs. The problem is that this code is not visible on disk, and is only available in temporary or ephemeral memory. Thus if we decided to close our debugger at this point, everything in this memory segment would get discarded, and we could no longer analyze this malware.
What’s Next?

In order to analyze the malware further from this point, we should find a way to copy the new memory to disk for static analysis. Stay tuned for part 2 of this article, where we’ll accomplish just that and also take a look at some other tricks this rootkit uses while unpacking.
To be continued…
References: 1. Michael Sikorski and Andrew Honig, Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software (San Francisco: No Starch Press, 2012), 13–15.
Joshua Cannell is a Malware Intelligence Analyst at Malwarebytes where he performs research and in-depth analysis on current malware threats. He has over 5 years of experience working with US defense intelligence agencies where he analyzed malware and developed defense strategies through reverse engineering techniques. His articles on the Unpacked blog feature the latest news in malware as well as full-length technical analysis. Follow him on Twitter @joshcannell
The public library in a small Texas town used an assessment tool that helped officials eventually upgrade equipment—including buying a 3D printer. One community member used it to make a prosthesis.
Four years ago, occupational therapist Adrian Vega crafted a custom prosthetic for a 6-month-old infant born with just one hand.
He made it at the local library, using a 3D printer.
"He called me," recalled Mendell Morgan, director of the El Progreso Memorial Library in Uvalde, a southern Texas town with a population of around 16,000. "And said, 'Mr. Morgan, I read in the paper that you've got a new 3D printer.' And I said, 'Yes,' all perked up, and he said, 'May I come over and make a hand for a client of mine who's 6 months old?'
"And I said, 'You want to do what now?'"
The library had just acquired the printer as part of a new suite of equipment purchased through a technology grant that officials had applied for after completing an assessment through Edge, a performance indicator system. That test had identified gaps in the facility’s technology that could be partially addressed by equipment upgrades.
“Edge gives a framework of questions, standards, benchmarks you might be wanting to attain, and when you’re thinking about those questions you’re having to assess where you stand. It’s an evaluative tool,” Morgan said. “Depending on the results, it helps you formulate the strategies you need to employ.”
Edge, created by a national coalition and funded by the Bill & Melinda Gates Foundation, is overseen by the Urban Libraries Council, which launched an updated version of the platform this month. The system allows library officials to assess their services against the needs of their communities.
The upgrade features updated benchmarks and gives users the ability to compare their facilities to libraries of similar sizes and to save and manage action plans online. The goal is to help libraries showcase their successes and target areas for improvement, which can help secure funding from both local governments and outside organizations.
“If they’re able to say, ‘Hey, our library is underperforming on serving disabled members in our community, and I can show with Edge that we’re lagging behind other similarly sized communities,’ that’s shown to be a very effective tool for libraries to get grants and other tools for improvement,” said Curtis Rogers, director of communications for the Urban Libraries Council.
At El Progreso Memorial Library, Morgan did just that, using assessment results to secure a $10,000 technology grant offered by the Texas State Library and Archives Commission. Morgan used the money to purchase several computers, four iPads, a poster printer, a high-speed scanner and, notably, a 3D printer—the first of its kind in Uvalde.
“I was aware that other libraries had them, and in going through the process of that assessment it appeared it would be a very good thing because a lot of libraries are going into what they call ‘maker space,’” Morgan said. “It wasn’t offered in the community elsewhere. Even though we’re somewhat isolated and rural, I wanted our users to feel like they had the amenities like a big-city library would have.”
Morgan pictured community members using the printer to make cell-phone cases, bracelets, replacement buttons and Christmas decorations—all “nice applications,” he said. But shortly after the device was installed, Vega called with an entirely different kind of project in mind.
Vega’s client, 6-month-old Elijah, was afflicted with amniotic band syndrome, which occurs before birth when a fetus becomes entangled in fibrous, string-like bands of amniotic fluid in the womb. In Elijah’s case, the band wrapped entirely around his arm, impeding the development of one hand. Vega told Morgan that he had previously used a 3D printer to create a hand facsimile for an adult, and was interested in doing the same thing for Elijah.
“I said, ‘Please come over here right now and show me,’” Morgan said.
Vega came to the library with a thumb drive that loaded a printer program onto the computer. The printer constructed the hand in stages—first the lower parts of the fingers, then the upper half—which Vega then took home to assemble.
“Eventually he put it on the little boy, who was thrilled because he could immediately get anyone’s attention by smacking them with his plastic hand,” Morgan said.
Vega returns to the library periodically to print new devices that keep pace with Elijah's size as he grows older. The project brought publicity to the library and helped Morgan demonstrate the facility’s importance, a constant struggle in Uvalde, where only about half of the library’s budget is funded by local municipalities.
“It costs, realistically, about $465,000 per year to run the place. The city is presently giving us $102,000 and the county about $127,000,” Morgan said. “We’re always on the lookout for things we can do to raise money, and Edge helps with that by giving me ideas about how technology can be embraced.”
When you logged on to your computer this morning, data privacy probably wasn’t the first thing you were thinking about. The same goes for when you opened your phone to catch up on social media and check emails, turned on your smart TV for a family movie night, or all the other ways we routinely use our connected devices in our everyday lives.
Although we live in an increasingly connected world, most of us give little thought to data privacy until after our personal information has been compromised. However, we can take proactive steps to help ourselves and our loved ones navigate this environment in a safe way. On January 28th – better known as Data Privacy Day – we have the perfect opportunity to own our privacy by taking the time to safeguard data. By making data privacy a priority, you and your family can enjoy the freedom of living your connected lives online knowing that your information is safe and sound.
Data Security vs. Data Privacy
Did you know that there is a difference between data security and data privacy? Although the two are intimately intertwined, there are various characteristics of each that make them different. National Today provides a useful analogy to define the two:
- Data security is like putting bars on your windows to make it difficult for someone to break into your home (guarding against potential threats).
- Data privacy is like pulling down the window shades so no one can look inside to see what you are wearing, who lives with you, or what you’re doing (ensuring that only those who are authorized to access the data can do so).
At this point, we already know not to share our passwords or PIN numbers with anyone. But what about the data that is collected by companies every time we sign up for an email newsletter or make an online account? Oftentimes, we trust these companies to guard the personal data they collect from us in exchange for the right to use their products and services. However, the personal information collected by companies today is not regarded as private by default, with a few exceptions. For this reason, it’s up to us to take our data privacy into our own hands.
The Evolution of Data Breaches
Because we spend so much of our day online, plenty of our information is available on the internet. But what happens if one of your favorite online retailers experiences a data breach? This is the reality of the world we live in today, as data breaches have been on the rise and hackers are continuously finding clever, new ways to access our devices and information.
Thanks to the COVID-19 pandemic, we’ve become more reliant on technology than ever before. Whether it be for distance learning, online shopping, mobile banking, or remote work, we’ve all depended on our devices and the internet to stay connected. But with more time online comes more opportunities for cybercriminals to exploit. For example, with the massive increase in remote work since the onset of the pandemic, hackers have hijacked online meetings through a technique called ‘Zoombombing.’ This occurred after the online conferencing company shared personal data with Facebook, Google, and LinkedIn. Additionally, the number of patient records breached in the healthcare industry jumped to 21.3 million in the second half of 2020 due to the increase in remote interactions between patients and their providers.
When it comes to data breaches, any business is a potential target because practically every business is online in some way. When you put this in perspective, it’s important to consider what information is being held by the companies that you buy from. While a gaming service will likely have different information about you than your insurance company, you should remember that all data has value, and you should take steps to protect it like you would money.
Protecting Your Privacy With McAfee
Your browsing history and personal information are private, and we at McAfee want to keep it that way. By using McAfee Secure VPN, you can browse confidently knowing that your data is encrypted.
To further take control of your data privacy, monitor the health of your online protection with McAfee’s Protection Score. This tool provides simple steps to improve your security and allows you to know how safe you are online, which is the first step towards a safer, more confident connected life. Check your personal protection score here.
Here are a few more tips to keep you on top of your data privacy game:
1. Update your privacy and security settings. Begin with the websites and apps that you use the most. Check to see if your accounts are marked as private, or if they are open to the public. Also, look to see if your data is being leaked to third parties. You want to select the most secure settings available, while still being able to use these tools correctly.
2. Lock down your logins. Secure your logins by making sure that you are creating long and unique passphrases for all your accounts. Use multi-factor identification, when available.
3. Protect your family and friends. You can make a big difference by encouraging your loved ones to protect their online privacy. By helping others create solid safety habits as they build their digital footprints, it makes all of us more secure.
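As a concrete illustration of tip 2, here is a minimal, hedged sketch of a passphrase generator built on Python's standard secrets module. The 12-word list is a placeholder invented for this example; a real tool would draw from a much larger list, such as the EFF diceware list of about 7,776 words.

```python
import secrets

# Tiny stand-in word list; a real generator would use a large,
# well-vetted list so each passphrase has enough entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple",
         "quartz", "violet", "harbor", "crimson", "walnut", "ember"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Join randomly chosen words using a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "maple-orbit-staple-ember-quartz"
```

Because secrets uses the operating system's secure random source rather than a predictable generator, the resulting passphrases are suitable for real logins, provided the underlying word list is large enough.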
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Published 2 Years Ago on Monday, Oct 05 2020 By Yehia El Amine
Think of every Iron Man movie you’ve ever watched; notice how Tony Stark flips from one menu to another, from one 3D suit design to the next, all controlled seamlessly with a simple brush of the hand. That’s Augmented Reality (AR), or at least what scientists are trying to achieve with it. AR is defined as the ability to overlay digital elements onto real-world situations and scenery. Currently, the gaming industry is seen as the biggest proponent of AR usage, but a large number of other industries are starting to jump on the bandwagon.
One such industry is education, where history professors are using this technology to take their students on an interactive journey through ancient civilizations. The same can be said for biology teachers, as they explore the human body from a much more in-depth perspective. Physics instructors can now walk around the solar system and use the visuals presented to them to make learning even more engaging.
But AR has found itself at the heart of a multi-billion dollar industry that’s looking to invest in it and invest big: corporate learning.
According to Forbes, the global spending on corporate training has reached the value of over $130 billion with more than $70 billion within the U.S. alone.
AR isn’t only considered as a new multimedia learning tool such as webinars, HTML5 web editors, and the like, but learning professionals are treating it as a new breed of immersive learning technology that has come to the forefront of their strategies.
The key word here is “immersive,” since no other technology, apart from AR and VR, allows the learner to be placed at the center of the learning experience, while simulating real work environments and scenarios.
Many companies have embraced a plethora of communication solutions brought about to improve how they conduct business; the likes of video conferencing, instant messaging, or screen sharing tools, can all be overshadowed by the might of AR, especially to increase efficiency in the workplace while fostering collaboration and problem solving.
Instead of attempting to verbally describe the problem to another employee, learners can be put into the same situation they faced previously, and receive real time coaching in how to handle the situation at hand.
AR champions the ability to provide a common view and interactive experience where a number of people can jump in and take part regardless of their location.
This can also be used in terms of collaboration and onboarding; by tapping into a person’s sight, sound, and touch, remote employees can have a sense of community, while providing employees with a more authentic view of their tasks and responsibilities.
This not only reduces the time of assimilation into their role and company’s culture, but allows them to directly hit the ground running, in a much more amusing setting.
Using AR glasses or helmets, employees can test their knowledge and try out different ways to approach certain risky situations to extract the best possible outcome, while receiving real time assistance in parallel.
This not only saves time for pointless meetings to take place, but paves the way for more efficiency, and enhances a person’s understanding due to the level of interactivity. It also encourages employees to think more strategically and calmly when handling difficult situations in the workplace.
NASA is actively experimenting with augmented reality technology to train its workers to perform repairs in outer space. The VTT Technical Research Centre of Finland uses its augmented reality system to let International Space Station workers “see” helpful data that simplifies manual repair and diagnostic procedures.
See-What-I-See (SWIS) AR smart glasses allow employees to receive real time remote assistance from more skilled and qualified experts and coaches. Training sessions can be done from the comfort of your home or office without the need to relocate trainers and coaches from elsewhere.
This gives a major boost to the gig-economy, allowing the world’s top talent to work from literally anywhere in the world, while seeing the same environment as employees do in the field, all whilst utilizing SWIS glasses.
Since most employees already use mobile smart devices, augmented reality corporate training requires minimal up-front investment. When implemented and properly configured, augmented reality can improve corporate safety and productivity, as well as reduce expenses.
In parallel, a number of AR platforms are already available on the market, such as ARKit and ARCore and the technology shouldn’t demand a companywide hardware overhaul; allowing companies to leverage smartphones as effective training tools.
Unlike traditional learning, where a course ends once a student leaves a classroom, employees who use AR can learn new information every day. Furthermore, augmented reality is interactive and, as a result, more engaging compared to traditional corporate training methods.
The impact of augmented reality on corporate training has already mushroomed a lot of positive results to date.
According to a study done by Blippar, a UK-based tech company specializing in augmented reality, those who enrolled into AR-powered training sessions performed 150 percent better than those using traditional paper-based methods.
“Evidence also shows that in manufacturing or assembly-type situations where employees use AR applications to view step-by-step instructions, instead of referring to a desktop computer or paper manual, routinely show productivity gains of between 20 and 35 percent,” the study reveals.
In parallel, research done by Boeing has also supported the benefits of integrating AR-based training programs in corporate training sessions, since employees demonstrated a “90 percent improvement in first-time quality and a 30 percent reduction in the time required to complete the job,” according to their numbers.
Boeing’s findings were also confirmed with a follow-up study by Upskill, a US-based AR software company, which reported augmented reality improved employee productivity by double-digit percentages.
Considering the implications behind these findings, this represents millions of dollars in cost savings for a business.
While everything cited above can be a pretty convincing argument, companies need to factor in that having tailor-made AR training programs may require a hefty budget; taking into account, hardware, designing, programming, and many other factors.
AR is still in its infancy and growing, it’s going to take some time down the road for people to start achieving Tony Stark levels of augmented reality. So, companies need to be patient with the current AR rollout in the market.
The question of confidentiality needs to be taken into consideration here; since the requirements to generate, analyze, and collect data is considered as one of the primary demerits of AR. Especially since there are a number of AR systems that capture the surroundings in real time, which brings forth a legal issue similar to someone recording a conversation, shooting pictures of a person or property.
Headset-based demos are also slow. It takes time to configure the headset, to put it on and adjust it, to explain the controls, and for the user to view the content. Even if the content is only two to three minutes long, a trainer would be lucky to get 15-20 people per hour through the system.
The best call to action right now, is to wait.
The state of AR is still growing, and needs more time to develop and bloom into something that could take humanity further down the evolutionary scale.
I'm sure you've known at least one person in your life - maybe even yourself - who was bullied when they were younger. I wasn't alive before the Internet and social media emerged in the form we know now, but from what I've seen in movies, I can tell you bullying has changed over time. When our parents were our age, they were more likely to be bullied physically by kids they knew. This still occurs, but kids our age are much more likely to be victims of what we know as cyberbullying.
Even though having an anonymous person insult or harass you sounds better than having an actual person threaten to punch you in the face, I can assure you it isn’t true. In fact, it can even be worse. StopBullying.gov published some surveys that have actually proved that most teens consider cyberbullying to be worse than face-to-face bullying. Cyberbullying isn’t just something that happens in school or at recess, it’s an awful way to harass a person all the time, no matter when or where.
One of the main factors in the increase in cyberbullying is the fact that bullying someone face-to-face can be harder than attacking the victim online: everyone knows about it, people judge you, and you might even feel afraid of possible consequences. On the other hand, when you're behind a screen you feel stronger and safer. Studies show that 90% of the time the victim knows who the bully is, but they can’t prove it since trolls tend to use fake accounts or message anonymously. This allows the bully to feel less worried about getting caught.
The ability to hide behind anonymity may be why antisocial or more introverted people tend to have better connections online than in real life. They may feel less anxiety and feel that they’re in a more comfortable, controlled environment. Unfortunately, in the case of trolls and others who enjoy being abusive online, this anonymity gives them a sense of omnipotence - that there will be no consequences.
Some apps don't allow you to have an anonymous account, but there’s a way around this: just create a fake account, known in the social media world as “catfishing.” This is just one more example of the devious lengths to which trolls will go to attack their victims. Even on apps that don’t really facilitate anonymity, like Instagram or Snapchat, they can still remain incognito by using the disappearing stories or pictures they both permit for example.
On the other hand, some apps let the user be completely anonymous if they so choose. I had an account with Ask.fm when I was about fifteen for a while and I recall getting lots of mean anonymous messages. I didn't worry too much about it because, sadly it was considered normal, something that happened to everyone. My friends and I would read the nasty messages we received to each other and laugh about it, but deep down it kind of hurt us.
I remember one of my friends getting insulted for her makeup constantly, which led to her becoming really self-conscious; she stopped wearing makeup, stopped wanting to talk about makeup, and even stopped buying those products.
I know this isn't necessarily one of the worst cases of cyberbullying on record, but the point I want to get across is that even if I didn't really let it affect me, it did affect some of my friends to the point where they wanted to stop being themselves. And when it comes to cyberbullying, trolls obviously don't only lurk on ask.fm, you can find them anywhere.
I remember a friend of mine crying in class because she was being cyberbullied big time. The bully was using disappearing Snapchat pictures to send her threats and harass her simply for fun. As if that weren't enough, the troll also found out where she lived through Snap Map and threatened to show up in front of her house to beat her up. Luckily that never happened, since many cyberbullies are tough and intimidating only when hiding behind a screen, but it was still very upsetting.
As for myself, I've been cyberbullied too. Besides the nasty comments I received on Ask.fm, I've also been bullied on apps that are considered relatively safe, like Instagram. It happened on my art account: a stranger messaged me and asked for a shout-out, and I agreed, since their account had about 20k followers and getting a shout-out back might have been good for my art account as well. But after I posted my shout-out for this total stranger and asked them to post one for me, they replied with really nasty messages about my art and me, for no apparent reason. I brushed it off and blocked the account, but then a new account messaged me and started in with the insults. The owner was saying really rude things and it hurt my feelings, so I blocked them as well. Finally, the account found my personal page and started threatening me. I got scared, blocked the page, and made my account private.

For a long time, I wondered why anyone would decide to attack me for no reason; I hadn't done anything to anyone that should have warranted that kind of attack. But the truth is that many cyberbullies attack their victims just to feel powerful, for no reason at all. To this day I still don't know who the troll was, since the page didn't reveal the person behind it, but I honestly don't really care.
But just because I don't care doesn't mean everyone else is like me. It's pretty obvious that cyberbullying affects many teens every single day and leads to depression, anxiety, and even suicide. So, just like "traditional" bullying, cyberbullying should be addressed and become a concern for everyone, with or without social media.
Quantum Annealing Computers for Training Artificial Neural Networks
(TowardsDataScience) The futures of quantum computing and artificial intelligence are hopelessly entangled, thanks to quantum algorithms. This is because quantum annealing succeeds where classical algorithms often struggle or altogether fail: training artificial neural networks.
The first difference to notice between relatively conventional quantum computers and quantum annealing computers is the number of qubits they use. While the state-of-the-art in conventional quantum computers is pushing a few dozen qubits in 2018, the leading quantum annealer has more than 2000 qubits.
On August 31st, 2017, the Universities Space Research Association (USRA) announced that in partnership with NASA and Google it had upgraded the quantum annealing computer at the Quantum Artificial Intelligence Lab (Quantum AI Lab) to a D-Wave 2000Q. Partner Google has their eye on AI: “We are particularly interested in applying quantum computing to artificial intelligence and machine learning.”
This includes in-depth technical discussion of problems:
Because a fiber network is built by running long lines of physical cable, it is practically impossible to lay a single continuous cable end-to-end. This is where the optical fiber pigtail comes in: a cable assembly with a connector on one end and a length of exposed fiber on the other, which is fused onto a fiber optic cable. Fusing the glass fibers together keeps insertion loss to a minimum.

Pigtails are terminated on one end with a connector; the other end is typically spliced to outside plant (OSP) cable. They may be simplex (single fiber) or multi-fiber, up to 144 fibers. Pigtails come with male or female connectors: male connectors plug directly into an optical transceiver, while female connectors are mounted on a wall plate or patch panel. Fiber optic pigtails are usually used to connect the patch panels in a central office or head end to OSP cable. They may also provide a connection to another splice point outside the head end or central office, because some jacket materials may only be run a limited distance inside a building.
It is easy to confuse fiber optic connectors, fiber optic patch cords, and fiber optic pigtails. Here is how they differ.

A fiber optic connector is the component used to join fibers. Putting connectors on one or both ends of a cable yields two different assemblies, each playing a different role in fiber optic solutions.

Fiber optic patch cords (also called fiber jumpers) have connectors on both ends and are used to connect a patch panel to a network element. They have a thick protective jacket and are generally used for the connection between the optical transceiver and the terminal box.

A fiber optic pigtail (also called a pigtail line) has a connector on one end only, while the other end is bare fiber. That bare end is fusion-spliced to the core of another fiber optic cable; pigtails often appear in fiber optic terminal boxes, where they are used to terminate incoming fiber optic cable.
Fiber optic cable can be terminated in a cross-connect patch panel using either pigtail or field-installable connector termination techniques. The pigtail approach requires that a splice be made and a splice tray be used in the patch panel; it provides the best-quality connection and is usually the quickest.

Fiber pigtails come with premium-grade connectors and typically 0.9 mm outer-diameter cables. Simplex and duplex fiber pigtails are available, with different cable colors, cable diameters, and jacket types to choose from. The most common installation method is the fusion splice onto a pigtail; this is easily done in the field with a multi-fiber trunk, breaking the multi-fiber cable out into its component fibers for connection to the end equipment. The 6- or 12-fiber multi-color pigtails are easy to install and provide a premium-quality fiber optic connection. Fiber optic pigtails are available with various termination types, such as SC, FC, ST, LC, MU, MT-RJ, MTP, MPO, etc.

Pigtails offer low insertion loss and low back-reflection, and are especially designed for high-fiber-count fusion splicing. They are often bought in pairs to be connected to endpoints or other fiber runs with patch cables.
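To see why minimizing splice and connector loss matters, consider a simple link loss budget. The sketch below is illustrative only; the per-component figures are assumed typical values (roughly 0.35 dB/km single-mode fiber attenuation at 1310 nm, 0.1 dB per fusion splice, 0.5 dB per mated connector pair) and should be replaced with the datasheet values for your actual components.

```python
def link_loss_db(length_km, n_splices, n_connector_pairs,
                 fiber_db_per_km=0.35, splice_db=0.1, connector_db=0.5):
    """Estimate end-to-end optical loss for a fiber link.

    Assumed typical values: 0.35 dB/km fiber attenuation (single-mode,
    1310 nm), 0.1 dB per fusion splice, 0.5 dB per mated connector pair.
    """
    return (length_km * fiber_db_per_km
            + n_splices * splice_db
            + n_connector_pairs * connector_db)

# A 10 km run with two pigtail fusion splices and two patch-panel connections:
total = link_loss_db(10, n_splices=2, n_connector_pairs=2)  # 4.7 dB
```

Comparing scenarios with this kind of budget makes the trade-off concrete: swapping a mated connector pair for a fusion splice saves several tenths of a dB, which is why fused pigtails are preferred where loss margins are tight.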
The reverse NDR attack is one of the most common methods hackers use to spam a mail server. Even if they are unable to compromise any user accounts in the organization this way, they can increase the load on the messaging system and consume network bandwidth by bouncing NDRs back and forth. It also leaves end users annoyed, wondering why they received NDRs for messages they never sent.
What is Reverse NDR Attack?
1) The spammer crafts an email with the spam victim's address in the sender field (the sender can always be forged) and, in the recipient field, uses random common names at your domain.
Ex: from:[email protected] , To:[email protected],[email protected]
2) He attaches the spam content and sends the message to those randomly addressed recipients at the targeted domain.
3) Your mail server cannot deliver the message and sends an NDR email back to what appears to be the sender of the original message, the spam victim.
4) The return email carries the non-delivery report and possibly the original spam message. Thinking it is email they sent, the spam victim reads the NDR and the included spam.
Microsoft has introduced basic backscatter filtering in EOP (Exchange Online Protection), which is quite beneficial. It uses a method called BATV (Bounce Address Tag Validation).
What is BATV ?
BATV (Bounce Address Tag Validation) is an Internet standards draft that validates whether a bounced (NDR) email is legitimate by checking a tag value.
How does this works ?
It uses a cryptographic hash that encodes a valid return-path address and a timestamp. Any NDR returned to the system without this cryptographic hash tag is halted/rejected, and hence there are no bogus bounce-backs.
BATV replaces an envelope sender like [email protected] with [email protected], where prvs stands for "Simple Private Signature." PRVS is one possible method of tagging the value, though the standard contemplates a few others.
This cryptographic token cannot practically be forged unless the attacker learns the secret key used to generate the PRVS tag.
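To illustrate the idea, here is a simplified sketch of generating and validating a prvs-style tag. This is not the exact algorithm from the BATV draft; the expiry encoding, HMAC choice, and truncation below are assumptions chosen for clarity, and the real format also includes a key-version digit. It does show the essential mechanism: a signed, expiring tag lets a server reject bounces it never triggered.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"site-private-key"  # hypothetical secret held only by the mail server

def prvs_sign(address, now=None, ttl_days=7):
    """Return a prvs-tagged envelope sender, e.g. prvs=018a1b2c3=user@domain."""
    now = int(time.time()) if now is None else now
    expiry_day = (now // 86400 + ttl_days) % 1000            # 3-digit expiry day
    mac = hmac.new(SECRET_KEY, f"{expiry_day:03d}{address}".encode(),
                   hashlib.sha256).hexdigest()[:6]           # truncated signature
    return f"prvs={expiry_day:03d}{mac}={address}"

def prvs_verify(tagged, now=None):
    """True only if a bounce to this address carries a valid, unexpired tag."""
    now = int(time.time()) if now is None else now
    if not tagged.startswith("prvs="):
        return False                                         # untagged: reject
    tag, _, address = tagged[len("prvs="):].partition("=")
    expiry_day, mac = tag[:3], tag[3:]
    if len(mac) != 6 or not expiry_day.isdigit():
        return False                                         # malformed tag
    expected = hmac.new(SECRET_KEY, f"{expiry_day}{address}".encode(),
                        hashlib.sha256).hexdigest()[:6]
    return (hmac.compare_digest(mac, expected)
            and (now // 86400) % 1000 <= int(expiry_day))
```

Because every legitimate outgoing message gets a fresh tag, an NDR addressed to the plain, untagged address (or to a tag the attacker invented) fails verification and can be dropped before it reaches the user.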
For an on-premises setup: if your anti-spam filtering agent already provides this reverse NDR filtering, you need not worry about this setting, since your spam filter will take care of it.

If you are an on-premises customer and your email filtering is handled by EOP, Microsoft recommends turning this feature on.

If your mailboxes are hosted in Office 365, you do not need to worry about turning on this feature. However, Microsoft recommends turning it ON if your outbound email goes through Office 365 (not sure why).
Below are the steps to turn on this feature through the EAC:
Open EAC – Click on protection – Navigate to your Policy – Click Advanced
Turn on the NDR backscatter option.
Enabling this option definitely adds an additional layer of security, especially against reverse NDR attacks. Hope this helps.
MVP – Exchange Server
- Hospital Ramón y Cajal, Hospital 12 de Octubre and Hospital Sant Pau improve COVID-19 diagnosis with chest X-ray using Distributed Artificial Intelligence.
- The Federated Learning platform, developed by Capgemini, allows the hospitals to share trained Artificial Intelligence (AI) models to create a global model that is significantly better than any of the local versions, while assuring the protection of sensitive patient information.
Paris, Madrid, November 24, 2021 – A collaboration among radiologists from three Spanish hospitals with a high volume of patients – Hospital 12 de Octubre and Hospital Ramón y Cajal, in Madrid, and Hospital Sant Pau in Barcelona – together with technology partners expert in Artificial Intelligence (AI) and IT, is rapidly advancing the use of cutting-edge technologies in healthcare, while maintaining patient data privacy by applying federated learning with hardware-enhanced security. This collaboration aggregates the clinical experience of each hospital to develop automated medical diagnosis models that facilitate and improve patient care.
Although the definitive diagnosis of COVID-19 is made by microbiological tests – such as PCR or antigen tests -, the main symptomatic alteration in patients is respiratory. Therefore, the chest X-Ray has become the default initial screening test for all patients with suspected symptoms, and its availability and immediacy make it crucial.
During the pandemic, radiologists have analyzed a large number of chest X-rays, combining their previous experience with the learning derived from the findings of the X-rays of thousands of patients. However, the need to analyze a huge number of images with subtle findings requires time, training, and experience, making Artificial Intelligence a highly suitable tool for this purpose.
The Federated Learning platform developed by Capgemini for the research study, based on sharing AI models trained with image data, allows the creation of a global diagnostic model that significantly improves local versions, especially benefiting healthcare facilities with less experience. The accuracy in the diagnosis of COVID-19, obtained in this research study is 89% for the global model, while the previous best of the local versions reached only 71% accuracy. And all this in addition to rigorously preserving patient data privacy.
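The mechanics behind such a platform can be illustrated with federated averaging (FedAvg), the canonical federated learning algorithm. The sketch below is a generic illustration, not Capgemini's actual implementation: each "hospital" trains a model on its own data locally and shares only the model weights, which a coordinator combines into a global model using a mean weighted by local dataset size. Raw patient data never leaves the local node.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on local data only
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: mean of client models, weighted by dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# One communication round over three simulated "hospitals":
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
locals_ = [local_update(global_w, X, y) for X, y in clients]
global_w = federated_average(locals_, [len(y) for _, y in clients])
```

In a real deployment the model is a deep network over X-ray images and the weight exchange is additionally protected (here, by hardware enclaves such as Intel SGX), but the aggregation step follows the same pattern.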
The clinical protocol was developed as part of a collaboration between Capgemini and the Multisystemic Diseases Group of the Ramón y Cajal Institute for Health Research (IRYCIS). They have also had the support of various technology partners such as Cisco, Intel, Vodafone Spain and Microsoft with clinical cases from the hospitals mentioned above. Gilead Sciences Spain, a pharmaceutical company expert in virology and a pioneer in developing an effective treatment for COVID-19, has supported this project from the beginning, providing its knowledge and experience to contribute to the success.
Computation is also essential. Cisco and Intel provided the computing infrastructure to perform the experiments. Each hospital has a local computing node – fueled by third-generation Intel Xeon Scalable processors with Intel Software Guard Extensions (SGX). Furthermore, each has built-in AI accelerators and Cisco UCS servers – which contains the model that learns from radiological images.
According to Dr. José Carmelo Albillos, Head of Radiology Department at Hospital 12 de Octubre, “AI allows us to analyze large numbers of images almost automatically and with high precision, which makes it easier to prioritize their review and reporting. For this reason, it reduces the workload while speeding up diagnosis.”
Dr. Javier Blázquez, Head of Radiology Department at Hospital Ramón y Cajal, highlights: “Federated learning allows us to improve our diagnostic reliability without disrupting data privacy, since the experience of a hospital is shared among several others, the results improve a lot with respect to those obtained separately.”
Dr. Beatriz Gomez-Anson, Clinical Head and Principal Investigator at Sant Pau Hospital, points out that “this project shows the added value of AI tools to be implemented by medical specialists in radiology”.
“At Capgemini we are very proud to be driving this project forward by contributing our knowledge to improve diagnostic methods without the need to share private data. Furthermore, thanks to the application of the latest areas of research in Artificial Intelligence, we can ensure patient privacy and data security,” highlights Daniel Iglesias, Managing Director of Capgemini Engineering in Spain.
The project has been financed with contributions from Intel and the Cisco’s Country Digital Acceleration (CDA) program for Spain, called Digitaliza.
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided everyday by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of over 300,000 team members in nearly 50 countries. With its strong 50 year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported in 2020 global revenues of €16 billion.
Get The Future You Want | www.capgemini.com
For more information:
Capgemini Press contact: Florence Lievre – [email protected]
About Hospital de Sant Pau
Hospital de la Santa Creu i Sant Pau is a high-complexity hospital that has existed for more than six centuries. Its activity is mainly centered in Barcelona but extends throughout Catalonia; it also has considerable influence in the rest of Spain and notable international projection.
Vodafone is a leading telecommunications company in Europe and Africa. Our purpose is to “connect for a better future” and our expertise and scale gives us a unique opportunity to drive positive change for society. Our networks keep family, friends, businesses and governments connected and – as COVID-19 has clearly demonstrated – we play a vital role in keeping economies running and the functioning of critical sectors like education and healthcare.
Vodafone is the largest mobile and fixed network operator in Europe and a leading global IoT connectivity provider. Our M-Pesa technology platform in Africa enables over 50m people to benefit from access to mobile payments and financial services. We operate mobile and fixed networks in 21 countries and partner with mobile networks in 49 more. As of 30 June 2021, we had over 300m mobile customers, more than 28m fixed broadband customers, over 22m TV customers and we connected more than 127m IoT devices.
We support diversity and inclusion through our maternity and parental leave policies, empowering women through connectivity and improving access to education and digital skills for women, girls, and society at large. We are respectful of all individuals, irrespective of race, ethnicity, disability, age, sexual orientation, gender identity, belief, culture or religion.
Vodafone is also taking significant steps to reduce our impact on our planet by reducing our greenhouse gas emissions by 50% by 2025 and becoming net zero by 2040, purchasing 100% of our electricity from renewable sources by 2025, and reusing, reselling or recycling 100% of our redundant network equipment.
About Gilead Sciences
Gilead Sciences, Inc. is a biopharmaceutical company that has been researching and advancing medicine for more than three decades, with the goal of achieving a healthier world for all people. The company is committed to advancing innovative drugs to prevent and treat life-threatening diseases such as HIV, viral hepatitis, and cancer. Gilead operates in more than 35 countries around the world and has its headquartered in Foster City, California.
For more information:
Modern businesses absolutely need a good understanding of their bandwidth requirements. How much data do you share daily? How fast do you need to share it? While bandwidth is often expressed as a speed, this can be misleading. When determining the best bandwidth speed for your business, think of your requirements in terms of capacity.
Bandwidth is the capacity of your network to transmit data. A simple plumbing illustration makes this clear: bandwidth is like water flowing through a pipe, and how much water the pipe can move over time corresponds to how much data your network can transfer.
“The maximum amount of water possible is like the maximum possible data transfer rate of your network. In this way, bandwidth is synonymous with capacity. Therefore, if you need more water, install a larger pipe. If you need more data transfer capacity, raise your available bandwidth.”
The bandwidth requirements for your business depend mainly on how much data you use and share on a regular basis. Every industry and individual business must review the number and types of connected devices used, and the amount of data that is shared, in order to calculate an accurate appraisal of bandwidth requirements.
Calculating Bandwidth Requirements for Different Industries
Various industries will have different bandwidth requirements, depending on how they use data. For example, retail industries that do a large amount of business over the Internet will have huge data needs. Consumers will be viewing product images, reading text about products, adding selections to their shopping cart, making purchases using secure financial data, and communicating with the business about other needs and options.
Other industries may use more connected monitoring or communication devices to send data to a central hub for storage or action. Still more industries may send loads of financial data that all must be completely secure, while others may send measurements or other information with less or no security requirements.
Here are some common industries and their typical demands for data:
Modern vehicles are equipped with an array of factory-installed electronic components that gather data. As far back as 2014, a McKinsey & Company study estimated that connected vehicles generate around 25 GB of data each hour of operation. Fast-forward to today, and how much could that number have increased? Today’s autos come with sensors, cameras, ECUs, and more that generate real-time data. This data is then captured and collated by various stakeholders.
Consider also the bandwidth requirements involved on the manufacturing side of the automotive industry. There also, one finds numerous sensors, monitoring devices, and more that drive the manufacturing process, helping with quality control, shipping, and safety.
The Biotech industry is rushing to adopt advanced technologies that use the Internet of Things (IoT) to facilitate even better outcomes in agriculture, pharmaceuticals, safety, and secure data transmission. Consider all the components and devices used in biotech that provide real-time data over a local network and the Internet:
- Irrigation Modules
- Photovoltaic Panels
- Power Supplies
- Security Modules
- Tracking Devices
Construction has evolved into an industry that is highly dependent on technology and real-time data.
- AI that records data from how workers move about the site
- Digitized safety mechanisms collect data on work processes
- Apps help site managers track every member of the construction team in real-time
- Mobile applications improve measurements, placement of objects, and inspections
- Automated drones and ground-based site rovers make accurate photo records
- Drones are programmed to inspect and count material inventories
Consider also that many construction sites are far removed from available fiber networks, so their connectivity solutions for adequate bandwidth requirements can be especially difficult.
Today’s education industry demands that colleges and universities be fully accessible online to remain competitive. Students and faculty alike require advanced technology and connectivity options to facilitate learning, finances, research, and administration. How many students depend on an app or web board to keep up with classes, assignments, financial payments and scholarship data, communication with instructors, and even digital books from the library?
This hyperconnected learning environment is also populated with students that spend enormous amounts of time on the Internet for other pursuits as well. Streaming games, television, videos, as well as digital learning components, demands a provider that can offer scalable bandwidth requirements to meet increasing demands.
Banks and other institutions in the financial industry move incredible amounts of data each day. Both employees and customers demand fast access to data that must also be secure. The national and global economy relies on fast, secure transfer of financial data. Delays from problems with bandwidth requirements can mean lost deals, lost customers, and lost revenue.
As governments provide more resources for citizens, government organizations must be aware of their own bandwidth requirements for providing easy-to-use, reliable, and dependable online support. Much of this data must also be secure and meet the demands of various privacy guidelines. Plus, many government organizations share data, making it necessary for a huge, interconnected (traditionally slow and inefficient) entity to operate even more efficiently.
The healthcare sector has leapt eagerly into the digital age with increasing connectivity and data-sharing capabilities. Approximately 50% of healthcare IoT is used for remote operation and control, and 47% connect their devices to location-based services. Electronic Health Records (EHR) has almost reached universal adoption, allowing hospitals and other providers to share secure patient medical information with a few clicks.
All this connectivity must be fast and secure to meet patient care needs while meeting HIPAA and other regulations for privacy. The COVID-19 pandemic caused this need to increase exponentially, as remote testing centers and an increase in patient load placed incredible demands on healthcare providers and institutions.
Technology is often overlooked in the hospitality industry. Hotels and other hospitality-related venues increasingly depend on higher bandwidth requirements to meet the demands of client conveniences and needs. Providing better in-room experiences, adequate needs for large conferences, expedited resource planning, and reservation planning all relies on fast, reliable Internet connectivity and networking.
Logistics and warehousing in our modern age come with demanding bandwidth requirements. Consider how tracking, storing, and delivering an incalculable number of goods is made easier and faster with modern data-transfer technology:
- Digital inventory control sensors and counters
- Drones to monitor and assess inventory levels
- Smart damage detection devices
- Handheld scanners connected to the IoT
- Digital shipping manifests shared with several stakeholders
What Are Your Bandwidth Requirements?
Think about your unique business and how much data you use/share daily. How many users are connected? How many automated devices? Consider all this when calculating your bandwidth requirements. Below is a general guide of how much bandwidth you may require.
- Low (20 Mbps or less) — Few users, with connected laptops, desktops, E-fax machines, VoIP phones; email and web research are most of your active usage.
- Medium (50-500 Mbps) — Few users, but with more intensive web browsing, research, streaming, emailing, and downloading larger files.
- High (100-200 Mbps) — A number of users that take regular advantage of cloud-based platforms and software programs (CRS, PoS, ERP, etc.). High flow of email and downloads.
- Intense (200-500 Mbps and higher) — A number of users that regularly employ HD video conference devices and platforms, large data transfers, constant flow of data between branch locations.
A handy tool to help with your calculations is MHO’s Download Simulator. This free online tool illustrates how long it would take to download a variety of files on different bandwidth speeds per employee. Try it now and get a feel for your company’s bandwidth requirements.
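For a rough back-of-the-envelope version of the same calculation, the sketch below estimates an idealized transfer time. It uses decimal units (1 GB = 8,000 megabits) and assumes the link is the only bottleneck and is shared evenly; real transfers are also slowed by protocol overhead, server limits, and uneven contention among users.

```python
def download_time_seconds(file_size_gb, bandwidth_mbps, n_concurrent_users=1):
    """Idealized transfer time for one file on a shared link.

    Decimal units: 1 GB = 8,000 megabits. Assumes bandwidth divides
    evenly among concurrent users and ignores protocol overhead.
    """
    effective_mbps = bandwidth_mbps / n_concurrent_users
    return file_size_gb * 8000 / effective_mbps

# A 2 GB video file on a 100 Mbps line shared by 10 active users:
t = download_time_seconds(2, 100, n_concurrent_users=10)  # 1600 seconds
```

Running a few scenarios like this against your own file sizes and headcount quickly shows which of the tiers above your business actually needs.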
You can also contact the Internet and connectivity specialists at MHO with your questions about scalable bandwidth needs, Fixed Wireless Internet and Networking, and our availability in your area.
Fundamental concept: Solving business problems with data science starts with analytical engineering: designing an analytical solution, based on the data, tools, and techniques available.
Exemplary technique: Expected value as a framework for data science solution design.
Targeting the Best Prospects for a Charity Mailing
The Expected Value Framework: Decomposing the Business Problem and Recomposing the Solution Pieces
A Brief Digression on Selection Bias
Our Churn Example Revisited with Even More Sophistication
The Expected Value Framework: Structuring a More Complicated Business Problem
Assessing the Influence of the Incentive
From an Expected Value Decomposition to a Data Science Solution
12. Other Data Science Tasks and Techniques
Fundamental concepts: Our fundamental concepts as the basis of many common data science techniques; The importance of familiarity with the building blocks of data science.
Exemplary techniques: Association and co-occurrences; Behavior profiling; Link prediction; Data reduction; Latent information mining; Movie recommendation; Bias-variance decomposition of error; Ensembles of models; Causal reasoning from data.
Co-occurrences and Associations: Finding Items That Go Together
Measuring Surprise: Lift and Leverage
Example: Beer and Lottery Tickets
Associations Among Facebook Likes
Profiling: Finding Typical Behavior
Link Prediction and Social Recommendation
Data Reduction, Latent Information, and Movie Recommendation
Bias, Variance, and Ensemble Methods
Data-Driven Causal Explanation and a Viral Marketing Example
13. Data Science and Business Strategy
Fundamental concepts: Our principles as the basis of success for a data-driven business; Acquiring and sustaining competitive advantage via data science; The importance of careful curation of data science capability.
Thinking Data-Analytically, Redux
Achieving Competitive Advantage with Data Science
Sustaining Competitive Advantage with Data Science
Formidable Historical Advantage
Unique Intellectual Property
Unique Intangible Collateral Assets
Superior Data Scientists
Superior Data Science Management
Attracting and Nurturing Data Scientists and Their Teams
Examine Data Science Case Studies
Be Ready to Accept Creative Ideas from Any Source
Be Ready to Evaluate Proposals for Data Science Projects
Example Data Mining Proposal
Flaws in the Big Red Proposal
A Firm’s Data Science Maturity
The Fundamental Concepts of Data Science
Applying Our Fundamental Concepts to a New Problem: Mining Mobile Device Data
Changing the Way We Think about Solutions to Business Problems
What Data Can’t Do: Humans in the Loop, Revisited
Privacy, Ethics, and Mining Data About Individuals
Is There More to Data Science?
Final Example: From Crowd-Sourcing to Cloud-Sourcing
In December 2015 the Australian Broadcasting Corporation (ABC) revealed that a supercomputer operated by the Australian Bureau of Meteorology (BoM) had been hit by a cyber attack. The Bureau of Meteorology is Australia's national weather, climate, and water agency; it is the analog of the USA's National Weather Service.
The supercomputer targeted by the hackers is also used to provide weather data to defence agencies, so its compromise could give a persistent attacker a significant advantage for numerous reasons.
Initial media reports blamed China for the cyber attack; in 2013, Chinese hackers had been accused by authorities of stealing top-secret documents and plans for Australia's new intelligence agency headquarters.
“China is being blamed for a major cyber attack on the computers at the Bureau of Meteorology, which has compromised sensitive systems across the Federal Government.” states the ABC. “The bureau owns one of Australia’s largest supercomputers and provides critical information to a host of agencies. Its systems straddle the nation, including one link into the Department of Defence at Russell Offices in Canberra.”
The systems at the Bureau of Meteorology process a huge quantity of information and weather data that is provided to various industries, including the military.
A cyber attack on such systems could represent a threat to national security.
New information has now been disclosed by the government’s Australian Cyber Security Centre, which on Wednesday published a report on the incident. The Centre’s experts attributed “the primary compromise to a foreign intelligence service,” but did not provide any information about the culprit.
“We don’t narrow it down to specific countries, and we do that deliberately,” said the minister for cybersecurity, Dan Tehan. “But what we have indicated is that cyber espionage is alive and well,” he told ABC News 24. “We have to make sure that we’re taking all the steps necessary to keep us safe, because the threat is there. The threat is real. Cybersecurity is something that we, as a nation, have to take very seriously.”
The report confirms the presence of malware in the systems of the Australian Bureau of Meteorology. The national cyber security agency, the Australian Signals Directorate (ASD), detected Remote Access Tool (RAT) malware “popular with state-sponsored cyber adversaries,” and confirmed that the same malicious code was used to compromise other Australian government networks in the past.
“ASD identified evidence of the adversary searching for and copying an unknown quantity of documents from the Bureau’s network. This information is likely to have been stolen by the adversary.” reads the report.
Another interesting aspect of the report is the experts’ assessment of the terrorist cyber threat: they explained that the cyber capabilities of terrorists remain rudimentary.
“Apart from demonstrating a savvy understanding of social media and exploiting the internet for propaganda purposes, terrorist cyber capabilities generally remain rudimentary and show few signs of improving significantly in the near future,” states the report.
(Security Affairs – Australian Bureau of Meteorology, hacking)
The goals of this assignment are to become more familiar with TTPs and vulnerabilities, get hands-on experience with network reconnaissance, and start on cybersecurity principles.
Create and Use a new Branch
We will create a new git branch called hw11 for use in this assignment. The branch you create must exactly match the one I’ve given you for you to receive any credit for this homework.
Prior to creating this new branch make sure your working directory is “clean”, i.e., consistent with the last commit you did when you turned in homework 10. Follow the procedures in GitHub for Classroom Use to create the new branch, i.e., git checkout -b hw11. Review the section on submission for using push with a new branch.
README.md for Answers
You will modify the README.md file in your repo to contain the answers to this homework.
Question 1. (10 pts) TTPs
Understanding TTPs via the Mitre ATT&CK model. Below are three different attack analyses written up by Talos Intelligence. You will read one of these based on the last digit of your NetId:
- Masslogger – digits 1-3
- Xanthe Miner – digits 4-6
- Wasted Locker – digits 7-9
(a) Mitre ATT&CK Items
List the identifiers of the Mitre ATT&CK techniques mentioned in the article, e.g., identifiers like “T1059.003” or “T1059”. You do not need to explain them here.
(b) Mitre ATT&CK Techniques Explanation
For four of the techniques you listed above, provide an explanation of the technique in a way understandable to someone who is taking this class and is not a security, Windows, Mac, or Linux expert. If the reader cannot understand your explanation you will lose points.
Question 2. (10 pts) CVE and NVD
Go to the CVE list search page and search for dnsmasq, which is a library commonly used in home routers. Show a screenshot of a portion (the first couple) of the results. Review the first four or so returns. List the “highest threat from this vulnerability” for each of the first four CVEs you found here. In these first four CVEs did you see any mention of an attack that we studied in class? If so, what was that attack?
For one of the CVEs you found in part (a), look up that item in the NIST NVD; see the NVD Search page. Show a screenshot of what you see. Show the severity score here. Show the Common Weakness Enumeration here.
Question 3. (10 pts) NMAP Basics
For this question you will need to download NMap and install it on your machine.
(a) Show NMap Running
Take a screenshot of NMap running on your machine, either a GUI version or in a terminal. I show both below:
(b) Find the IPv4 Address and Type of your Machine
Different operating systems have different commands to determine the IP address of your machine. In addition, a machine can have multiple IP addresses for different purposes. Find the IPv4 address of your machine that is used for communicating with the local network. Write that address here. For example my laptop has the address 192.168.1.228.
Is your IPv4 address a Private IPv4 address? For example my address is in the range 192.168.0.0 – 192.168.255.255 so is a private IPv4 address. Write your answer here.
Question 4. (10 pts) NMAP Scans
(a) Quick Scan your own machine
Use either the NMap GUI or the command nmap -T4 -F your_ip_address to scan your own machine from your machine. Take a screenshot. I get:
How many open ports did NMap find on your machine? (answer here)
(b) Scan your cell phone
Find the IPv4 address of your cell phone and write it here. You need your cell and computer to be on the same WiFi network for this to work. For example my cell has IP address: 192.168.1.207 on my local network.
Scan your cell phone with an “intense scan” (GUI) or the command nmap -T4 -A -v your_ip_address. Take a screenshot of the results. How many open ports did NMap find? Did NMap correctly identify the device/operating system? Can you get device manufacturer information from the MAC address?
(c) Scan another device or subnetwork
Scan another device on your network or scan for devices on a subnetwork. Please respect others privacy and do not scan devices or networks without permission. Describe what you scanned and how well NMap identified devices here.
See an example of my home network scan and analysis in the course slides Recon: NMap Home Network.
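Under the hood, the TCP connect scans NMap can perform (its -sT scan type) boil down to attempting a connection on each port and recording whether it succeeds. A minimal single-port version of that idea can be sketched in Python; this is an illustration of the mechanism only, not a substitute for NMap:

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno on failure,
        # instead of raising an exception.
        return s.connect_ex((host, port)) == 0
```

A full scanner simply repeats this over a port range; NMap adds OS fingerprinting, service detection, and far faster scan techniques on top.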
Question 5. (10 pts)
Principles: for this problem you may want to review the CyBOK introduction, and you will need to look up items in NIST SP 800-160 Vol. 1, Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems.
(a) Fail-Safe Defaults
In the context of information security and IP addresses/domain names what is “white listing”? What is “black listing”? Explain how these relate to the principle of fail-safe defaults.
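As a hint for how the two approaches differ, here is a minimal sketch in Python. The domain names and function names are purely illustrative:

```python
# Allowlist ("white list"): fail-safe default -- deny unless explicitly allowed.
ALLOWED = {"example.com", "partner.example.org"}  # hypothetical entries

def is_permitted(domain: str) -> bool:
    # Anything not explicitly listed is rejected by default.
    return domain in ALLOWED

# Denylist ("black list"): permit by default, reject only known-bad entries.
# This is the opposite of fail-safe: unknown domains slip through.
def is_blocked(domain: str, denylist: set) -> bool:
    return domain in denylist
```

The allowlist embodies fail-safe defaults because an unknown input fails closed; the denylist fails open.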
(b) Separation of Privilege
Give an example of separation of privilege different from those discussed in class.
(c) NIST Principle of Continuous Protection
What is the NIST principle of continuous protection? See NIST SP800-160v1 appendix F. Use your own words.
The German Aerospace Center (DLR) has put a new supercomputer into operation.
Launched in 1969, the Deutsches Zentrum für Luft- und Raumfahrt is the national center for aerospace, energy, and transportation research of Germany. It also acts as the German space agency.
The DLR’s new high-performance computer, named CARO (Computer for Advanced Research in Aerospace), was launched this week in Göttingen, in central Germany’s Lower Saxony.
The NEC-manufactured machine utilizes AMD Epyc processors and is capable of a sustained 3.46 petaflops and a theoretical peak of 5.59 petaflops.
CARO will, among other things, accelerate the introduction of new technologies for more economical, environmentally friendly and safer flying through airflow and wind simulations. CARO can also be used in aerospace and transport research: for example in the area of future space transport or in next-generation trains. Another important field of research is the simulation of wind turbines.
DLR said €10.5 million ($10.7m) was invested in the project. The CARO computing cluster is operated by the Society for Scientific Data Processing in Göttingen (GWDG) in a new computing center in Göttingen in cooperation with the University of Göttingen.
“With CARO we have one of the world's most powerful supercomputers for aerospace. In Göttingen, the computer is at the cradle of aerodynamics, which will also be one of the most important users,” said Prof. Anke Kaysser-Pyzalla, Chairwoman of the DLR Executive Board.
CARO is one of two new high-performance computing clusters that DLR is building. The AMD-based sister system CARA in Dresden has been in use since 2020.
As the major source of human nutrition, grain production is a vital industry. Grain spoilage not only increases costs for the food industry, but also raises environmental and food safety issues.
The TGM System captures and models weather and sensor data, and automatically adjusts atmospheric conditions in grain bins to keep humidity within thresholds that minimize the risk of spoilage.
Improves grain quality, cutting cost and risk for farmers, food companies and consumers
Increases food safety by reducing the amount of spoiled grain that enters the food chain
Boosts yields and reduces waste, minimizing the environmental impact of grain production
Business challenge story
Disrupting a multi-billion-dollar industry
Food waste and food safety are among the hottest political topics in the United States. According to ReFED—an organization founded to create a roadmap to reduce food waste—the US spends USD218 billion per year on growing, processing, transporting and disposing of food that is never eaten.1
Much of this waste is caused by contamination during the production process, which also has an impact on food safety. In 2011, the US Food and Drug Administration (FDA) introduced the Food Safety Modernization Act (FSMA), which aims to increase food safety by focusing on preventing contamination rather than simply responding to it.
Currently, however, grain production is exempt from some of the requirements of the FSMA. This is a major limitation, since a significant proportion of human nutrition—some 60 to 80 percent, according to many estimates—comes either directly or indirectly from grain. Corn, wheat and other cereals, rice, soybeans, oilseeds, legumes and pulses are dietary staples both in the US and around the world; they also provide feed for livestock and poultry in the meat, egg and dairy industries. Grain that has been contaminated by mold or insect activity potentially poses serious health risks for both humans and animals—yet today, there is no easy way to prevent at least some proportion of spoiled kernels from finding their way into the food chain.
Daniel Kallestad, Founder of TGM, explains: “Historically, it has been almost impossible to establish a workable standard for grain purity—the technology simply hasn’t existed. At TGM, we aim to change that—and in doing so, we expect to transform the entire industry.
“Since the 1970s, we have been selling process control systems that help growers and other grain market participants store their grain safely. We learned that by monitoring the atmospheric conditions, we could provide a proactive approach—running fans to prevent grain spoilage, which works extremely well.
“By gathering information on the airflow through a grain mass, we can measure the progress of moisture change. This helps growers make smarter decisions about when to use the fans—which prevents the growth of mold and the incursion of insects, enabling grain to be stored for many months without any deterioration in quality. It also provides data that can be used to verify the conditions in which the grain has been stored, as well as helping growers lower their costs and optimize their marketing decisions.”
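The decision logic described here (running fans only when conditions drift out of a safe band) can be sketched as a simple threshold rule. The thresholds and command names below are illustrative assumptions, not TGM's actual algorithm:

```python
def fan_command(relative_humidity: float,
                low: float = 0.55, high: float = 0.65) -> str:
    """Return a fan action for the current humidity reading.

    Thresholds and return values are hypothetical; a real controller
    like TGM's also models weather forecasts and airflow history.
    """
    if relative_humidity > high:
        return "RUN_FANS"   # dry the grain mass back into the safe band
    if relative_humidity < low:
        return "FANS_OFF"   # avoid over-drying, which costs grain weight
    return "HOLD"           # within the safe band: keep the current state
```

The point of the sketch is the proactive stance: the controller reacts to atmospheric conditions before spoilage begins, rather than to the grain after it has started to spoil.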
However, decades of experience in the grain storage sector have taught TGM that managing and monitoring aeration systems is no simple task. For busy farmers and traders, whose main business is growing or selling grain, storage management is not a core skill—and without the right expertise to manage and adjust the fan systems, there is still a risk that crops can be spoiled.
Christopher Sears, Vice President at TGM, says: “We realized that with over 1 million bin-years of experience, we were in a unique position to make the next generation of aeration systems even smarter. By using Internet of Things technologies to monitor weather and storage conditions in real time at each bin site, and harnessing sophisticated models and algorithms, we would be able to predict problems and proactively adjust the fan controllers in real time to maintain consistently perfect conditions in every bin.”
Daniel Kallestad adds: “More significant still: if we could capture and store all this data on the conditions in each bin, we would be able to prove that the grain within those bins had never been subjected to conditions that would allow spoilage to occur. This effectively amounts to a certification that the grain is 100 percent pure. And that’s what is really going to shake up this industry. Once food companies and regulators—and consumers—realize that it’s possible to obtain perfect grain, with full traceability from harvesting through to delivery, they won’t settle for anything less.”
Going against the grain
To turn its vision into a reality, TGM needed a database platform that would enable it to harness the power of Internet of Things (IoT) technologies, while providing complete reliability and scalability to meet the needs of tens of thousands of customers.
“We’ll be working with some of the largest and most powerful companies in the agriculture and food industries, so we can’t afford our technology to fail,” says Daniel Kallestad. “And the technical challenge is significant: we need to capture weather data from the bin site and sensor data from the bins themselves on a minute-by-minute basis, as well as capture their location down to the centimeter. With a potential market of more than 300,000 grain storage sites in the US alone, that’s an enormous amount of data to capture, store and analyze.”
Christopher Sears adds: “We decided to work with IBM for a couple of key reasons. The first was that IBM has a solid reputation for solving these kinds of large-scale problems, and is well-known and respected by the major players in government and the food industry. These are the organizations that will create demand for our solution, and having a big name like IBM in our corner is important to give them confidence that we can deliver what we promise.”
“The second was that the IBM® Informix® database platform offered exactly what we needed for our Internet of Things solution. IBM Informix is capable of being embedded in relatively small devices, such as IoT Gateways, and it has zero administrative overhead. It can reliably capture massive volumes of time-series and geospatial data at very high velocity, which is critical to IoT solutions.
“IBM Informix is also able to replicate that data to the cloud, where it can provide enterprise class hybrid data management capabilities across various data types, besides offering very fine-grained security whether the data is at rest in the database, or in motion. That last point is important: the grain industry is extremely sensitive about its quality and inventory levels, so we need to be able to assure our customers that all data will be in safe hands.”
The solution—known as the TGM System—includes an advanced weather station called a SiteLink, which captures very precise weather data from unique relative humidity sensors that use micro-beam technology. The SiteLink is connected to the other components of the system using Power-over-ethernet technology, which avoids the need to include a power supply in the SiteLink itself. This minimizes the heat generated within the SiteLink, thereby improving the accuracy of its readings.
The second major component is the Bin Intelligence device, or “Bintel”—a master controller that collects weather data from the SiteLink and combines it with data captured by the sensors and fan actuators that are installed on each storage bin. The bin sensors are connected to the Bintel via Ethernet-over-power links, which reduces costs and facilitates installation by transmitting data over existing electrical cabling.
The weather and bin data are captured by an IBM Informix database embedded in the Bintel, which in turn connects to TGM’s secure private network and replicates the information to a central IBM Informix database. The data can then be accessed via iPAC, a tablet app that gives authorized users a real-time dashboard that shows the status of each bin, and allows users to adjust the settings of the automated fan systems, or even take manual control if they wish.
Daniel Kallestad comments: “We looked at a more traditional relational database alongside Informix, and there was just no contest. Almost all of the capabilities we need are built into Informix as standard, so it has saved us a huge amount of development time—in particular around time-series data and replication. It’s the engine that makes the whole solution possible.”
Building a smarter, safer, more sustainable future
Over the next 12 months, TGM expects to have rolled out this new Internet of Things solution to more than 1,000 sites. Once growers and food companies begin to realize the benefits, TGM hopes to gain the momentum it needs to encourage transformation throughout the entire US grain industry.
Daniel Kallestad says: “Our competitors watch the grain; we watch the weather. They can tell you when your grain has started to spoil—but by that point, it’s already too late. Our system takes action to prevent spoilage from happening in the first place, by maintaining conditions that make mold and insect activity practically impossible.
“It’s a simple proposition, but it’s very difficult for growers to understand, because they’re coming from a mindset that grain spoilage is not preventable. For them, it’s just a matter of how long it takes before the level of spoilage reaches unacceptable levels. We’re saying that if you use the TGM System, your grain will not spoil—at all—within the time it takes to reach the end-customer.”
In a world where as much as 30 percent of grain production is currently ruined by spoilage, this is welcome news—not just for farmers and food companies, but for humanity as a whole. Avoiding food waste means we can feed more people while consuming fewer natural resources—and since rainforests and river basins are being depleted at an unsustainable rate to support a growing population, the more efficient our agriculture becomes, the better it is for everyone.
The government and healthcare sectors should also be keen to see traceability systems such as TGM’s gain wider traction in the grain industry. If grain shipments can be certified as free of contamination and full traceability from farm of origin to end-customer can be put in place, the risk of poisonous mycotoxins being introduced into the food chain could be significantly mitigated—potentially saving billions of dollars in healthcare costs.
On a more immediately practical level, the financial benefits of the solution should be significant. Research indicates that taking into account the costs of implementing a grain purity tracking system, growers will be able to make significant profits by commanding a higher price per bushel for purer qualities of grain. For the highest-quality identity-preserved grains, profits could be in excess of 50 percent per bushel.2
The solution could also help farmers significantly mitigate risk by making smarter decisions about how long to store grain and when to sell it. Without violating the privacy of individual participants, the TGM System will provide the best supply picture available, so that growers can better understand demand patterns, and avoid selling at a low price during the post-harvest glut.
“This also provides a new business opportunity for TGM,” explains Christopher Sears. “In the history of the farming industry, it has never been possible to buy grain spoilage insurance, because no insurer has ever been confident enough in traditional grain storage. We’ve taken out a patent on this type of insurance, because we’re confident that we could offer it to TGM customers with a minimum of risk.”
Daniel Kallestad concludes: “A change is coming to the grain industry. Soon enough, the food companies will understand that they don’t have to accept contamination in the grain they buy; the government will realize that quality standards can be applied to grain just as much as to other foods; and the general public will learn about the health and environmental cost of today’s outdated working practices.
“With IBM Informix, running on IBM POWER8® systems, powering our Internet of Things solution, we can not only help the industry solve all of these problems—we will also deliver measurable return on investment for growers and other market participants by eliminating waste and boosting profits. In our opinion, it’s a deal that is too good for the industry to ignore.”
Targeted Grain Management
Targeted Grain Management (TGM) provides technology and services to help growers and other participants in the grain industry safely store, manage and preserve grains of all types. Developed based on decades of experience in the grain storage industry, the TGM System provides a monitoring and control solution that harnesses the Internet of Things to proactively manage atmospheric conditions within storage bins to combat mold and insect activity, minimize spoilage and maximize quality.
- Consumer Products: Digital Operations
- Consumer Products: Supply Chain
- Software Services for Information Management
Take the next step
If you are a leader in the food industry and would like to learn more about how TGM can help your company transition to safer grain ingredients, please contact [email protected], or visit tgmsystem.com.
To learn more about IBM Informix, please contact your IBM representative or IBM Business Partner. IBM Analytics offers one of the world's deepest and broadest analytics platform, domain and industry solutions that deliver new value to businesses, governments and individuals. For more information about how IBM Analytics helps to transform industries and professions with data, visit ibm.com/analytics. Follow us on Twitter at @IBMAnalytics.
SaaS – Definition, examples, opportunities and limitations
After having presented, in previous episodes, the cloud computing services IaaS (Infrastructure as a Service) and PaaS (Platform as a Service), today we move to SaaS, Software as a Service. Remember that the position each type of service occupies within the pyramid is not random: it represents the interdependence between one and the other. In this sense, each piece of software (SaaS) is based on a platform (PaaS), which in turn requires an infrastructure (IaaS).
What is SaaS (Software as a Service)?
SaaS is a service that allows you to achieve a result without worrying about how it is achieved: you benefit from a service without managing it in any way. Management is left to the provider.
A perfect example is the Microsoft 365 platform: this is nothing more than a set of guaranteed cloud services where no user intervention is required. Sending an email? The user can do it without worrying about how it works or about any problems with its operation. Should it not work, support will resolve the issue.
The biggest advantage of SaaS (Software as a Service) is that no management commitment is required: no specific technical skills are needed to use it. Costs are also certain, whereas in the other types of service (IaaS and PaaS) they are difficult and complex to estimate.
SaaS Limits and Pricing
The main limit of SaaS is the cost: because management is fully included, the cost will be higher than if management were handled by the company itself.
In these three installments we have tried to give a complete description of the types of service that can be provided in the cloud. Remember that there is no “one-size-fits-all” solution for cloud adoption. Companies should evaluate the costs and benefits and then decide which model is best. In a previous article we showed the main differences between IaaS, PaaS and SaaS in cloud computing.
The most significant factor driving the growth of the content delivery network market is end user interaction with online content.
This interaction between a user and online content is far more complex today than it was a few years ago.
Today’s users are much more likely to be streaming a longer video from a mobile phone or accessing a SaaS portal when working from home. These are far more complex experiences that did not exist five or so years ago. Given the expected growth of the CDN market in the coming years, this guide will define exactly what a content delivery network is and how a CDN works.
What is a Content Delivery Network?
A content delivery network, also called a CDN, improves a website’s performance, security and reliability. It does this by bringing web content closer to the geographic location of users. A CDN is essentially a geographically distributed network of servers and data centers that helps distribute web content to users with minimal delay.
CDNs are especially useful for businesses which attract a large amount of web traffic. Video streaming platforms like Netflix, social media giants like Facebook and e-commerce giant Amazon all rely on CDNs to deliver their content to end users.
How CDNs Works
As mentioned above, a CDN works by bringing content closer to the geographic location of the end users. It does this through strategically located data centers known as Points of Presence (PoPs). These are data centers situated around the world, and within each PoP are thousands of caching servers. Both the PoPs and servers help improve connectivity and accelerate the speed at which content is delivered to the end user.
To understand in detail how a CDN works, it helps to look at what happens in the absence of one.
Consider a user in Singapore trying to load the webpage of a business, say a streaming services provider. The user sends a request to the business’ web server to retrieve all the page’s components. The page could include text, images, HTML and dynamic content. The origin server could be located anywhere in the world. Let’s say it is in North America. Now this origin server, which stores all the content on the web page, has to deliver it to the user’s browser all the way across the globe. This simple fact of geographic distance can create delays and performance issues.
When a CDN is used, the content can be stored in the local PoPs that are set up closer to the end user. These PoPs cache the files on the web page and deliver it to the end user in much less time when requested, improving page load speed. If the CDN does not have the files requested by the user, it will load from the origin as needed.
CDNs are especially useful when websites have dynamic content. For such web pages, CDNs create a “super highway” to accelerate the delivery of content across a longer distance. An individual ISP cannot provide this.
How Does CDN Caching Work?
CDN Caching is a crucial part of what makes content delivery networks work. It is the process of storing a copy of files delivered to a user the first time and reusing those stored copies of the assets for subsequent requests instead of the original files. In a CDN, the edge servers are where the data is cached.
CDN caching works roughly as per the following steps:
- An end user requests for static assets on your web page for the first time
- The assets are retrieved from the origin server and once delivered are stored in the PoP edge caching server close to the end user.
- When the same user requests the same assets the next time, the request does not go to the origin server. Instead it goes to the PoP edge server, which checks whether the cached assets are still available and delivers them to the user. If they are not available, or the caching server has not cached the assets yet, the request is sent to the origin server again.
Once your static assets are cached on all the CDN servers for a particular location, all subsequent website visitor requests for static assets will be delivered from these edge servers instead of the origin, thus reducing origin load and improving scalability.
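The cache-or-origin lookup in the steps above can be modeled in a few lines. Everything here (the class and parameter names, the dictionary store) is an illustrative simplification of what a real edge server does:

```python
class EdgeCache:
    """Toy model of a PoP edge server's cache-or-origin lookup.

    `fetch_from_origin` stands in for the round trip to the origin
    server; a real cache also handles TTLs, eviction and revalidation.
    """

    def __init__(self, fetch_from_origin):
        self._origin = fetch_from_origin
        self._store = {}

    def get(self, path):
        if path in self._store:
            return self._store[path]   # cache hit: served from the edge
        asset = self._origin(path)     # cache miss: go back to the origin
        self._store[path] = asset      # store for subsequent requests
        return asset
```

After the first request for an asset, every later request for the same path is answered from the edge store without touching the origin, which is exactly the scalability win described above.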
Why Is It Important You Know How CDN Works?
CDNs are important for businesses that rely on distributing content to users across the globe. They comprise a network of strategically distributed CDN servers, each of which aids in content delivery.
They help optimize bandwidth and latency
The main benefits of using CDNs involve bandwidth and latency reduction. Latency refers to the time it takes for web pages to load. By moving and storing website content closer to the users, CDNs help in reducing page load times and optimization of the browsing experience.
For example, consider a cloud gaming company or a business that provides video streaming services. Their data centers could be located in New York or Los Angeles in the United States. But their end users and consumers could be located all over the world. It may still be relatively straightforward for users in Austin or Maryland to download all of their content when using their services. But what about users located miles across the globe in Australia or Japan? Without a CDN, each of these users will also have to download all the content every time and this will lead to delays and an inconsistent user experience.
At the end of the day, the geographic distance between the web server and the end user makes a big difference. Using a CDN, this distance can be minimized and the user experience optimized. This has a direct impact on business revenue, as dissatisfied users and customers may turn to competitors.
They help improve website security
The distributed nature of CDNs makes them well suited to handling large volumes of web traffic, such as the floods generated by DDoS attacks, which could otherwise result in server failure and downtime. Techniques such as HTTP load balancing in a CDN help prevent and detect such DDoS threats.
Another function of CDNs is the provision of fresh TLS/SSL certificates for better authentication, encryption, and integrity standards. They also improve content availability and redundancy, ensuring that even if one server goes offline, others can pick up the web traffic. By the same token, CDNs can offer Distributed Denial of Service (DDoS) protection by spreading malicious requests across the network.
They help control access to different regions
CDNs are also important for your engineering teams and website owners to manage access to your platform or services. If your business has users or consumers who are distributed across the globe, you may often have to allow access for some regions and deny access for others.
CDNs can help your engineering teams offload web application logic to the edge servers. They can use CDNs to delegate authentication tasks to the edge, respond to requests from different regions based on request attributes and process the header and body attributes in requests and responses.
CDNs also help in collecting logs and analyzing user-generated data. This is crucial if your business attracts a high number of website visitors and when you need to analyze the web traffic in real-time.
They allow for prefetching content for faster delivery
They are a cost-effective way of managing traffic
Above all, the billing model for CDNs allows you to pay according to the traffic and the number of requests, although HTTPS requests can involve additional costs due to the extra computing resources required.
Why Use CDN?
With nearly every business today relying on digital channels to attract customers, the importance of CDN services is beyond doubt. Cloud and online gaming, media and entertainment (including video streaming), e-commerce, and advertising are just some of the industries where CDN providers can help distribute content to users across different parts of the world.
That doesn’t mean CDN will be a good choice for everyone. If your business operates from a localized website and if your user-base is also concentrated around your server, CDNs may end up being overkill. In fact, in such cases, CDNs can even harm your users’ experience on your website as there will be unnecessary nodes that stand in between the server and the client.
This guide has described how a CDN works. If you are looking for a way to deliver content to your customers quickly and seamlessly, irrespective of where they are located in the world, talk to a CDN provider today.
With modern technologies now more accessible, the healthcare sector is continually shifting to more digitized methods. The World Economic Forum states that 64% of healthcare leaders invest in digital health technology, with 19% prioritizing AI software. These technologies are useful in expediting operations within the healthcare sector. However, it’s worth noting that they can also benefit healthcare cybersecurity — in particular, analytics as a means of heightening data security.
Concerns Surrounding Healthcare Data Security: What You Should Know
Healthcare institutions hold thousands of patient records, each containing sensitive data like patients’ medical records and payment details. When hackers obtain this information, they can use it to gain access to the user’s financial resources. In 2020, the healthcare sector experienced nearly 600 data breaches, a 55% increase from 2019. A report by Bitglass revealed that 67.3% of all healthcare breaches resulted from hacking and IT incidents. This is a huge difference, compared to unauthorized disclosures and loss of devices, which accounted for only 21.5% and 8.7% of breaches, respectively.
Aside from data breaches, medical institutions also face other cyber threats. Ransomware attacks are one of the most prevalent. They compromise healthcare data and entire data systems, making them far more destructive than data breaches. The healthcare industry as a whole has lost over $25 billion to ransomware incidents. Statistics from Emsisoft show that 560 healthcare facilities fell victim to ransomware. Some attacks even forced facilities to halt operations temporarily.
How Data Analytics Can Help
These statistics show why data protection measures have become a necessity within the healthcare sector. Medical institutions can turn to AI and predictive analytics solutions to assist in identifying potential risks. Such software must be capable of deriving actionable information from the organization’s current systems and security products. Otherwise, the data ends up unused.
For example, analytics programs can be configured to pinpoint vulnerable points within an internal system. With machine learning, these programs can analyze huge amounts of data in real time and use it to flag aberrant behavior. They can also draw insights from historical data to establish baseline information on different entities within the system. With these types of analytics programs, the workforce can identify where a cyber attack might occur and work towards preventing it, either by strengthening cybersecurity measures around the weak point or by moving the sensitive data to a safer place.
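As a toy illustration of that baseline idea (not a real security product, and with invented sample data and thresholds), the sketch below flags an entity whose daily record-access count deviates sharply from its historical mean:

```python
import statistics

# Historical daily record-access counts per workstation (invented sample data).
history = {
    "ws-01": [40, 42, 38, 41, 39, 43, 40],
    "ws-02": [12, 15, 11, 14, 13, 12, 14],
}

def is_anomalous(entity, todays_count, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above baseline."""
    baseline = history[entity]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0   # avoid division by zero
    z_score = (todays_count - mean) / stdev
    return z_score > threshold

print(is_anomalous("ws-01", 44))   # normal variation -> False
print(is_anomalous("ws-02", 900))  # sudden bulk access -> True
```

Real deployments use far richer features and models, but the principle is the same: learn what normal looks like per entity, then surface sharp deviations for review.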
Alternatively, institutions can use analytics programs to plan for what to do should an attack occur. This involves pinpointing the priority files and software that the facility needs to maintain the most basic operations. By either boosting the cybersecurity measures surrounding these assets or backing them up in a secure drive, the facility can continue operations even after experiencing a cyber attack.
The Need for More Data Experts
Analytics as data protection is an excellent means for boosting cybersecurity within the healthcare sector. However, managing such software often requires the support of data professionals. Consequently, there’s been a steady uptick in demand for data experts in the field of healthcare. This demand has led to the proliferation of the analytics market, with a projected compound annual growth rate of 13.2% through 2022. Meanwhile, the global revenue for big data is forecasted to rise to $274.3 billion in 2022. With a strong market and the heightened demand for data experts, more organizations across all industries aim to hire data experts.
Higher education institutions have opened up both bachelor’s and master’s degrees to online learning in response to this. Online master’s degree courses in data analytics are 100% coursework. They are designed for professionals who want to increase their knowledge and experience in the field but are limited by work schedules or location. These degrees are highly advanced and cover complex topics like forecasting and predictive modeling. While some healthcare roles, such as nurse informatics specialists, now involve some amount of data training, an expert with a formal data education will be especially useful for healthcare facilities.
Medical institutions need people who can help in synthesizing, gathering, and interpreting data. Even the FDA has expressed its intent to hire more data experts to “unleash the power of data” in healthcare. Data has plenty of uses aside from informing medical decisions. As shown in this article, it can also improve cybersecurity measures to secure healthcare data and protect against hackers. Therefore, medical institutions must start investing in both security analytics programs and professionals that can manage them.
Just as branch offices have CCTV cameras and guards for physical security, firewalls are an integral part of any network infrastructure, acting as a barrier against malicious activity and online attackers. They contain a set of security rules and policies that determine which inbound traffic to allow and which to block. Without a strong firewall system, it’s hard to guarantee the security of your company’s network, computers, and data.
So what is firewall management and how is it done? Firewall management is the practice of configuring and monitoring network firewall settings to ensure robust security against cyber threats. A company’s IT team and security administrators are usually the ones who take care of managing the firewall. It usually involves defining the security rules and protocols, tracking changes, monitoring logs, and conducting regular audits.
What Is Firewall Management and Why Your Business Needs It
Thanks to the Internet, companies are more connected now than ever. Sharing and exchanging information has become easier through emails and cloud servers where everything can be stored and sent. However, it can also be tricky to rely on online networks especially when there are cybercriminals and hackers lurking in the digital environment.
For this reason, businesses need a firewall for their peace of mind that all their computers and networks are safe and secure. There are many different types of firewalls but modern next-generation firewalls (NGFWs) offer extensive network protection and security components that can help identify, prevent, and resolve threats.
Here are some of the top advantages of having a firewall:
- Monitors incoming and outgoing network traffic to filter potential threats and spam
- Reviews the identity of users and examines the integrity of the files and documents
- Notifies about any existing malicious activity within the network and blocks them immediately
- Strengthens your security defenses against common threats like phishing and social engineering
- Ensures safe and protected web browsing by reviewing the sites that you are accessing
- Prevents hackers from infiltrating and stealing personal information and sensitive data
Basics of Managing A Firewall
Firewall management is a fundamental practice to assure that you set the right rules and protocols according to your unique security requirements. Managing a firewall is usually performed by the IT staff, security managers, and network admins. Here are some of the important elements of firewall management:
1) Create a standard firewall change management plan
As online attackers become more advanced and sophisticated in their tactics, your firewalls should also be regularly updated to maintain their security capabilities. For this reason, you should have a centralized firewall change management plan to help monitor any changes or configurations made to the network. This works like an audit trail, where you can review any unwanted changes and check whether any unauthorized users had access to the firewall.
Some of the components that a firewall change management plan should have:
- Detailed objectives and enumeration of changes that are made to the plan
- Possible risks of policy and rule changes
- Mitigation plan or solutions to the potential risks
- Logs of who made the changes and when they were made
2) Test the new firewall settings before network-wide implementation
After making updates or modifications to your firewall policies, it’s important to test how well it performs against possible intruders. You can recreate an online attack or check for vulnerabilities by building a test environment that simulates your network’s systems. If any weaknesses show during the test process, you can further fine-tune your firewall configuration to make sure it’s secure before going live.
3) Establish and clean up firewall rule base
A firewall rule base generally refers to the standard set of regulations about what kind of traffic is allowed and blocked by the firewall. It’s important to always check the rule base to maintain peak performance and efficiency of the firewall. Some reminders in optimizing the firewall rules include:
- Remove expired, unused, or shadowed rules
- Avoid conflicting rules that may create vulnerabilities such as hidden backdoor points
- Review and revise incorrect rules to avoid network system failures
- Divide long rule sections to make them easier to understand and process
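Parts of that cleanup can be automated. The sketch below uses a deliberately simplified rule model (action, source, destination port, with "any" as a wildcard) that is invented for illustration; real rule bases have many more fields and vendor-specific formats:

```python
# Each rule: (action, source, destination_port). "any" is a wildcard.
# Simplified model for illustration; real rule bases have many more fields.
rules = [
    ("allow", "any",        443),
    ("allow", "10.0.0.0/8", 443),   # shadowed: rule 0 already matches this traffic
    ("deny",  "any",        23),
    ("deny",  "any",        23),    # exact duplicate of rule 2
]

def covers(broad, narrow):
    """True if `broad` matches every packet that `narrow` matches."""
    _, b_src, b_port = broad
    _, n_src, n_port = narrow
    return (b_src == "any" or b_src == n_src) and (b_port == n_port)

def audit(rules):
    findings = []
    for i, rule in enumerate(rules):
        for j in range(i):
            if rules[j] == rule:
                findings.append((i, "duplicate of rule %d" % j))
                break
            if covers(rules[j], rule):
                findings.append((i, "shadowed by rule %d" % j))
                break
    return findings

for index, problem in audit(rules):
    print("rule %d: %s" % (index, problem))
```

Flagged rules are candidates for removal: a duplicate adds nothing, and a shadowed rule can never fire because an earlier, broader rule always matches first.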
4) Review user access logs and schedule regular audits
When setting up a firewall, you should only assign configuration access to trusted and authorized members of the team. From time to time, you can review access logs to check if there have been any suspicious changes done by an unverified user that exists outside the network.
It’s also important to perform regular firewall audits to assure good cybersecurity posture and maintain compliance with industry regulations and network security guidelines. Audits should be done in the following situations:
- You’ve configured most of the settings in your network firewall
- You’ve updated or installed the latest firewall firmware
- There’s an ongoing firewall migration within the network systems
Common Mistakes In Firewall Management
The risk of cyberattacks is increased with poor firewall configuration or undefined security rules and protocols. To avoid leaving your network susceptible to threats and viruses, here are other mistakes that you need to avoid when managing a firewall:
- Lack of clearly defined rules, parameters, and policies
- Using open policy configuration that allows many users to access the firewall
- Failure to track changes and monitor user access logs
- Not updating the firewall hardware
- Inadequate security and user authentication such as allowing the use of weak passwords
Why Choose Abacus For Managed Firewall Services
If you don’t have enough human resources to establish an IT team, you can outsource managed firewall services from a third-party provider such as Abacus. Abacus has an experienced team of cybersecurity experts who can provide comprehensive firewall solutions to keep your network infrastructure safe and minimize the impact of security risks on your daily operations.
Other benefits of having Abacus IT managed services include:
- Round-the-clock support for firewall management – If you encounter any issues with firewall configuration or management, you can always call Abacus’s expert support staff. They can lend their knowledge and expertise in helping you resolve your issues right away.
- Customized security plans according to your unique needs – Abacus offers tailored security solutions that can address specific gaps in your network. They can also provide recommendations on which type of firewall is the best for your organization.
- Regular threats and vulnerabilities assessment – Proactive threat and vulnerability management is the goal of Abacus. They can perform regular system monitoring and risk assessments to address issues before they escalate to a serious problem.
- Up-to-date security systems – Abacus offers a multi-layered approach to cybersecurity and this includes constant security patching to solidify your network’s defenses from external threats.
Secure Your Network With Abacus Managed IT Services
Managing a firewall is essential to ensure optimal protection for your organization’s network. At Abacus, you can count on our expert IT engineers and managers to be your partner in strengthening your security capabilities.
For more than 15 years, we’ve helped countless businesses across different industries with customizable security solutions, consulting, systems integration, and strategic planning to assure the efficiency of their operations. Contact us now to learn more about our suite of business IT services.
Published 2 Years Ago on Friday, Jul 24 2020 By Adnan Kayyali
Oxford University’s COVID-19 vaccine candidate has been deemed ‘safe’ and shown to provoke an immune response against the virus, with optimistic results after the first and second stages of clinical trials.
These conclusions are drawn from a recent publication in The Lancet medical journal and were stated in a video on the University’s official YouTube channel:
“We have 2 different types of immunity. One is well known, that’s making antibodies which bind to the virus and try to clear it” Explains Professor Adrian Hill, director of the Jenner institute for Vaccine Research “the other is the cellular [immunity] where some white blood cells, called T-cells, can recognize a virus-infected cell and kill that cell and the virus with it”.
The current trials for the COVID-19 vaccine candidate are not enough to start mass production and distribution just yet, but the results have been described as promising. As professor Sarah Gilbert, Project lead at Oxford University explains “We know there’s an immune response to the vaccine, and it’s the kind of immune response we are looking for, [but] we don’t know how big that immune response needs to be”.
In other words, there are some unanswered questions such as how many shots of the vaccine might be needed, what is the required dose and what are the different vaccination strategies needed in people of different ages, particularly older adults.
“We saw the strongest immune response in the 10 participants who received two doses of the vaccine,” Says Professor Andrew Pollard, Chief Investigator of the COVID-19 vaccine study “indicating that this might be a good strategy for vaccination”
Nevertheless, the results of the study, which included over 1000 participants, are quite uplifting, as it is a major step in the right direction. The participants developed detectable neutralizing antibodies which are essential for immunization.
The potential vaccine does come with side effects in about 60% of participants, namely fever, muscle aches, headaches, and injection-site reactions. However, all the effects are considered mild and fade over time. These are expected “reactogenicity” responses, or bodily reactions and side effects, and symptoms were for the most part alleviated with paracetamol.
If this does prove to be successful, Oxford University in partnership with the pharmaceutical company AstraZeneca will produce 100 million doses of the vaccine. However, Mene Pangalos, Executive VP of BioPharmaceuticals Research and Development at AstraZeneca adds that “today’s data increases our confidence that the vaccine will work and allows us to continue our plans to manufacture the vaccine at scale for broad and equitable access around the world”.
© Copyright 2022, All Rights Reserved
Published 2 Years Ago on Saturday, Oct 03 2020 By Yehia El Amine
A new green revolution is coming our way, and it looks promising as 5G draws ever closer to humanity’s grasp.
According to a study done by the United Nations, agriculture is a multimillion-dollar industry and one of the largest in the world, accounting for almost 1 percent of GDP in the UK, 6 percent in the U.S., and 12 percent in Australia.
It’s only going to go upwards as the demand for food and produce is increasing, as the world approaches eight billion in population.
However, like any big industry, it has an Achilles’ heel: it is hugely affected by changes in temperature and moisture levels (among other things), and these problems often surface only after the damage has already been done.
The digitization of farms will not only save billions in waste and losses, but also improve the quality of food that we consume. “The collective ability of farmers to produce more food more efficiently may be how the world feeds the global population, which will hit nine billion by 2030,” the study by the UN stated.
This makes the deployment of 5G a potential saving grace for farms globally, especially in the fight against climate change.
Each degree Celsius increase in temperature could reduce global yields of soybeans by 3 percent, wheat by 6 percent, and corn by 7 percent, according to a report by the National Academy of Sciences.
“One reason is the effects of climate change on animal pests and diseases, as these changes can make them expand into new regions, creating invasive species that can be detrimental to crops,” the report added.
With this in mind, how can 5G benefit both farms and the fight against climate change in one swoop?
Let’s jump right in.
The advantage of smart farming is that it can calculate and estimate a wide array of risk factors as well as present farmers with important data to plan around these said risks.
It allows decision-makers to take the necessary actions with the best possible information at hand, while providing recommendations geared toward efficiency and resource-friendly approaches.
Accurate weather forecasts, information on soil conditions, plant maturity levels, and insect detection are all valuable inputs that call for timely action from farmers to reduce waste and cost.
The Internet of Things (IoT) also has a huge role to play; an example of this is through autonomous vehicles, where managers can provide specific instructions on where and how to deliver certain crops, especially with products that need extra care such as milk, which requires immediate processing thus greatly increasing its quality.
Commonly known as “smart farming,” precision farming aims to give extremely accurate treatments to crops. Gone will be the days of treating entire fields with the same broad brush and the same level of care all around; instead, farmers receive specific information about each crop row and what it needs.
This will not only reduce the waste of water, food, fertilizer, and pesticides, but will also give centralized information about the land and crops in real time.
With the help of Artificial Intelligence (AI) and machine learning algorithms, it will give farmers insight into predicting the susceptibility of crops to disease, and inform farmers of where exactly pests and diseases are located.
The emergence of 5G-powered drones will be the turning point for farmers far and wide; equipped with weed scanners and crop sprayers, they will have the ability to scan entire fields, apply pesticides at a more precise rate.
While many fear that the role of manual labor will fade, workers still have a role to play in what’s to come: the drones will provide them with information about the patches that need the most care, allowing them to allocate their time efficiently.
An example of this can be seen through Volocopter’s new VoloDrone, which is an unmanned, fully electric, heavy-lifting utility drone capable of carrying loads up to 200kg. The drone can transport a wide array of things anywhere on the farm, such as transport boxes, liquids and equipment.
Or acting as an air-based scarecrow to fend off birds in the vicinity with the sound of its wings.
This is where AI-powered sensors come into play, which have the ability to detect, in real-time, data on weather, air, soil parameters, crop growth, and animal behavior. An example of this can be seen through sensors equipped onto drones to analyze nutrient status.
The data collected can then be merged with weather data and a number of agronomic factors to apply an optimal quantity of fertilizer. Experiments done by Chinese telecom titan Huawei over the span of two years showed that they were successful in decreasing the amount of nitrogen fertilizer by 10 percent without any yield loss.
Locating and monitoring the status of valuable livestock is critical to all farmers. Accurate tracking and monitoring can determine the health of cows, their food intake, and even their fertility levels, giving farmers the ability to make the best possible decisions.
According to a study done by Huawei, 5G will enable very high levels of connectivity and geo-location services that can reduce cost, while increasing the performance and wellbeing of livestock far and wide.
Water is the most valuable resource humanity has, which is why every drop is critical, especially around dry, arid, and remote areas.
In a trial partnership with Algeria’s mobile network operator Djezzy, Nokia has created what it calls a Worldwide IoT Network Grid (WING) to equip Algerian peach farmers with practical data to help them achieve better yields. Soil probes buried 120cm under an irrigation line collect and send back data about the soil that allow the farmer to track soil moisture, water patterns and salinity.
The readings are analyzed so the farmer can accurately manage irrigation cycles and soil nutrition. After one month, Nokia’s trial saw the farmer reduce water consumption by 40 percent on a single irrigation line for one hectare, and increase his revenues up to 5 percent per hectare. WING operates on all mobile networks, but such trials will only be improved with 5G.
Another application is the automation of irrigation. Soil sensors measure the amount of available water, and dendrometers measure the water stress of the plants. On several farms, this technology has been shown to reduce the amount of irrigation water by 30 percent.
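The automation loop just described can be sketched as a simple threshold controller. The sensor values and thresholds below are invented for illustration; real systems calibrate them per crop and soil type.

```python
# Toy irrigation controller: irrigate only when the soil is dry AND the
# plants show water stress, instead of running fixed irrigation cycles.
SOIL_MOISTURE_MIN = 0.30   # volumetric fraction below which soil counts as dry
STRESS_MAX = 0.70          # normalized dendrometer stress above which plants suffer

def should_irrigate(soil_moisture, plant_stress):
    return soil_moisture < SOIL_MOISTURE_MIN and plant_stress > STRESS_MAX

readings = [
    (0.45, 0.20),   # moist soil, relaxed plants -> skip
    (0.25, 0.40),   # dry soil but plants still fine -> skip
    (0.22, 0.85),   # dry soil and stressed plants -> irrigate
]
for moisture, stress in readings:
    print(should_irrigate(moisture, stress))
```

Requiring both signals before irrigating is how such systems avoid watering on a fixed schedule, which is where the reported water savings come from.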
It is very important to note that many of the examples above were achieved with the cooperation, funding, and aid of governments worldwide; relying on the private sector alone to make such a monumental shift would only agonizingly prolong the stages ahead.
Thus partnerships need to be formed to encourage the need for different sectors and players to work together to make 5G applications a reality.
This has been highlighted within several reports by the UNDP such as the case of the city of Cauayan in the Philippines, which showed how an agricultural town can become a smart city through leveraging 5G technology.
The city government has partnered with the largest telco in the Philippines to pioneer the use of a 5G network in the country. The government also vowed to train and develop farmers’ skills and knowledge through the Digital Farmers Program, so they are better equipped to use these technologies.
Semantic security refers to the concept that an attacker who sees a ciphertext should learn no more (or very little more) information than an attacker who does not see the ciphertext at all. This should hold even when the set of possible plaintexts is small, potentially even chosen by the attacker himself.
Because it’s difficult to formalize the definition above, we typically use a different definition that’s known to be equivalent. This definition is called “Indistinguishability under Chosen Plaintext Attack”, or just “IND-CPA” for short.
The definition is formalized as a game between an adversary and some honest “Challenger”. For the case of public key encryption the game looks like this:
- First the challenger generates an encryption keypair, and sends the public key to the adversary. (It keeps the secret key.)
- Next, the adversary selects a pair of messages M_0, M_1 (of equal length) and sends them to the challenger.
- The challenger picks a random bit b and encrypts one of the two messages as C = Encrypt(pk, M_b). It sends C back to the adversary.
- Finally, the adversary outputs a guess b'. We say the adversary "wins" if it guesses correctly: that is, if b' = b.
As mentioned in other articles, the adversary can always win with probability 1/2, just by guessing randomly in step (4). So we’re not interested in whether the adversary wins at all. Instead we’re interested in the adversary’s advantage, which is to say: how much better he does than he would if he just guessed randomly.
We can express this advantage as |Probability Adversary Wins – 1/2|. (This probability is taken over many runs of the experiment and all the randomness used — it’s not just the adversary’s success after one game.) An attacker who wins with probability exactly 1/2 will have “zero” advantage in the IND-CPA game. In general for a scheme to be IND-CPA secure it must hold that for all possible (time-bounded) adversaries, the adversary’s advantage will be negligibly small.
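The game translates naturally into code. The sketch below is a toy model: to keep it short it uses a symmetric stand-in scheme built from hashlib (not a real or secure cipher, and not the public-key setting of the definition above), but the challenger/adversary structure of steps (1) through (4) is the same.

```python
import hashlib
import secrets

# Toy randomized "encryption": C = (r, SHA-256(key || r) XOR M), truncated to
# the message length. Illustration only; NOT a real or secure cipher.
def keygen():
    return secrets.token_bytes(16)

def encrypt(key, msg):
    r = secrets.token_bytes(16)                       # fresh randomness per call
    pad = hashlib.sha256(key + r).digest()[:len(msg)]
    return r, bytes(p ^ m for p, m in zip(pad, msg))

def ind_cpa_game(adversary):
    key = keygen()                                    # step 1
    m0, m1 = b"attack at dawn!", b"retreat at two!"   # step 2: equal lengths
    b = secrets.randbelow(2)                          # step 3: hidden bit
    challenge = encrypt(key, (m0, m1)[b])
    return adversary(m0, m1, challenge) == b          # step 4: win or lose

# A strategy-free adversary can only guess at random, so over many games its
# advantage |win rate - 1/2| stays near zero.
wins = sum(ind_cpa_game(lambda m0, m1, c: secrets.randbelow(2))
           for _ in range(2000))
print(abs(wins / 2000 - 0.5))   # small, e.g. around 0.01
```

Because `encrypt` draws fresh randomness on every call, two encryptions of the same message differ, and the compare-the-ciphertexts shortcut described below gets the adversary nowhere.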
One obvious note about the IND-CPA game is that the attacker has the public key. (Recall that he gets it in step (1).) So sometimes people, upon seeing this definition for the first time, propose the following strategy for winning the game:
- The adversary picks two messages M_0 and M_1 and then encrypts both of them himself using the public key.
- When the adversary receives the ciphertext in step (3), he just compares that ciphertext to the two he generated himself.
- Voila, the adversary can always figure out which message was encrypted!
If the encryption scheme is not randomized — meaning that every time you encrypt a message using a given public key, you get the same exact ciphertext — this attack works perfectly. In fact it works so well that an attacker who uses this strategy will always win the IND-CPA game, meaning that such a scheme cannot possibly satisfy the IND-CPA security definition.
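This failure is easy to demonstrate in code. The sketch below is a toy model with an invented deterministic "scheme" built from hashlib (again, not a real cipher); because the toy scheme is symmetric, the adversary's ability to encrypt for itself is modeled as a chosen-plaintext query rather than use of a public key:

```python
import hashlib
import secrets

# Deterministic toy "encryption": the same key and message always give the
# same ciphertext. Invented placeholder for illustration; not a real scheme.
def det_encrypt(key, msg):
    pad = hashlib.sha256(key).digest()[:len(msg)]
    return bytes(p ^ m for p, m in zip(pad, msg))

def game_against_deterministic_scheme():
    key = secrets.token_bytes(16)
    m0, m1 = b"attack at dawn!", b"retreat at two!"
    b = secrets.randbelow(2)
    challenge = det_encrypt(key, (m0, m1)[b])
    # Adversary strategy: encrypt both candidate messages itself (with the
    # public key, or here a chosen-plaintext query) and compare ciphertexts.
    guess = 0 if challenge == det_encrypt(key, m0) else 1
    return guess == b

wins = sum(game_against_deterministic_scheme() for _ in range(1000))
print(wins)   # -> 1000: the adversary wins every single game
```

An always-winning adversary has the maximum possible advantage of 1/2, so no deterministic scheme can satisfy the IND-CPA definition.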
The implication, therefore, is that in order to satisfy the IND-CPA definition, any public-key encryption scheme must be randomized. That is, it must take in some random bits as part of the encryption algorithm — and it must use these bits in generating a ciphertext. Another way to think about this is that for every possible public key pk and message M there are many possible ciphertexts that the encryption algorithm can output, all of which are valid encryptions of M.
We see this use of randomization in many real-world encryption schemes. For example, most RSA encryption is done using some sort of “encryption padding” scheme, like OAEP or (the very broken) PKCS#1v1.5. Algorithms like Elgamal (and derivatives like ECIES) also use randomness in their encryption.
The important takeaway here is that this use of randomness in the encryption algorithm isn’t just for fun: it’s required in order to get semantic security.
The case of symmetric encryption. As a footnote, it’s also worth mentioning that IND-CPA can be applied to symmetric encryption. The major differences in that definition are (A) that in step (1) there is no public key to give to the adversary, and (B) to replace this lost functionality, the challenger must provide an “encryption” oracle to the adversary. Specifically, that means that the challenger must, when requested, encrypt messages of the adversary’s choice using its secret key. This can happen at any point in the game.
The caveats about deterministic encryption also apply in the symmetric case, and for basically the same reason. In short: ciphertexts must be diversified in some way so that two different encryptions of the same plaintext do not produce the same ciphertext. If this was not the case, then the adversary could easily win the game.
The only major difference is that in the symmetric setting, this diversification doesn’t always need to be done with randomness. Some schemes use “state”. For example, it’s possible to build an encryption scheme where the challenger keeps a simple counter between messages, and uses this counter in place of “true” randomness. (A good example of this is CTR mode encryption, where the encryptor ensures that initial counters [IVs] never repeat.) This obviously only works in settings where there is a single encryptor, or where all encryptors are guaranteed never to repeat each other’s counters.
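A stateful counter scheme can be sketched as follows. This toy uses HMAC-SHA256 as the keystream function purely for illustration (real CTR mode uses a block cipher such as AES); the point is that the same plaintext encrypted twice yields different ciphertexts without any randomness:

```python
import hashlib
import hmac

class CounterModeToy:
    """Stateful toy encryptor: a per-message counter stands in for
    randomness. HMAC-SHA256 serves as the keystream function purely for
    illustration; real CTR mode uses a block cipher such as AES."""

    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def encrypt(self, msg: bytes):
        ctr = self.counter
        self.counter += 1                 # a counter value is never reused
        stream = b""
        block = 0
        while len(stream) < len(msg):
            stream += hmac.new(self.key, f"{ctr}:{block}".encode(),
                               hashlib.sha256).digest()
            block += 1
        ct = bytes(a ^ b for a, b in zip(msg, stream))
        return ctr, ct                    # the counter travels in the clear

enc = CounterModeToy(b"k" * 16)
c1 = enc.encrypt(b"same plaintext")
c2 = enc.encrypt(b"same plaintext")
assert c1 != c2   # identical plaintexts, distinct ciphertexts, no randomness
```

Decryption simply regenerates the keystream from the transmitted counter and XORs again, which is why a repeated counter would be catastrophic.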
Source: https://blog.cryptographyengineering.com/why-ind-cpa-implies-randomized-encryption/
Network ports are what computers use to communicate with one another. Each port has a numeric value between 1 and 65535, and some of those port numbers are standardized. For example, webservers typically run on ports 80 and 443. However, attackers also know which ports weak or vulnerable services typically run on. In this article, we’ll discuss closing open ports that either don’t need to be open to the world or are used by vulnerable services.
Note: The instructions provided here will not work in shared hosting environments. You’ll need to contact your provider if you’re using a shared host.
Detecting Open Ports
There are a few ways to detect open ports on a system. We’ll discuss three methods here.
Coalition Insureds have free access to BinaryEdge, an enterprise Internet scanning tool. BinaryEdge makes this process simple. To check what data already exists about your IP address, simply log in to your BinaryEdge portal, enter your IP address into the Host screen, then click Search.
This action will return ports based on data already collected by BinaryEdge scans. You can also use BinaryEdge to perform an updated active scan.
1. Navigate to the Scan screen and select New Scan
2. Select Simple, give your scan job a title, enter the IP address, and select Submit
When the scan is complete, you’ll have a list of all open ports.
Nmap is an IT networking tool designed to look for open network ports. This is a technical application that should be used by engineers. After installing nmap, simply run nmap <ip-address> from a remote network (to detect ports visible from the outside).
The open ports are listed in the output, along with the name of the service this port is most commonly related to.
Netstat is an internal command that will show open ports on a computer. This will show which ports are open on the computer/server, but not necessarily open to the Internet. To check which ports are open to the Internet, you’ll want to use one of the previous methods. However, with the proper options, Netstat will show you which service is using a specific port. Simply type netstat -nlp to see open ports and services.
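If you want to script a quick check yourself, a TCP connect test (the same basic probe as nmap's default connect scan) takes only a few lines of Python. This is a simplified sketch: it covers TCP only, and when run from inside your network it shows internal reachability, like netstat, rather than Internet exposure:

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Try a full TCP connect; returns True if something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host, ports):
    """Return the subset of `ports` that accept a TCP connection."""
    return [p for p in ports if is_port_open(host, p)]

# Example: check a handful of common service ports on a host you own.
print(scan("127.0.0.1", [22, 80, 443, 3389]))
```

Only scan hosts you own or are authorized to test; unsolicited scanning can violate acceptable-use policies.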
Closing Network Ports
There are two primary methods for closing network ports: (1) Disabling the service or (2) Firewalling the service.
Disabling the Service
This is typically the most straightforward remediation when it’s possible. Services that don’t need to be running shouldn’t be running. This will vary by operating system but be sure that (1) you know the impact of disabling the service and (2) once disabled correctly, the service will remain disabled after reboot.
Firewalls should generally follow the Deny-All-Permit-By-Exception (DAPE) principle. In general, you shouldn’t allow any inbound connections to your network that you haven’t specifically authorized. There are a few ways to do this:
Network Firewall Rules. Using your network firewall, remove all rules that allow inbound network access. This is specific to each firewall vendor, but generally an easy process. (Always backup your firewall configuration)
Disable UPnP on Firewall. Many consumer firewalls ship with a feature called UPnP enabled. This feature allows computers on your network to automatically open network ports. This is dangerous in most business environments and should be disabled. (Note: This requires testing after disabling it to make sure all your services still work as intended)
Enable Host-Based Firewall. Depending on your operating system, you’ll want to enable the firewall on your computer/server in the same way you would configure a network firewall. On Windows, you can use Windows Firewall; on a Mac, the built-in macOS firewall.
As a matter of best-practice, you want to enable BOTH a network firewall and a host-based firewall. This is called Defense in Depth, as it prevents a change in the network from inadvertently exposing the server (and vice-versa).
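As an illustration of the deny-all-permit-by-exception approach on a Linux host, a host firewall policy in iptables-restore format might look like the following. This is a hypothetical sketch: the permitted port and file path will differ per system, and rules should be tested before deployment.

```
# /etc/iptables/rules.v4 (hypothetical path) -- default-deny inbound policy
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow replies to connections this host initiated
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow local loopback traffic
-A INPUT -i lo -j ACCEPT
# Exception: this host serves HTTPS, so permit inbound 443 only
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Everything not explicitly excepted is dropped, which is exactly the DAPE posture described above.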
Enabling firewall rules will block external services – by design – and requires caution and testing. Please seek the advice of an IT professional if you are unsure of how to proceed.
For more information on this topic please reach out to us; We’re here to help!
Source: https://help.coalitioninc.com/en/articles/3740263-closing-unused-network-ports
Computer Scientists Focusing on QC Systems Using Trapped Ion Technology; Part of Software-Tailored Architecture for Quantum Co-Design Project
(Phys.org) Computer scientists at Princeton University and physicists from Duke University collaborated to develop methods to design the next generation of quantum computers. Their study focused on QC systems built using trapped ion (TI) technology, which is one of the current front-running QC hardware technologies. By bringing together computer architecture techniques and device simulations, the team showed that co-designing near-term hardware with applications can potentially improve the reliability of TI systems by up to four orders of magnitude.
Their study was conducted as part of the Software-Tailored Architecture for Quantum co-design (STAQ) project, an NSF-funded collaborative research effort to build a trapped-ion quantum computer, and the NSF CISE Expedition in Computing Enabling Practical-Scale Quantum Computing (EPiQC) project. It was published recently in the 2020 ACM/IEEE International Symposium on Computer Architecture.
To build the next generation of QCCD systems with 50 to 100 qubits, hardware designers have to tackle a variety of conflicting design choices. “How many ions should we place in each trap? What communication topologies work well for near-term QC applications? What are the best methods for implementing gates and shuttling operations in hardware? These are key design questions that our work seeks to answer,” said Prakash Murali, a graduate student at Princeton University.
Computer architecture and simulation-based design have been key enablers of technology progress in classical computing. By leveraging these techniques for QC design and adopting a full-system view of the design space, rather than focusing on hardware alone, this study seeks to accelerate progress towards the next major milestone of 50 to 100 qubits.
Source: https://www.insidequantumtechnology.com/news-archive/computer-scientists-focusing-on-qc-systems-using-trapped-ion-technology-part-of-software-tailored-architecture-for-quantum-co-design-project/
New things are coming in the world of Wi-Fi technology, in the form of beamforming and MU-MIMO. Beamforming makes it possible for routers to adjust their phase and power for a better signal by allowing Wi-Fi routers and clients to exchange information about their locations. Beamforming, in either its explicit or implicit form, provides significantly better radio signals and faster forwarding at greater distances. Devices manufactured in the last two years will support explicit beamforming, which allows the client and router to communicate about their locations, providing better steering of signals between the two. Implicit beamforming works in a similar way, steering signals based on the router's internal measurements rather than the respective locations of the router and the client. Prior to beamforming, Wi-Fi routers sent signals out in all directions; beamforming makes Wi-Fi more efficient by steering signals only toward the devices that need them.

MU-MIMO, short for multi-user, multiple input, multiple output, makes more bandwidth available to wireless users. By moving networking away from the one-at-a-time model to a more complex system, multiple devices can converse simultaneously.
David Newman of Network World reviewed the MU-MIMO and beamforming capabilities of the Linksys EA7500, and his results were noteworthy. After a quick power-up, online configuration, and a downloaded firmware update, the router was up and running. Comparing transfer rates between an old 802.11n access point and the new Linksys EA7500 router, Newman found a significant improvement: the old 802.11n access point downloaded data at 25Mbps, while the new Linksys EA7500 downloaded data much faster, at 58Mbps.
His conclusion is as follows:
So, can beamforming and MU-MIMO help you? The answer is “yes, for sure” if you’re in one of three categories:
- if you’ve got devices 2 years old or newer, beamforming can help
- if you’ve got distance issues, beamforming and MU-MIMO can help
- if you’ve got multiple devices, MU-MIMO can help
In all these cases, we saw significant improvements using MU-MIMO and beamforming technology in the Linksys EA7500 Wi-Fi router.
If you would like to educate yourself in more detail about the information presented in this blog post please visit: Review: Wave 2 Wi-Fi delivers dramatic performance boost for home networks
Source: https://www.bvainc.com/2016/05/25/wi-fi-capabilities-increase-beamforming/
While the haze around SOA is yet to settle, organizations are still toying with the concept and asking fundamental questions around its need, perceived benefits and application. In the meanwhile, another cloud is being formed over SOA―that of Cloud computing. How do the two relate? Are they parts of the same paradigm shift or only vaguely related? This article analyzes and compares SOA and the Cloud from different perspectives.
The terminology used to discuss SOA and Cloud computing, and the breadth of what each covers, is often interpreted to suit the need at hand. One often sees a paradigm stereotyped as a specific manifestation or implementation: SOA as Web services, for example, or Cloud as computing units. The definitions below attempt to generalize the terms and thereby create broad applicability of the technologies, patterns of usage, and deployment scenarios.
SOA – Service oriented architecture is an architectural approach for constructing software systems from a set of building blocks, called services. Services differ from components as services are autonomous, defined by their interface, loosely coupled and often support multiple technologies in integration. SOA adoption requires a methodology where these services are developed in tandem by both IT and the business. SOA adoption also requires a governance process where services are defined, changed, modified, combined, versioned, reused and orchestrated to support ever-changing business.
Cloud computing – As defined by Gartner, Cloud is a style of computing where massively scalable IT-related capabilities are provided as a service using Internet technologies to multiple external customers.
IBM describes it as an emerging computing paradigm where data and services reside in massively scalable data centers and can be ubiquitously accessed from any connected devices over the Internet.
With SOA and Cloud computing, cost reduction and business agility are common drivers, yet the benefits are achieved in different ways. Cost savings is a medium- to long-term driver for SOA as cost savings occur only when reuse of services reach levels where the cost of building the service can be offset.
Don’t make the mistake of expecting cost savings in your first SOA project. Your costs will in fact be higher. On the other hand, cost savings are immediate when a Cloud based infrastructure is leveraged appropriately. While rationalization is also a common need of SOA and Cloud computing, the targets vary: business functionality in case of the former and infrastructure that is used to deliver them in case of the latter. The chart below outlines the similarities and differences of the drivers for SOA and Cloud.
Specific stakeholders of an organization are common to both SOA and the Cloud and may have significant interests. This refers to select members of the IT organization―the CIO, delivery teams, and data center personnel who work to implement services and may host them on Cloud infrastructure.
It’s worth mentioning that failed initiatives under both these paradigms are mostly people related, as both require strong processes and governance frameworks to be put in place. While technology related challenges do occur, they are usually overcome eventually. Maturity levels of an organization in using IT and prior demonstration of agility in adopting IT related changes is a good measure of appetite and indication of outcome for such initiatives.
| Stakeholders of SOA | Stakeholders of Cloud Computing |
| --- | --- |
| Sponsored by Business and implemented by the CIO organization, sometimes both by the latter | Sponsored and implemented by the CIO organization |
| Requires active participation from Business, IT and Data Center operations | Requires active participation from IT and Data Center operations |
| Often flows from strategic initiatives and mandates in the organization | Presently more often used to meet tactical and very specific needs |
Provisioning and Lifecycle
Services often are created to meet tactical needs and evolve into enterprise class as use and reuse grows. Service lifecycle management governs this evolution and is also essential to provide a roadmap and sustenance to enterprise services. On the other hand, it’s simpler to procure and use Cloud services. Life cycle challenges for Cloud users are mostly limited to compatibility between upgrades, which again is often addressed through SLA guarantees from the service provider.
When it comes to managing the two classes of assets there are differences. However, synergies exist in the way one is used to manage the other and vice versa, akin to a process running on an operating system that is itself a collection of running processes. The ubiquity of technologies like HTTP, XML, and the Web services standards that emerged from them has led to their use both as preferred implementation platforms for SOA and for administration of Cloud infrastructures.
| Provisioning of Services | Provisioning of Cloud Computing infrastructure |
| --- | --- |
| Initiated by one project/program with the intention of being used by others | Often initiated to meet specific project/program needs. Others may follow suit |
| Service created by using components from diverse platforms and technologies or from other Services | Infrastructure may be created and administered through Web Services |
| Strong Governance needed throughout Service Lifecycle | No concept of Life Cycle. Governance required to regulate employing Cloud based infrastructure |
| Service may be deployed on a Cloud based platform | Cloud leverages specific software packages and hardware deployment configurations |

Paradigm Focus
Enterprises commonly weigh SOA and Cloud computing as paradigms when determining how to meet internal versus external needs, including the business opportunities that each enables for the other. Security concerns are often overlooked when deploying SOA services and overstated when using the Cloud. The reluctance to move critical systems to the Cloud is the biggest limiting factor for its adoption. Industry-wide mindset shifts are needed to create the wave of Cloud adoption.
| SOA Paradigm | Cloud Computing Paradigm |
| --- | --- |
| Used predominantly to expose services within the Enterprise and with partners | May be used to further the reach of services through infrastructure available with the Cloud vendor |
| Often used to improve efficiency and reduce cost within the enterprise | May also be used to realize new business models for service delivery and consumption, like SaaS |
| Used to design systems handling sensitive information, as the security concern exists only within the Enterprise | Enterprises shy away from deploying sensitive data and systems on the Cloud, as infrastructure is shared among Cloud customers |
SOA is off the hype curve. It is becoming a viable approach for implementing enterprise applications. There are many lessons learned from successful and failed initiatives. Cloud, on the other hand, is still up there in the hype curve and can go either way.
The synergies between the two weigh slightly in favor of SOA, as the Cloud can really further the reach of services beyond the enterprise and open up new business opportunities. Many of the usual concerns are becoming less significant, such as whether SLAs can be met amid rapid growth of a customer base, or whether upfront infrastructure investment is justified in an unknown market for the services offered.
Regunath Balasubramanian leads the Architecture Services group at MindTree and is a practicing architect. He is an advocate of open source, both in use and in contribution. He blogs frequently at http://regumindtrail.wordpress.com.
Source: https://cioupdate.com/how-soa-and-the-cloud-relate/
What Is POS Security?
Point-of-sale security (POS security) creates safe environments for customers to make purchases and complete transactions. POS security measures are crucial to prevent unauthorized users from accessing electronic payment systems and reduce the risk of credit card information theft or fraud.
POS hacks represent a major opportunity for cyber criminals. POS applications contain a huge amount of customer data, including credit card information and personally identifiable information (PII) that could be used to steal money or commit wider identity fraud.
By hacking one application, malicious actors can potentially gain access to millions of credit or debit card details that they can either use fraudulently or sell to other hackers or third parties. Hackers can also exploit retailers’ compromised POS applications, which can give them access to vast amounts of customer data, as well as additional applications and systems the retailer operates.
Organizations must use point-of-sale systems security to protect their applications, prevent unauthorized access, defend against mobile malware, and prevent hackers from attacking their back-end systems.
How POS Security Works
Security is one of the biggest risks of POS system environments. Hackers are constantly on the lookout for holes in security and potential weaknesses that might allow them to launch attacks on POS applications.
An attack typically begins with a hacker gaining access to a target system by exploiting a vulnerability or using social engineering techniques. They will then install POS malware that is specifically designed to steal card details from POS systems and terminals, which spreads through an organization’s POS system memory to scrape and collect data. The hacker then moves data to another location for aggregation before transferring it to an external location that they can access.
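The card data these scrapers hunt for is easy to recognize programmatically: memory-scraping malware (and, on the defensive side, data-loss-prevention monitors) typically flag digit runs that pass the Luhn checksum used by payment card numbers. A minimal sketch of that check:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:          # card numbers are 13-19 digits long
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:            # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A well-known test number passes; a one-digit change does not.
print(luhn_valid("4111 1111 1111 1111"))  # True
print(luhn_valid("4111 1111 1111 1112"))  # False
```

Understanding what attackers scan for is also useful defensively: the same pattern match underpins tools that alert when card-like data appears where it shouldn't.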
Organizations can defend against these attack vectors by deploying technology that prevents POS malware. This includes whitelisting specific technology to protect against unauthorized practices, using code signing to prevent tampering, and using chip readers so customers do not have to swipe their credit and debit cards and make it more difficult for attackers to replicate card data.
6 Best Practices for POS Security
There are several measures that organizations can adopt and deploy to defend themselves against POS attacks and data breaches, prevent POS malware infection, and improve their POS security. Such measures include whitelisting applications, limiting POS application risks, ensuring POS software is always up to date, monitoring activity in POS systems, using complex and secure passwords, deploying two-factor authentication (2FA), using antivirus software, and considering physical security measures.
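One of these measures, application whitelisting, is often implemented by comparing a cryptographic digest of each executable against an approved list, so that a tampered or unknown binary is refused before it runs. A minimal sketch (the allowlist contents here are illustrative, not from any real product):

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of the binaries approved to run
# on the POS terminal. (This example digest is the hash of an empty file.)
APPROVED_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(path: str) -> bool:
    """Allow execution only if the binary's digest is on the allowlist."""
    return sha256_of(path) in APPROVED_DIGESTS
```

Because any modification to an approved binary changes its digest, this same mechanism doubles as the tamper detection that code signing provides.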
Here are six point-of-sale best practices for improving POS security:
Use iPads for POS
Many high-profile POS attacks have occurred as a result of malware being loaded into a POS system’s memory. This enables the hacker to upload malware applications and steal data without being spotted by users or retailers. But, crucially, this attack method requires a second application to be running.
As a result, Apple’s iOS systems can help prevent POS attacks because the operating system (OS) can only fully run one application at any time, whereas Windows devices rely on multiple applications at the same time. Organizations can, therefore, use iPad POS solutions to run their POS systems and reduce the chances of POS attacks.
Use End-to-End Encryption
One way for customer data to never become exposed to hackers is through encryption. Encrypting credit card and other sensitive data as soon as the POS device receives the data and when it gets sent to the POS software server will ensure it is never vulnerable, regardless of where and how hackers install malware.
Secure Your POS with an Anti-Virus
Antivirus software allows organizations to secure their systems and prevent POS attacks. It prevents malware from infiltrating organizations’ systems by scanning devices to detect anomalous or problematic applications, files, and user activity that need to be blocked or removed.
An antivirus alerts organizations when there is a potential issue and enables them to initiate the cleansing process to guarantee any present malware does not result in the loss or theft of data.
Lock Down Your Systems
The chances of employees using their organizations’ POS devices to initiate an attack are relatively low, but there is a potential for malicious insider activity or human error. Users could steal, lose, or accidentally misplace devices that have POS software installed, which could allow anyone that picks up the device to view or steal customer data.
Organizations need to lock down their systems to avoid these risks. This involves ensuring employees lock down their devices at the end of every working day, diligently keeping track of every corporate device throughout each day, and securing devices in locations that only a few trusted individuals have access to.
Avoid Connecting to External Networks
Sophisticated hackers can compromise POS systems remotely. This is typically possible through systems that can connect to external networks, which hackers will look to infiltrate through software that remains dormant until it connects to a POS system.
Organizations, therefore, need to avoid connecting to external networks and ensure their systems remain local, internal, and secure. They should look to restrict the handling of business-critical tasks, such as transactions and payment processing, to secure corporate networks.
Putting measures in place to manage and protect POS systems is crucial, but organizations also need to comply with the stipulations of data privacy and protection regulations. This includes the Payment Card Industry Data Security Standard (PCI DSS), which regulates security standards for any organization that handles credit cards from major providers. Compliance covers all transactions carried out through card readers, online shopping carts, networks, routers, servers, and paper files.
PCI DSS is mandated by financial organizations and administered by the PCI Security Standards Council, which is responsible for increasing cardholder data controls to reduce credit card fraud. The Council suggests that organizations eliminate cardholder data where possible, as well as maintain communication with major financial organizations and credit card providers to reduce fraud or theft issues.
It also advises businesses to regularly monitor and take an inventory of their processes and IT assets to ensure they detect potential vulnerabilities as quickly as possible.
What Is the Need for POS Security?
POS security measures are crucial as data volumes increase exponentially alongside the growth in known and unknown attack vectors and security threats. The data held within POS systems is hugely valuable and could be highly damaging for organizations and their customers if it is lost or stolen.
Organizations that rely on POS systems must prioritize POS security to protect their sensitive customer data and prevent the breach of customer payment information. They must introduce measures that protect POS systems and safeguard customer transactions, and provide training for employees on the risks of POS security policies and incidents.
How Fortinet Can Help
Fortinet offers specific POS security solutions for retailers to safeguard their data and users and ensure they provide secure transactions. The Fortinet Retail Cybersecurity offering protects retailers against sophisticated, advanced attack methods while providing customers with positive shopping experiences.
Fortinet solutions provide the visibility, automation, proactive threat intelligence, and high-performance cybersecurity approach required to protect POS systems and ensure PCI DSS compliance. They include products like the Fortinet Security Fabric, which ensures centralized control and visibility across networks and cloud systems, and FortiGate next-generation firewalls (NGFWs), which provide advanced protection across organizations’ IT environments.
What is a POS attack?
A point-of-sale (POS) attack is a cyberattack targeting POS applications and systems that store or process customers’ credit card details or transactions.
What should you do when POS is down?
A POS system going down means customers cannot carry out transactions and will result in them losing trust in a retailer. Organizations need to deploy POS security technology with monitoring and incident response features that alert IT and security teams to an issue, detect and flag threats, and provide real-time response.
What is the importance of implementing security procedures when operating a POS?
Any POS system in operation needs to be secured to prevent hackers from stealing critical data, customer information, sensitive financial details, and committing wider fraud and identity theft. Organizations that rely on POS systems need to prioritize POS security to protect sensitive customer data, prevent the breach of customer payment information, protect POS systems, and safeguard customer transactions.
Source: https://www.fortinet.com/kr/resources/cyberglossary/pos-security
Oracle 11g introduces a new feature called Advanced Compression that offers organizations the promise of tables that take up less space and, therefore, a smaller database. A smaller database taking up less disk space also means a lower disk storage cost. With database sizes continuing to grow at an alarming rate, the ability to increase the amount of data stored per gigabyte is exciting. There are also potential performance benefits for large read operations like full table scans, where Oracle needs to read fewer physical blocks to complete the scan, as well as potential buffer cache memory savings, since compressed blocks allow more data to be stored in the SGA.
Oracle first introduced compression in 8i with index key compression, and then in 9i Oracle added compression for tables. Oracle 9i table compression was limited, as compression could be applied only when data was loaded via operations like CREATE TABLE AS SELECT, direct loads, or INSERT with APPEND. This compression was well suited for initial loads of data, but over time the table had to be reorganized to recompress, which required maintenance and downtime. With pressure to increase the availability of database tables, compression was not well suited for normal OLTP systems, since most data was not direct loaded. Oracle’s introduction of Advanced Compression changes that and allows a table to maintain compression as data is updated and inserted, as shown in the CREATE TABLE:
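A representative statement (the table and column names are invented for illustration) that enables compression for all DML operations looks like this:

```sql
-- Hypothetical example: compression maintained for inserts and updates,
-- not just direct-path loads (requires COMPATIBLE >= 11.1.0)
CREATE TABLE orders (
  order_id     NUMBER,
  customer_id  NUMBER,
  order_total  NUMBER(10,2),
  order_date   DATE
)
COMPRESS FOR ALL OPERATIONS;
```

In 11.2 the equivalent clause is COMPRESS FOR OLTP, as noted in the settings list below; both keep the table compressed through ordinary DML.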
Consider the following Advanced Compression settings:
- NOCOMPRESS The table or partition is not compressed. This is the default action.
- COMPRESS Suitable for data warehouses. Compression enabled during direct-path inserts only.
- COMPRESS FOR DIRECT_LOAD OPERATIONS Same effect as the simple COMPRESS.
- COMPRESS FOR ALL OPERATIONS Suitable for OLTP systems. Compression for all operations, including regular DML statements. Requires COMPATIBLE to be set to 11.1.0 or higher.
- COMPRESS FOR OLTP Suitable for OLTP systems. Enables compression for OLTP operations, including regular DML statements. Requires COMPATIBLE to be set to 11.1.0 or higher and, in 11.2, replaces the COMPRESS FOR ALL OPERATIONS syntax, but COMPRESS FOR ALL OPERATIONS syntax still exists and is still valid.
Source: https://logicalread.com/oracle-11g-advanced-compression-mc02/
Business is more data-driven today than ever before. Data usage, storage, and exchange all consume energy. As an eco-conscious business, it is imperative to consider the implication of data-related energy consumption. Does it enhance or undermine your sustainability efforts? What can you do to attain energy efficiency in your data-related processes? You can find answers to these questions at the intersection of data and energy.
Evolution of Data and Energy
Data centers are going green, while power grids become smarter. Energy grids have evolved from traditional pole-and-line networks to IoT-enabled smart networks. Today’s power grids facilitate two-way transmission, allowing users to integrate renewable energy resources. You can install solar panels on your business premises and channel surplus power into the grid. Smart grids use IoT devices to collect data from across the network to optimize forecasting, billing, and consumption. However, data center operators like Microsoft are the most influential drivers of sustainability at the intersection of data and energy.
The ever-increasing number of eco-conscious businesses is enough to entice data center operators to adopt sustainable practices. Today, about 43% of multi-tenant data centers leverage innovative technologies to enhance energy efficiency and sustainability. Microsoft started integrating its data centers and renewable power generation in 2012. The company’s vision for "data plants" provides a roadmap for other operators to adopt sustainable practices and renewable energy resources in their facilities. Whether you own a data center or data-driven company, you can find solutions to business problems at the intersection of data and energy.
What are the Top Trends at the Intersection of Data and Energy?
Data center operators are building their facilities closer to renewable energy sources. Siting server farms next to power stations that run on wind or waste gases can cut greenhouse gas emissions. Top trends at the intersection of data and energy go beyond purchasing green energy from external sources: data center operators are redefining how they acquire and distribute power in their data centers.
On-site power plants
Integrating power plants into the data center eliminates transmission lines, substations, and transformers from the equation. This approach reduces transmission losses and improves energy efficiency. It also increases operators’ control over infrastructural changes. They can adopt green and smart technologies quickly and easily to enhance sustainability and reliability.
Data as energy
Industry estimates put internal distribution losses at 10-12 percent of the energy consumed by data centers. Companies like Google and Facebook have redesigned their data centers and power distribution units (PDUs) to reduce energy loss. Because data is a form of energy, improving data transmission can help enhance efficiency. For example, Microsoft's data plants use integrated optical grids to transmit data, a design that refines power distribution and reduces energy losses.
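To put that loss figure in perspective, here is a quick back-of-envelope sketch in Python. The 10-12 percent loss range comes from the text above; the 1 MW facility size is a made-up illustration.

```python
# How much power actually reaches the IT load once internal distribution
# losses are accounted for. The 10-12% loss range is from the article;
# the facility size is a hypothetical example.

def delivered_power(facility_kw: float, distribution_loss: float) -> float:
    """Power (kW) reaching the IT equipment after distribution losses."""
    return facility_kw * (1.0 - distribution_loss)

facility_kw = 1000.0  # hypothetical 1 MW data hall

best = delivered_power(facility_kw, 0.10)   # 900.0 kW delivered
worst = delivered_power(facility_kw, 0.12)  # 880.0 kW delivered

print(f"Delivered: {worst:.0f}-{best:.0f} kW; "
      f"lost in distribution: {facility_kw - best:.0f}-{facility_kw - worst:.0f} kW")
```

Every kilowatt recovered here is capacity that serves workloads instead of heating switchgear, which is why integrated "data plant" designs attack this loss first.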
Electric companies and data center operators leverage IoT-enabled devices and cloud-based analytics for real-time tracking and monitoring. These solutions enable extensive data collection and analysis, crucial for efficient resource management. IoT and analytics play a critical role at the intersection of data and energy.
Partnering on power
Moving bits over fiber is more cost-effective than transmitting electricity over the grid. As a business owner, partner with energy companies to define how data and power flow through your ecosystem. For example, shorten the grid and extend the fiber to transmit power cost-effectively. Being part of an energy distribution network allows you to track electric fees charged by different providers and switch to the cheapest source.
Operators and electric companies have invested billions in research and technologies to address challenges at the intersection of data and energy. Business owners can find data centers that use renewable energy sources to reduce their carbon footprint. Data center operators can borrow a page from Microsoft, Google, and Facebook to cut costs and attain sustainability and reliability in their facilities. Evolve to thrive in today's data-driven world.
Lessons Learned or History Repeated?
One of the lessons that has been repeated over and over again throughout the cellular generations is that theoretical performance, the performance designed and specified in the standards, is not reflected in the real world. Such was the case with 2.5G (GPRS), 2.75G (EDGE), 3.5G (HSPA+) and even LTE. The standards bodies write specifications with certain performance requirements, and technologies are put together to try to meet those specifications. LTE has a theoretical speed of 100 Mbps. But according to Open Signal, a well-respected open-source reporting agency on the cellular experience around the world, the average LTE speed worldwide (as of November 2016) is only 17.4 Mbps. In other words, if the target is 100 Mbps, then real-world LTE is only hitting 17.4% of its theoretical capability. This fits the pattern: the real-world performance of cellular technologies falls short, far short, of expectations.
The point here isn't to disparage LTE, far from it. LTE is the workhorse of cellular: powerful, and serving consumers and businesses very well. The point is that while 3GPP has, at least until very recently, touted 10+ years of battery life for LTE-M, it is likely to fall far short once it hits the real world. As a hypothetical, if LTE-M were to meet its battery projection the way LTE meets its throughput spec, it would last only 1 year and 9 months, or 17.4% of 10 years. While that is obviously conjecture, what isn't conjecture is the fact that LTE-M's actual battery-life performance is yet to be borne out.
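The hypothetical above is just two lines of arithmetic; a quick sketch makes the numbers explicit:

```python
# The article's back-of-envelope hypothetical: if LTE-M achieved the same
# fraction of its spec as real-world LTE does for throughput, how long
# would a "10-year" battery actually last?

real_lte_mbps = 17.4    # Open Signal worldwide average, November 2016
spec_lte_mbps = 100.0   # LTE theoretical downlink speed

fraction_of_spec = real_lte_mbps / spec_lte_mbps   # 0.174
projected_years = 10 * fraction_of_spec            # 1.74 years

years = int(projected_years)
months = round((projected_years - years) * 12)
print(f"{fraction_of_spec:.1%} of spec -> about {years} year, {months} months")
# -> 17.4% of spec -> about 1 year, 9 months
```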
And that is really the major problem here: nobody knows LTE-M's actual battery life, because battery life can't be known until the final technology is deployed on a real device, on an actual commercial network, in real-world conditions.
Here are the steps to really knowing LTE-M’s battery life:
- Finalize the written standard
- Build a chip to the completed standard
- Build an actual commercial product with the chip integrated
- Assess chip performance in device in lab conditions
- Deploy in real world conditions and confirm battery life performance
Until that day when all five steps are completed nobody can really know.
Recent advances in computing and storage systems have empowered a new wave of Artificial Intelligence (AI) applications that take advantage of deep neural networks and deep learning (DL). In the past, deep learning required very large amounts of training data, which were hardly available and very expensive to manage. Nowadays, DL is used in a variety of applications, where it delivers exceptional performance and enables functionality that was not possible a few years ago. As a prominent example, Google DeepMind's AlphaGo defeated the Korean Go grandmaster Lee Sedol, who is considered a genius in this particular game.
One of the main characteristics of AI techniques is their ability to learn patterns in massive data sets, as a means of enabling systems that require minimum or even no human intervention. To this end, AI experts have to build quite complex analytics models, which are trained and evaluated using very large amounts of data. Based on these models, AI disrupts entire business domains by undertaking complex problem solving, increasing automation and eliminating human-mediated, error-prone processes. In these ways, it optimizes business processes, thereby ensuring that tasks are carried out in a safer and in a more reliable fashion.
The detection and prevention of fraudulent transactions is one of the primary applications of AI in areas such as banking, insurance, and retail payments. Fraud detection and prevention is typically based on the automated discovery of high-risk, fraud-related patterns across very large volumes of transactional data, including streaming transaction data with very high ingestion rates.
There are different types of patterns that are indicative of potential fraud. As a prominent example, human (e.g., customer) behavior patterns can lead an AI agent to adjust a risk profile towards higher potential risk. In particular, transactions that occur in particular geographical areas might raise suspicion and increase risk. Likewise, transactions linked to high-risk parties (e.g., merchants without a good track record or with a bad reputation) can be an indication of fraudulent activity. Other factors that can raise suspicion include the IP addresses of the parties involved in a transaction, relationships to suspicious persons or activity in social media, as well as the emergence of one or more transactions with unusual monetary value. An AI analytics model will typically consider all the different fraud-related indicators based on different weights and scores. To this end, the use of large amounts of training data enables the specification of parameters and weights for complex neural networks that can effectively score and classify fraudulent transactions.
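To make the weights-and-scores idea concrete, here is a deliberately minimal Python sketch. The indicator names, weights, and scoring rule are illustrative assumptions, not a production fraud model (real systems learn these weights from training data).

```python
# Minimal sketch of weighted-indicator fraud scoring: each indicator that
# fires contributes its weight to a 0.0-1.0 risk score. All names and
# weights are invented for illustration.

FRAUD_WEIGHTS = {
    "high_risk_geo": 0.30,        # transaction from a suspicious region
    "high_risk_merchant": 0.25,   # counterparty with a bad track record
    "suspicious_ip": 0.20,        # IP address linked to prior fraud
    "unusual_amount": 0.25,       # monetary value far outside the norm
}

def fraud_score(indicators: dict[str, bool]) -> float:
    """Sum the weights of the indicators that fired."""
    return sum(w for name, w in FRAUD_WEIGHTS.items() if indicators.get(name))

tx = {"high_risk_geo": True, "unusual_amount": True, "suspicious_ip": False}
print(f"risk score: {fraud_score(tx):.2f}")  # 0.30 + 0.25 -> 0.55
```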
One of the main trends in using AI for fraud detection is a shift towards predictive and preventive detection, i.e., the timely detection of fraud-related patterns before any fraudulent transaction occurs. Such proactive detection requires systems that can automatically collect customer-related datasets (e.g., credit card transactions, mobile payments) while analyzing them in near real time. Hence, real-time AI requires a real-time technical architecture that can handle high-velocity data streams at very low latency.
Another success factor for AI-based fraud detection is the presence of domain experts in the data scientist team. Such experts must have a strong knowledge of the fraud domain, which will help them build analytics models with appropriate parameters and weights for the fraud detection task at hand. Accordingly, they can improve the AI learning process through tweaking parameters and refining weights based on feedback from the models’ evaluation using real-life datasets. In this way, experts can develop fraud analytics algorithms with optimal performance. Moreover, they can use the evaluation results towards improving their knowledge about fraudulent attempts, through observing and detailing the characteristics of normal purchasing behaviors, while at the same time differentiating them from fraudulent processes.
AI scientists who specialize in fraud detection employ a number of best practices when developing and deploying effective solutions.
Like in many domains, AI in fraud detection promises to deliver exceptional automation and intelligence. This could allow the development of systems that are very effective in detecting fraudulent transactions, including predictive analytics systems that are able to identify fraud-related patterns and indicators even before actual fraud occurs. AI systems are currently used in conjunction with human intelligence in order to reduce manual tasks, save time and reduce costs. For example, an AI system could automatically classify 99% of potentially suspicious transactions and act upon them by asking for extra verification, leaving to humans the remaining 1% of borderline cases. That's certainly part of how AI is disrupting the finance, retail, and insurance sectors.
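A threshold-based triage of that kind can be sketched in a few lines; the thresholds and routing labels below are illustrative assumptions, not values from any deployed system.

```python
# Human-in-the-loop triage as described above: cases the model is
# confident about are handled automatically, borderline ones are
# escalated to a human analyst. Thresholds are illustrative.

def triage(risk_score: float) -> str:
    if risk_score >= 0.80:
        return "auto_verify"     # confident fraud: request extra verification
    if risk_score <= 0.20:
        return "auto_approve"    # confident legitimate: let it through
    return "human_review"        # borderline minority: escalate to an analyst

for score in (0.95, 0.05, 0.55):
    print(score, "->", triage(score))
```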
The European Union formulated the EU General Data Protection Regulation (GDPR) to strengthen and unify data protection for all individuals living within the European Union, with transparency, compliance, and punishment being its three biggest pillars.
In this article, we’ll discuss one of the most crucial parts of GDPR, Article 32, a section that’s responsible for the security of personal data being processed, and how you can comply with its requirements.
What is GDPR Article 32 Anyway?
Here’s what the legal text (well, most of it) states about GDPR Article 32:
Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
(a) the pseudonymisation and encryption of personal data;
(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
To put it simply, GDPR Article 32 makes it mandatory for organizations to have technical and organizational security measures in place. These measures are based on different factors, like the degree of sensitivity of the personal data and the purpose for which it’s being acquired.
Business owners have to ensure that they fulfill the legally binding requirements and securely handle customer data. They must adopt specific processing systems that cater to these GDPR requirements by providing appropriate privacy measures via data segregation, access controls, and identity management capabilities.
Interestingly, the regulation doesn’t go into too much detail about what the precautionary processes should look like. This is mainly because of the dynamic nature of technology-based practices, which are constantly evolving, becoming better and more responsive.
Nevertheless, Article 32 compliance requirements do enough to protect customer data, giving individuals the ultimate right to decide how their information can be used.
How GDPR Article 32 Works
At the very minimum, data security measures should:
- Encrypt or pseudonymize personal data.
- Maintain ongoing confidentiality, integrity, availability, accessibility, and resilience of processing systems and services.
- Restore access to and the availability of personal data in the event of a physical or technical security breach.
- Test and evaluate the effectiveness of technical and organizational measures.
Let’s elaborate on the minimum compliance requirement in GDPR Article 32 in more detail below:
Pseudonymization of Personal Data Measures
This requirement focuses on reducing the risk in case information ever gets exposed.
It’s a simple data security approach, where you replace the names and unique identifiers of data subjects with reference numbers. You can then cross-reference this number via a separate document to keep track of your consumers.
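A minimal sketch of that reference-number scheme might look like the following (standard library only; the `SUBJ-` prefix and record fields are invented for illustration):

```python
# Replace direct identifiers with opaque reference numbers; the lookup
# table mapping references back to identities is kept as a separate
# document, as described above. Illustrative sketch, not a GDPR product.

import secrets

class Pseudonymizer:
    def __init__(self) -> None:
        self._lookup: dict[str, str] = {}  # ref -> real identifier (store separately!)

    def pseudonymize(self, identifier: str) -> str:
        ref = f"SUBJ-{secrets.token_hex(4)}"
        self._lookup[ref] = identifier
        return ref

    def reidentify(self, ref: str) -> str:
        """Only possible with access to the separate lookup table."""
        return self._lookup[ref]

p = Pseudonymizer()
record = {"name": p.pseudonymize("Jane Doe"), "diagnosis": "J20.9"}
print(record)  # the name field is now an opaque reference, e.g. 'SUBJ-9f2a3b1c'
print(p.reidentify(record["name"]))  # 'Jane Doe'
```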
Admittedly, pseudonymizing personal data only helps to a limited extent. If a cybercriminal successfully hacks into the corresponding data set, they’ll be able to identify your data subjects, which will defeat the whole purpose of hiding the names in the first place.
You can go the extra mile and encrypt your data, which will make it unreadable for everyone unless you have another piece of information, a.k.a. your decryption key.
Of course, adding this extra security level will prolong the steps to access the data. Keeping this in mind, it’s best to encrypt only those databases that are either in the archives, stored in devices where the risk of exposure is high, or accessed occasionally.
Protection of Confidentiality, Integrity, and Availability of Personal Data Measures
Confidentiality is the assurance that all your critical information will only be accessible to authorized parties; integrity is the assurance that the information is always accurate; and availability is the assurance that the information is always accessible when needed.
You must keep two crucial things in mind when it comes to data confidentiality:
- How to prevent hackers from getting access to your system
- How to prevent your employees from exposing sensitive information
While hackers can be held at bay using anti-malware software, vulnerability scans, and staff awareness training, you can limit insider misuse by creating and enforcing strict policies concerning data handling.
Prompt Data Restoration Measures in Case of Any Disruptions
Suppose a physical or technical incident takes place in your system. Quick damage control is required: you must restore access to personal data as soon as possible after the disruption.
This is why Article 32 outlines creating and maintaining offsite backups to minimize data loss. You can also have an incident response plan that lets you switch to backups with minimal delay.
Regular Testing of Effectiveness of the Adopted Measures
You have to be confident about your adopted technical and organizational measures and continue testing them to ensure they work as intended.
You must keep reviewing all your precautionary measures to discover when a process isn’t being followed properly, or which technology has become faulty.
Problems can also arise when you modify your organizational structure. Changing your system layout can bring quite a few notable changes, which, in turn, can result in specific processes becoming irrelevant.
The main idea here is to regularly test all your adopted technical or organizational measures. Our recommendation would be to schedule periodic audits, penetration tests, or vulnerability scans.
Now, let's explore how three security features (data segregation, role-based access control, and identity management) can assist in meeting Article 32's requirements, helping ensure the security of processing.
Security Feature #1: Data Segregation
Data segregation lets you separate content into different portals to set up different access control for every type of content.
You can divide all your data by distributing users into different groups. For instance, you can have departmental groups or project-wise groups, whichever suits you best based on your requirements.
Still, this isn't enough to meet all GDPR Article 32 requirements by itself. You need other complementary security measures in place, like role-based access control, multi-factor authentication, and SSO integration.
The good news is that these measures together can really boost your system’s security, making all the effort worth it.
Security Feature #2: Role-Based Access Control
Role-based access control can work in combination with data segregation to prevent unauthorized access to data.
You can set different user roles based on an individual employee’s position, authority, and trust level. As a result, you won’t have to worry about your data being used by unauthorized eyes.
Alternatively, you can have project-specific or department-wise user groups or multiple autonomous portals to meet GDPR requirements using data segregation capabilities for teams not confined to a specific user role.
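A role-based policy of this kind reduces to a mapping from roles to permitted actions; the role names and permissions below are illustrative assumptions, not drawn from any specific product.

```python
# Minimal role-based access control: every action is denied unless the
# user's role explicitly grants it (deny by default).

ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete", "manage_users"},
    "manager": {"read", "write"},
    "analyst": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Unknown roles and unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))    # True
print(is_allowed("analyst", "delete"))  # False
print(is_allowed("intern", "read"))     # False -- role not defined
```

Deny-by-default is the design choice that matters here: adding a new role or action never silently widens access.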
Security Feature #3: Identity Management
Having an identity management system is another great tactic to ensure that only authorized and relevant users can access your data so that everything is in compliance with Article 32 requirements.
You can use single sign-on integration to protect customer identities. This integration supports multiple authentication providers like directory services (AWS, Azure Active Directory, etc.), Identity Access Management (IAM) services (OneLogin, Okta, etc.), and third-party login services (Facebook, Google, etc.), which restricts hackers from entering your databases.
Furthermore, experts found that while 56% of Europeans have experienced some kind of fraud, one-third of them have faced identity theft. This sheds light on how important identity management is for every organization, especially those that deal with sensitive information.
How to Get Started With GDPR Article 32
GDPR Article 32 is definitely a positive development from a customer's perspective. Business owners have to take several data privacy measures to ensure compliance and avoid penalties, which in turn ensures that client data is protected as well as possible.
Let’s take a look at what you need to do to meet GDPR Article 32 requirements.
- Step 1: Review the state of the art (an evaluation of the recently released and advanced data security and privacy enhancement tools) and cost of implementation when considering information security measures.
- Step 2: Make an information security policy and other specific policies to track technical and organizational measures and address information security measures.
- Step 3: Regularly review adopted policies to ensure they keep working as you want them to. You should always look for improvement opportunities whenever possible.
- Step 4: Analyze whether new measures have to be implemented to boost efficiency.
- Step 5: Implement basic technical controls as specified by established frameworks like Cyber Essentials.
- Step 6: Test and review technical and organizational measures to identify areas for improvement in your systems.
- Step 7: Take necessary measures to protect your data’s confidentiality, integrity, and availability.
- Step 8: Implement measures to restore your access to personal data in case of any disruption.
- Step 9: Implement measures, where possible, that adhere to an approved certification mechanism or code of conduct.
- Step 10: Ensure the data processor implements appropriate technical and organizational measures.
Over the last few years, we’ve witnessed a wave of planted malware and cyber-attacks directed at governments, companies and other organizations. These range from the Stuxnet and Flame viruses infecting industrial control systems and computers in the Middle East to North Korea reportedly attacking systems in South Korea.
Yet the boundaries of cyber-warfare continue to expand. A new “Threat Intelligence Report” from Arbor’s Security Engineering & Response Team (ASERT) reports that there has been an uptick in advanced persistent threat (APT) activity aimed at members of the Tibetan community, as well as journalists and human rights workers based in Hong Kong and Taiwan.
A tool to exploit the victims, dubbed the “Four Element Sword Builder,” relies on weaponized Microsoft Office RTF documents to conduct these campaigns. Researchers examined 12 different targeted exploitation incidents from a larger universe of attacks and found links to pre-existing patterns referred to as the “Five Poisons.” These targeted groups include Uyghurs, Tibetans, Falun Gong, members of the democracy movement and advocates for an independent Taiwan.
The targeting scheme, along with various malware artifacts and associated metadata, suggest that the threat actors have a Chinese nexus, Arbor reports. The perpetrators use spear-phishing techniques and malware to view and steal data, encrypt and lock files, and engage in other destructive activities. The payloads include Grabber, T9000, Kivars, PlugX, Gh0StRAT and Agent.XST.
Arbor has also identified Remote Access Trojan (RAT) Poison Ivy attacks aimed at activists in Myanmar—including behavior that the threat detection firm hadn’t witnessed previously. In fact, these types of attacks have been unleashed across Asia over the past 12 months.
Unfortunately, this is just the tip of the proverbial iceberg. Various other nation states, terrorist groups and others are ratcheting up the ferocity of the assaults. In fact, NPR recently reported that the U.S. is stepping up cyber-attacks on ISIS, including the use of spying and geolocation tools to conduct surveillance and identify the organization’s leaders.
Amid all of this, there are few, if any, rules of engagement, and there are few of the protocols and restrictions that apply to conventional warfare. How and when cyber-attacks are justified remains murky.
For example, as the NPR story points out, what happens if the U.S. and its allies target terrorists using a cellular network, but mistakenly take down a hospital with them? When and how does a country fight back, particularly if it’s not entirely clear who is perpetrating the attack?
For now, there are many more questions than answers, including what, if any, international treaties should exist. Michael Sulmeyer, formerly director of plans and operations for cyber-policy at the U.S. Defense Department, describes cyber-warfare as the "fifth domain," following land, sea, air and space.
Metadata management is key to wringing all the value possible from data assets.
However, most organizations don’t use all the data at their disposal to reach deeper conclusions about how to drive revenue, achieve regulatory compliance or accomplish other strategic objectives.
What Is Metadata?
Analyst firm Gartner defines metadata as “information that describes various facets of an information asset to improve its usability throughout its life cycle. It is metadata that turns information into an asset.”
Quite simply, metadata is data about data. It’s generated every time data is captured at a source, accessed by users, moved through an organization, integrated or augmented with other data from other sources, profiled, cleansed and analyzed.
It’s valuable because it provides information about the attributes of data elements that can be used to guide strategic and operational decision-making. Metadata management is the administration of data that describes other data, with an emphasis on associations and lineage. It involves establishing policies and processes to ensure information can be integrated, accessed, shared, linked, analyzed and maintained across an organization.
Metadata Answers Key Questions
A strong data management strategy and supporting technology enables the data quality the business requires, including data cataloging (integration of data sets from various sources), mapping, versioning, business rules and glossaries maintenance and metadata management (associations and lineage).
Metadata answers a lot of important questions:
- What data do we have?
- Where did it come from?
- Where is it now?
- How has it changed since it was originally created or captured?
- Who is authorized to use it and how?
- Is it sensitive or are there any risks associated with it?
Metadata also helps your organization to:
- Discover data. Identify and interrogate metadata from various data management silos.
- Harvest data. Automate the collection of metadata from various data management silos and consolidate it into a single source.
- Structure and deploy data sources. Connect physical metadata to specific data models, business terms, definitions and reusable design standards.
- Analyze metadata. Understand how data relates to the business and what attributes it has.
- Map data flows. Identify where to integrate data and track how it moves and transforms.
- Govern data. Develop a governance model to manage standards, policies and best practices and associate them with physical assets.
- Socialize data. Empower stakeholders to see data in one place and in the context of their roles.
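As a concrete illustration, the answers to those questions can be captured as a simple catalog record per data asset. The field names below are an invented sketch, not any specific product's schema.

```python
# A sketch of the record a metadata catalog keeps for one data asset:
# descriptive attributes plus lineage, answering "what data do we have,
# where did it come from, who may use it, and is it sensitive?"

from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    name: str
    source_system: str                 # where did it come from?
    owner: str                         # who is accountable for it?
    sensitivity: str                   # e.g. "public", "internal", "PII"
    lineage: list[str] = field(default_factory=list)  # how it changed over time

customer_email = MetadataRecord(
    name="customer_email",
    source_system="crm_db.customers",
    owner="data-governance@example.com",
    sensitivity="PII",
)
customer_email.lineage.append("masked by etl_job_42 before warehouse load")

print(customer_email.sensitivity)  # 'PII' -> subject to stricter governance
print(customer_email.lineage)
```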
The Benefits of Metadata Management
1. Better data quality. With automation, data quality is systematically assured, with the data pipeline seamlessly governed and operationalized to the benefit of all stakeholders. Data issues and inconsistencies within integrated data sources or targets are identified in real time, improving overall data quality by shortening the time to insight and repair. It's easier to map, move and test data for regular maintenance of existing structures, movement from legacy systems to new systems during a merger or acquisition, or a modernization effort.
2. Quicker project delivery. Automated enterprise metadata management provides greater accuracy and up to 70 percent acceleration in project delivery for data movement and/or deployment projects. It harvests metadata from various data sources and maps any data element from source to target and harmonize data integration across platforms. With this accurate picture of your metadata landscape, you can accelerate Big Data deployments, Data Vaults, data warehouse modernization, cloud migration, etc.
3. Faster speed to insights. Highly paid knowledge workers like data scientists spend up to 80 percent of their time finding and understanding source data and resolving errors or inconsistencies, rather than analyzing it for real value. That equation can be reversed with stronger data operations and analytics leading to insights more quickly, with access and connectivity to underlying metadata and its lineage. Technical resources are free to concentrate on the highest-value projects, while business analysts, data architects, ETL developers, testers and project managers can collaborate more easily for faster decision-making.
4. Greater productivity & reduced costs. Being able to rely on automated and repeatable metadata management processes results in greater productivity. For example, one erwin DI customer experienced a steep improvement in productivity: more than 85 percent because manually intensive and complex coding efforts were automated, and more than 70 percent because of seamless access to, and visibility of, all metadata, including end-to-end lineage. Significant data design and conversion savings, up to 50 percent and 70 percent respectively, are also possible, with data mapping costs going down as much as 80 percent.
5. Regulatory compliance. Regulations such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), the Basel Committee on Banking Supervision (BCBS) standards and the California Consumer Privacy Act (CCPA) particularly affect sectors such as finance, retail, healthcare and pharmaceutical/life sciences. When key data isn't discovered, harvested, cataloged, defined and standardized as part of integration processes, audits may be flawed. Sensitive data is automatically tagged, its lineage automatically documented, and its flows depicted so that it is easily found and its use across workflows easily traced.
6. Digital transformation. Knowing what data exists and its value potential promotes digital transformation by 1) improving digital experiences because you understand how the organization interacts with and supports customers, 2) enhancing digital operations because data preparation and analysis projects happen faster, 3) driving digital innovation because data can be used to deliver new products and services, and 4) building digital ecosystems because organizations need to establish platforms and partnerships to scale and grow.
7. An enterprise data governance experience. Stakeholders include both IT and business users in collaborative relationships, so that makes data governance everyone’s business. Modern, strategic data governance must be an ongoing initiative, and it requires everyone from executives on down to rethink their data duties and assume new levels of cooperation and accountability. With business data stakeholders driving alignment between data governance and strategic enterprise goals and IT handling the technical mechanics of data management, the door opens to finding, trusting and using data to effectively meet any organizational objective.
An Automated Solution
When approached manually, metadata management is expensive, time-consuming, error-prone and can’t keep pace with a dynamic enterprise data management infrastructure.
And while integrating and automating data management and data governance is still a new concept for many organizations, its advantages are clear.
erwin’s metadata management offering, the erwin Data Intelligence Suite (erwin DI), includes data catalog, data literacy and automation capabilities for greater awareness of and access to data assets, guidance on their use, and guardrails to ensure data policies and best practices are followed. Its automated, metadata-driven framework gives organizations visibility and control over their disparate data streams – from harvesting to aggregation and integration, including transformation with complete upstream and downstream lineage and all the associated documentation.
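To make the lineage idea concrete, here is a minimal sketch (with hypothetical names, not erwin DI's actual API) of a catalog that records source-to-target mappings and answers upstream and downstream lineage queries:

```python
# Minimal sketch of a metadata catalog: record source-to-target mappings
# and walk the resulting lineage graph in either direction.
from collections import defaultdict

class LineageCatalog:
    def __init__(self):
        self.downstream = defaultdict(set)  # source element -> target elements
        self.upstream = defaultdict(set)    # target element -> source elements

    def add_mapping(self, source, target):
        self.downstream[source].add(target)
        self.upstream[target].add(source)

    def trace(self, element, direction="downstream"):
        """Return every element reachable from `element` in one direction."""
        edges = self.downstream if direction == "downstream" else self.upstream
        seen, stack = set(), [element]
        while stack:
            node = stack.pop()
            for nxt in edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

catalog = LineageCatalog()
catalog.add_mapping("crm.customer.email", "staging.contacts.email")
catalog.add_mapping("staging.contacts.email", "warehouse.dim_customer.email")

print(catalog.trace("crm.customer.email"))                        # downstream
print(catalog.trace("warehouse.dim_customer.email", "upstream"))  # upstream
```

A real tool would additionally store the transformation logic on each edge, which is what makes end-to-end documentation and impact analysis possible.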
erwin has been named a leader in the Gartner 2020 “Magic Quadrant for Metadata Management Solutions” for two consecutive years. Click here to download the full Gartner 2020 “Magic Quadrant for Metadata Management Solutions” report.
Biomethane Market size is projected to witness appreciable growth over 2021-2027, given the shifting preference from fossil fuels to biogas and biomethane to mitigate emission levels. Likewise, escalating product usage as a renewable alternative to natural gas, to supply heat and electricity in industries and buildings, will further stimulate the industry outlook. Purified biogas is also deployed across the transport sector, especially for heavy vehicles and vessels, to abate GHG emissions.
Biomethane refers to a naturally occurring gas derived through the purification or upgradation of biogas. The fuel, unlike renewable natural gas, is obtained from the enhancement of by-products and products of agro-industrial and agricultural chains, organic waste and sewage treatment, thus promoting local and circular economies.
Strong emphasis on the usage of biofuels across the automotive sector to pursue the goal of sustainable mobility is among the key trends bolstering the overall biomethane market expansion. The deployment of biomethane in vehicles can lead to the considerable reduction of NRPEC (Non-Renewable Primary Energy Consumption) and CO2 emissions, thus evolving as a preferred eco-friendly gas. Increasing cancer burden due to the high exposure to air pollutants emitted from petroleum extraction and refinery plants is anticipated to further boost the market share in the years ahead.
However, the production of unpleasant odors, the highly-explosive nature of methane, and the limited power generation are expected to act as major restraining factors for the industry in years to come.
With regards to type, the agricultural waste segment is set to register a significant revenue by 2027. This can be attributed to the growing interest in biogas from agricultural residues and agro-food wastes as it can avoid land use conflicts regarding the use of dedicated crops. Furthermore, the strong focus on anaerobic digestion of residual agricultural biomass to produce biogas and reduce GHG emissions will also boost the segmental growth over the estimated timeline.
Regionally, the North America biomethane market will account for a substantial share over 2021-2027, due to the emergence of wastewater treatment plants (WWTPs) as a potential source of energy in the region. As per the Water Environment Federation and the National Association of Clean Water Agencies, the energy generated at the WWTPs in the U.S. could meet over 12% of the national electricity demand. Mounting adoption of anaerobic digester systems to produce biogas will further boost the regional industry outlook in the upcoming years.
The competitive landscape of the global biomethane industry consists of companies such as J V Energen, Gazasia, ETW Energietechnik, SoCalGas (San Diego Gas & Electric, Pacific Enterprises, Sempra Energy), Green Elephant, Verbio, and Mailhem Ikos Environment, among others. Strategic product launches and mergers and acquisitions are the major strategies being employed by these market players to boost their business operations across the global market.
For instance, in February 2021, Verbio Vereinigte BioEnergie AG announced that its straw-to-biomethane facility would begin operations by fall 2021. The facility, which is under development in Nevada, Iowa, has been designed to contain air pollution caused by the massive burning of waste straws.
The novel coronavirus pandemic has imposed unprecedented constraints to the economic and social activities, along with severe impacts on energy use. In 2020, a considerable decline in transport biofuel production and renewable heat consumption was observed worldwide. In addition, volatile oil and gas prices have made renewable heat technologies and biofuels adoption depend on cost-competitiveness. These aforesaid factors, in turn, presented various challenges to biomethane production.
However, the share of renewable sources in electricity production has surged at record levels in various countries amid the pandemic. In less than 10 weeks of the lockdown, the U.S. and India increased their renewable energy consumption by 40% and 45% respectively. Moreover, Spain, Germany, and Italy also set new records for the integration of variable renewable energy sources to the grid, which could help the biomethane market regain momentum in the near future.
NFS has traditionally been a semi-robust method of sharing files between Unix-based computers. The IETF has been working on NFSv4 since early 2000, and implementations have finally started springing up everywhere. The Linux kernel team has focused its efforts in NFSv4, providing its least buggy NFS implementation yet. If that alone isn’t reason enough to start using v4, read on.
Some key features are:
- POSIX ACL support, including Windows ACL interoperability.
- Locking enhancements, including advisory and mandatory locks.
- Data replication or migration is made easier with NFS’s help.
- TCP-only, with tons of improvements, making NFS over WAN links viable.
- No more portmap, lock manager, mount and RPC hell; NFSv4 uses RPC, but all over port 2049.
- Security, for the first time: authentication, cryptographic integrity and encryption are all possible.
In short, NFSv4 has addressed every major complaint ever registered about NFS. Being a more robust mechanism for sharing files means that open source folk were enthusiastic about implementing it, and they have done well. Companies poured money into the Open Source Development Lab (OSDL) in Beaverton, Oregon to stimulate more robust testing and development of NFSv4 in Linux. Solaris, Linux, AIX, (and other Unixes) and Windows can successfully share files using the new NFS protocol, with no insurmountable compatibility issues.
To expand on the highlights of NFSv4 outlined above, let’s begin with ACL support. A fundamental change in the way NFS looks at files was needed. The new model makes sense to Unix as well as Windows, and supports an extended set of permissions attributes. Even the notion of a File Handle has been completely rethought and for conceptual purposes, can be thought of as deprecated.
NFS has always been good at dealing with network failures. Writes to the file systems will block, and when operations resume, they will complete. One limitation has always been with locking, though. NFSv4 now supports a finer granularity in locking, implementing advisory and mandatory lock mechanisms. This means that clients can choose to lock files at more than just “I’m using it” levels, allowing greater amounts of concurrent access to files.
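For illustration, applications exercise byte-range locks through the standard POSIX fcntl interface; the sketch below locks only part of a local file (on an NFSv4 mount, the client kernel would forward the same request to the server's lock state):

```python
# Sketch of POSIX byte-range advisory locking via fcntl. This is the
# client-side API an application uses; it is not NFS-specific code.
import fcntl, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "w") as f:
    f.write("0" * 64)          # a 64-byte file shared between processes

with open(path, "r+") as f:
    # Exclusively lock only bytes 0..15, leaving the rest of the file
    # available for other clients to lock and modify concurrently.
    fcntl.lockf(f, fcntl.LOCK_EX, 16, 0, os.SEEK_SET)
    f.write("updated-section!")  # exactly 16 bytes
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN, 16, 0, os.SEEK_SET)

print("wrote:", open(path).read(16))
```

Because the lock covers only a range, two clients editing disjoint regions of the same file never block each other, which is exactly the finer granularity the v4 protocol exposes.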
Data replication allows easy copies of file systems to be propagated to multiple servers, and some NFSv4 implementations (AIX, most notably) can even redirect a client to the appropriate server. Many companies are talking about ways to make NFS capable of failover, and IBM has implemented it already. Data migration is also part of the v4 specification, which can provide a simple way to move NFS services and the related data to new hardware.
NFS has historically been very bad over WAN, or high-latency links. For reliability, TCP has always been available, but performance has always been bad across non-local networks. UDP functionality has been removed, making TCP the only option. Couple that with tons of performance enhancements and WAN operation is not only possible, but very efficient. The protocol is also self-contained, enabling Internet usage without opening gaping holes in firewalls. Locking and mounting file systems all happen over port 2049, and if NFSv4 is the only NFS protocol enabled, opening that to the Internet can be quite secure.
Security had to be addressed if NFSv4 was to become an Internet-accessible protocol. The RPCSEC_GSS protocol is required for version 4 implementations, which means it will support: Kerberos v5, LIPKEY, and SPKM-3. A server will control which is allowed, along with the requirements for authentication and encryption. The new school of thought for NFS, similar to what CIFS in Windows requires, is that individual users get authenticated, not just the machines they are on.
If you’re running NFS in a small environment on newly installed Linux or Solaris machines, you’re probably already running NFSv4 in AUTH_SYS mode. That is the default, crufty old way for authenticating systems that NFS has always used. It “just works” for most people. For those in a larger environment, probably with multiple DNS subdomains accessing the same file server, you’ll probably run into problems.
To switch from earlier versions to NFSv4, the first order of business is to understand NFS domains.
The purpose of domains in NFS is to allow a more robust security policy for user access, and at the same time provide a nice management mechanism. In previous NFS versions, the identity of a person was the same, regardless from where they accessed an NFS share. If you exported a file system to multiple computers, user Bob with UID 3333 on one computer would own all of the files from both locations, even if Bob wasn’t UID 3333 on all systems. If someone was root on a different computer, they could simply create a user with UID 3333 and they had access to Bob’s files. This made Bob quite unhappy.
With NFSv4 domains, username (and group) to UID (or GID) mappings are done based on "name@domain" strings, eliminating many shortcomings in NFS. Coupled with authentication, NFS deployments can now become very secure, manageable and arbitrarily complex. This is a giant leap forward. The easiest method for a smooth transition is to configure all your hosts to have the same domain. Of course you’ll eventually want to use the authentication mechanisms that NFS provides, but for trouble-free upgrades to the more robust NFSv4, the only real gotcha is configuring NFS domains.
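As a concrete example for a typical Linux client and server (file locations and export options vary by distribution, so treat this as a sketch rather than a recipe):

```shell
# On every client and server, set the same NFS domain in /etc/idmapd.conf:
#   [General]
#   Domain = example.com
#
# On the server, export a filesystem for v4 (a line in /etc/exports):
#   /srv/nfs4  *.example.com(rw,sync,no_subtree_check)

exportfs -ra    # server: re-read /etc/exports

# On the client, mount while explicitly requesting NFSv4:
mount -t nfs4 server.example.com:/srv/nfs4 /mnt/data
```

If the Domain values do not match on both ends, the id mapper cannot translate owners, and files typically show up as owned by "nobody" on the client.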
We’ll explain more about domains when we follow up on implementation, next week.
Countless times Unix administrators have half-joked about using Samba and Windows file sharing protocols for authenticated access to Unix shares. Those thoughts can safely be forgotten, and there’s no need to feel dirty any longer! NFSv4 has come to the rescue, finally.
Artificial intelligence, robotics and cloud computing are a catch-22. On one side of the equation, these technologies make our lives so much easier by automating tasks and simplifying work. They can be used to perfect a development or mechanical process too, as a machine will never tire or feel exhaustion like a human. They deliver even, consistent results almost endlessly.
They also have the potential to disrupt many industries by saving brands and organizations a considerable amount of money. Therein lies the problem. The rise of modern technologies like these could replace a huge number of jobs, especially among the middle-class.
In fact, it’s a reasonable and widespread worry that computers, machines and AI will soon replace most human workers.
Fast-food restaurants like McDonald’s are already toying with the idea of swapping out human cashiers for automated ordering kiosks. IBM has deployed its machine-learning and AI platform Watson in many innovative ways. Even social media platforms like Facebook and Pinterest are using AI and machine-learning to deliver relevant content to their userbase.
As for robotics, companies like Amazon are using them in warehouses to work alongside their employees and reduce delivery times. There’s even talk of using drones to make local deliveries.
Ultimately, it’s not difficult to see where this is headed and how this will affect the future, particularly when it comes to work. In fact, if there is one thing we’ve learned from this, it’s that we have absolutely no idea what careers, jobs or projects could be uprooted by these technologies. It’s permeating just about every industry. Even journalists have something to fear, as AI isn’t too bad at writing, either!
Skills That Will Help You Keep Your IT Job
So, how do you ensure you keep your job in an ever-changing world, specifically one that’s moving at hyper-speed? You could be out of work in an instant, replaced by a robot, computer or machine of some kind.
You can’t outperform a machine — it’s just not possible. Humans need breaks, and they need to eat and sleep. Machines and AI, on the other hand, require none of those things.
It won’t be long before you can’t outsmart modern AI and computers either. Thanks to cloud computing and big data, these systems will have access to an infinite and immediate source of information. It’s not necessarily that they will be smarter than humans, but they will certainly be armed with more data and can process it faster.
That’s OK, you probably cost a lot less than most of these technologies, right? Wrong.
At nearly every turn, it seems as though these platforms have you beat. That’s why so many people are afraid they’re going to be out of work soon. Sadly, that’s true in many instances. They will soon be obsolete, but that doesn’t have to happen to you.
As an IT professional — hopefully with a long list of experience — there are some things you can do to boost your job security.
1. Get Creative
One trait in particular that humans have that most computers could never dream to rival is true creativity. Looking at many of the jobs being replaced currently, most require analytics tools, critical thinking and planned reactions — like being a cashier. What they have little of, however, is creative thinking.
Yes, some AI tools are being used to write and generate journalistic content. However, generated content will never rival true content — at least when it comes to creativity. Humans are infinitely more creative — think of artists, musicians, writers, architects, advertisers and marketers, computer programmers and game developers. These are all creative, hard-to-clone careers, at least when it comes to the products and services being created.
The problem with this, unfortunately, is that not everyone is born creative.
The higher the degree of creativity a job requires, the less likely you and your co-workers will be replaced by AI and computers.
Someday, AI and computers may replace every job, every career — or they may not. Before that can happen, the relevant systems need to be developed and the appropriate data needs to be collected, which could take years. Without the proper analytical tools, data sources and algorithms, AI simply cannot do its job.
That means specialized jobs and skillsets are protected for some time to come. The more specialized and niche your skillset, the less likely you are to be replaced.
Obviously, every industry and career is going to be different. A good bet is to stick with soft skills, things that machines and computers have a difficult time emulating. This includes tasks like providing health care as a nurse, anything that requires public or event speaking, and team-based cooperative responsibilities.
Once you specialize in something, don’t stop there. The trick is to continue improving your skill and knowledge sets as much as possible. Become a tour de force in your industry.
3. Learn to Work With AI, Computers or Robots
This idea is brilliant, really. No matter how many machines, systems or robots are used to replace jobs, they will always need maintenance and administrative teams to keep them afloat. What happens when a computer or robot breaks down? What happens when there’s a computer glitch caused by jumbled code?
So many people are focused on the jobs and careers that are going to be taken away, they’re not paying attention to the ones that are going to be created.
Just to name a few, these platforms need AI trainers, programmers, maintenance crews, installers and data analysts. All those data sources the systems are plugging into need to be monitored to make sure they’re handled properly.
If you’re in IT, you already have a lot of the skills required to handle these responsibilities. Don’t overlook the possibility of working with these platforms.
4. Go Manual
Maybe it’s just not an option for you, or maybe after years behind a desk or sitting in one place, you’ll enjoy manual labor. Either way, many conventional labor tasks and responsibilities are just too much for AI and machines to handle.
In construction, for instance, there’s only so much that AI and automated systems can be used for. You can use 3-D printing to develop the prefabs for an entire building, yet you still need someone to physically assemble everything.
Even small tasks like folding a towel fall into this category. Machines or robots trying to fold a towel can take up to 25 minutes to do the work, while a human can do it in mere seconds. In other words, some things are just better off done the old-fashioned way.
The Future Is Bright
Despite how things may seem, the future really is bright. While you may be inclined to look at these technologies as disruptive in a negative or bad way, that’s not always the case. In fact, you might find a life dominated by these platforms is much easier and more convenient.
Think of how many menial, tedious tasks you’ve had to perform over the course of your career. Now imagine never having to do any of that again, because a computer can do it for you. You never know — this may all be a blessing in disguise.
Either way, focus on honing the skills mentioned above and you’ll come out on top.
Kayla Matthews, technology writer and cybersecurity blogger
Image Credit: Duncan Andison / Shutterstock
In a world of computers and technology, there are a few things you are expected to know if you are in any way associated with information technology. Technical abbreviations tend to slide quickly into the vernacular, which is why, if you are about to enter the field, a brief review of the terminology used in the market is necessary.
To catch the pace of the constantly growing IT field, it’s essential to know every term ranging from common to advanced on the scale. This blog will take you through a series of the most-used terminologies and will also explain how are they important in the IT profession.
Adaptive technology is known for helping the disabled to function as simply as possible. It is a set of technological tools and products that helps them work efficiently in their daily tasks like getting dressed, eating, writing, etc.
Importance of Adaptive Technology
The importance of being independent in today’s world is rising, and Adaptive Technology gives exactly that to the disabled. Moreover, today’s inclusive work culture discourages discrimination in the workplace, and Adaptive Technology supports this by eliminating the hurdles the disabled face with basic day-to-day activities.
The processes involved in creating a software is usually indicated as Agile Development. This set of processes are usually creative and flexible which means the code is kept simple and is tested often. This term emphasizes on incremental delivery, team collaboration, continual delivery, etc., and it’s incremental yet functional parts are released as soon as they are ready.
Importance of Agile Development
Business requirements are demanding more flexibility year after year. With the ever-changing demands, there is a much-needed change expected in the supply world. Agile Development, with its focus on individuals rather than tools and processes opens up the possibility of a higher quality and functional product.
Business Intelligence (BI)
This term refers to leveraging tools and software to analyze and transform raw data sets into actionable insights that drive better business decisions. Business Intelligence (BI) involves mining data, analyzing trends, and deriving insights to streamline business efforts.
Importance of Business Intelligence (BI)
First and foremost, Business Intelligence takes over a lot of manual work, such as pulling up data for your managers, and cuts extra effort and cost. It enables target-driven business decisions and can open up a new set of opportunities for your business.
BYOD (Bring your own device)
This refers to a policy or a situation where an employee is required to use a personal device such as a mobile phone, tablet, laptop, etc., and integrate it with the organization’s network. BYOD allows employees to gain the IT support of their personal devices from the organization.
Importance of BYOD
When employees bring their own devices to work, it automatically reduces your IT costs and frees up your resources. Moreover, their familiarity with their own devices makes them more productive and creative. These devices do pose security risks, however, and need enhanced protection to stay safe from possible threats.
Also known as multi-platform software and platform-independent software, Cross Platform refers to a product that is interoperable among a variety of operating environments. This is majorly targeted toward creating software that can run on any operating system, i.e., Microsoft Windows, Linux and macOS.
Why are Cross Platforms important?
To encourage user flexibility and to increase the number of potential users, cross-platform softwares are a must. With the technology innovating itself everyday, people are using a variety of operating systems and to keep up with it, coming up with a cross-platform software ensures that you always dominate in your market niche.
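As a small, purely illustrative sketch of the idea in Python: the same script can run unmodified on Windows, Linux and macOS by branching on the platform only where the operating systems genuinely differ:

```python
# Sketch: one script serving all major desktop operating systems by relying
# on standard-library abstractions instead of hard-coded OS-specific paths.
import platform
from pathlib import Path

def app_data_dir(app_name: str) -> Path:
    """Pick a per-user data directory appropriate for the current OS."""
    home = Path.home()
    system = platform.system()
    if system == "Windows":
        return home / "AppData" / "Roaming" / app_name
    if system == "Darwin":  # macOS
        return home / "Library" / "Application Support" / app_name
    return home / ".local" / "share" / app_name  # Linux and other Unixes

print(platform.system(), "->", app_data_dir("MyTool"))
```

Keeping the OS-specific knowledge inside one small function is what lets the rest of the application stay identical across platforms.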
When you outsource your work to a set of people from a specific area, it is referred as Crowdsourcing. This approach towards work takes away the in-house efforts of the task and saves time as the work is assigned to a bigger group of people i.e., general public.
Importance of Crowdsourcing
When you crowdsource and turn to a larger group of people for ideas and solutions, it can enhance the organization’s ideation process. Not only do you get access to great ideas, but you can also use this approach to get unexpected feedback from real-time users.
It refers to a new technology that significantly alters the ways consumers, and businesses operate and primarily, replaces an old one. It sweeps away the existing systems and habits to replace it with something more capable, useful, and recognizable.
Why is disruptive technology important?
Staying updated is a major requirement for IT professionals and with disruptive technology, you will also need to polish your skills according to the advancement/replacement. Moreover, to adapt to Disruptive Technology, it necessary to always be on a lookout for something ‘better’.
Implementing the best strategies is as important as coming up with them. Enterprise Architecture is a function that helps organizations standardize and organize their IT operations. It is important when work processes have to be aligned with a set of business goals, which is possible through Enterprise Architecture.
Importance of Enterprise Architecture
To gain better visibility into business goals, enterprise architecture plays an important role. The enterprise architecture tree provides a holistic view of the organization, which eventually makes teams more efficient and confident about their future goals.
Also known as environmental technology or clean technology, Green Technology is an umbrella term describing environment-friendly innovations that relate directly to recycling waste, safety, health, energy efficiency, etc. It encompasses an evolving group of methods and materials for generating energy, e.g., solar cells and reusable water bottles.
Importance of Green Technology
It’s a win-win situation when the environment is preserved and the organization’s costs are reduced. Green Technology reduces energy usage, which translates into lower carbon dioxide emissions, and also minimizes the use of electricity.
ITIL (IT Infrastructure Library)
The IT Infrastructure Library is a collection of volumes that describes best practices for delivering IT services. It has been revised several times and currently comprises five books.
Relevancy of ITIL
ITIL gives us a systematic and professional approach to the management of IT services. It is also a highly credible framework and a fine guide for an IT professional. Not just this, ITIL certification can bring numerous benefits to your career.
How did you find it useful?
IT is a broad and constantly growing field, and we can’t ignore the pace of its innovation. As it evolves, keeping up with these terminologies, and learning the new ones that come along, is essential for a rising career graph, whether or not you use managed IT services in Vancouver.
Getting a grasp of these terms can help you better understand context and intricate terminological details. On that note, tell us how these terminologies helped you. After all, your output can only be as strong as your communication skills!
Privacy and data protection on the Internet has always been a sensitive topic and one of the main barriers against the extensive adoption and use of Internet services. During the last couple of years, there have been heated debates regarding the need to protect citizens’ personal data online, as a result of the emergence of the General Data Protection Regulation (GDPR) in the European Union (EU). At the same time, the world has witnessed major privacy-leak incidents, which demonstrated the data protection limitations of major online platforms such as Facebook, Yahoo, and Instagram. The recent leak of nearly 87 million Facebook users’ data as part of the Cambridge Analytica case is only the tip of the iceberg, as it adds up to a number of similar incidents that have occurred during the last decade. For example, personal information of 57 million Uber users and of 600,000 drivers was accidentally exposed in late 2016, while unauthorized access to “high-profile” Instagram user accounts took place in 2017. Back in 2010, it was found that several Facebook apps were transmitting user identifiers. Moreover, in 2013 a large-scale data breach at Yahoo’s infrastructure occurred, which affected 3 billion user accounts. Despite their adverse effects, these incidents have raised awareness regarding the privacy risks that are associated with users’ data on the Internet. In this context, individual users and our society as a whole are deeply concerned about the privacy implications of Internet services.
Most of the privacy and data protection vulnerabilities of online services stem from their centralized model for data collection, storage, and processing. This centralized model makes it extremely difficult for individuals to ensure that their sensitive data (e.g., location, purchase behavior, interactions in social media, browsing history) are used solely for the purposes that they are originally provided. In the centralized approach, end-users are forced to entrust their data to a third party, which has the power to abuse the data or even share it with other parties.
A decentralized approach to personal data management is therefore proposed as a remedy to the above-listed challenges. The decentralized approach does not rely on a single party for data collection and processing, but rather provides end-users with fine-grained control over their personal data. It also enables new models of trust, governance and data management, which make users active participants in the collection, processing, and use of their data in various applications. In the scope of a decentralized approach to data management, sensitive data remains under the control of the user, who decides when and with whom to share his/her data.
The implementation of the decentralized approach to personal data management is not a purely theoretical concept. We are already witnessing practical implementations, which are propelled by the rise of distributed ledger (i.e., blockchain) infrastructures. The distributed ledgers enable data control by the peer nodes of the blockchain network rather than aggregating and processing data centrally. As a prominent example, Dock.io is providing one of the world’s first decentralized social networks, which aims at alleviating the proclaimed privacy vulnerabilities of mainstream social networks. As another example, the Enigma project provides scalable privacy mechanisms over any blockchain infrastructure. Enigma promotes a unique and disruptive approach to data processing, which employs advanced mathematics in order to allow execution of queries over encrypted data without ever decrypting them. In this way, it guarantees privacy and data protection at all times, in addition to ensuring decentralized data ownership and control.
The decentralization of data ownership and control also enables entirely new business models that rely on end-users’ participation in the data management process. In particular, each user can be incentivized to approve access to their data, which is the foundation for a personal data market. Interested stakeholders can then ask for permission to access an individual’s data, a process that may involve granting monetary (or other) benefits to the end-user in exchange for access. Moreover, end-users should be able to negotiate a third party’s access to their data, either by asking for higher rewards or by requesting a higher privacy or data protection level (e.g., use of a reduced dataset with less sensitive data).
The main characteristic of a personal data market is that it alleviates the “silo” nature of the personal datasets used in most current applications. Nowadays, personal datasets are provided by end-users for use within specific applications. It is neither technically easy nor, in many cases, legally permissible to reuse and repurpose personal datasets across different applications. A personal data market would lift this limitation, as data would always be accessed and shared following the end-users’ consent. Data processors would therefore be able to access and repurpose datasets according to the needs of different applications, provided that citizens give their consent.
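The consent-driven access model described above can be sketched as a toy registry. Everything here is illustrative: the class, method names, and reward bookkeeping are assumptions for exposition, not any specific blockchain or marketplace API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Toy model of user-controlled data access in a personal data market."""
    grants: dict = field(default_factory=dict)   # (user, processor) -> consented purpose
    rewards: dict = field(default_factory=dict)  # user -> accumulated reward

    def grant(self, user, processor, purpose, reward=0):
        # The user approves access for one stated purpose, optionally for a reward.
        self.grants[(user, processor)] = purpose
        self.rewards[user] = self.rewards.get(user, 0) + reward

    def revoke(self, user, processor):
        # Consent can be withdrawn at any time; the data stays under user control.
        self.grants.pop((user, processor), None)

    def can_access(self, user, processor, purpose):
        # Repurposing fails closed: access is allowed only for the consented purpose.
        return self.grants.get((user, processor)) == purpose
```

A real marketplace would add authentication, auditability (e.g., anchoring grants on a ledger), and negotiation over rewards and data granularity, but the fail-closed purpose check is the essential property.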
Personal data markets could be the next evolutionary step towards a transparent and privacy-preserving use of sensitive data. However, they are currently at a research stage due to both technological and regulatory barriers. On the technology front, the advent of blockchains holds the promise of facilitating decentralized data management. On the regulatory front, the advent of the GDPR provides a framework for regulating the operation of personal data markets. Note, however, that the human aspects of personal data markets should also be researched, including the impact of personal preferences and of the type of personal data being accessed.
GDPR is the European Union’s new data protection law. As already outlined, it can serve as the framework that regulates the operation of the emerging personal data markets. It takes effect on May 25, 2018, i.e. later this month. GDPR replaces the Data Protection Directive (the “Directive”), which has been in effect since 1995. While it preserves many of the principles established in the Directive, it also gives individuals greater control over their personal data and imposes many new obligations on organizations that collect, handle, or analyze personal data. As such, it is better suited to supporting personal data markets. At the same time, it gives national regulators the power to impose significant fines on organizations that breach the law.
Despite being an EU regulation, GDPR has received global attention because it is considered a role model for dealing with privacy issues on a global scale. It is therefore expected that GDPR-like rules will be adopted in several other countries over time. Currently, it applies to organizations that collect and process data within the EU, as well as to the processing of personal data of EU residents by organizations established outside the EU.
GDPR, Cambridge Analytica, blockchains for data management, and personal data markets are some of the concepts that will redefine the way personal data are handled on the Internet. In the next few years, we will witness radical changes in the collection, storage, and processing of personal data by established and emerging data providers. Recent announcements by Facebook on this front confirm as much, and many more are yet to come.
The FAA expects 30,000 remotely-piloted aircraft in the skies by 2015.
Are you ready for 30,000 unmanned aircraft flying our friendly skies? Whether you are or not, Congress and the President mandated that the FAA integrate drones into the national airspace system by 2015. Addressing the security and privacy concerns, while meeting the integration deadline, will likely require federal, state, and local officials to experiment with new regulatory models.
There are a number of inventive and creative options emerging for facilitating the integration of remotely-piloted aircraft domestically. These models offer new ways to manage and regulate this disruptive technology:
The Licensing Model. To reduce the risk of privacy creep, unmanned aircraft licensing could correspond with the type of tools and capabilities that are employed by the technology itself. If a local weather service wants to access the airspace to take atmospheric readings, a “Category A” license would be required. However, if the local emergency manager wants to access the skies to conduct a disaster assessment by a high resolution camera, a “Category B” license would be required. Commercial truck drivers use a similar model already. For example, trucks carrying toxic chemicals on public roads are required to follow different rules than trucks carrying timber.
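Under a capability-based licensing model like the one above, the license tier follows from the most privacy-sensitive sensor on board. A minimal sketch, assuming just the two categories the article names; the sensor list and tier rule are illustrative, not drawn from any actual FAA scheme:

```python
# Sensors that capture no personally identifying data (illustrative list).
LOW_RISK_SENSORS = {"atmospheric", "barometer", "anemometer"}

def required_license(sensors):
    """Return 'A' for benign payloads, 'B' once any sensor could identify people."""
    if all(sensor in LOW_RISK_SENSORS for sensor in sensors):
        return "A"
    # High-resolution cameras, thermal scanners, etc. fall through to here,
    # as do unknown sensors: the strictest tier is the safe default.
    return "B"
```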
The ‘Swiss Army’ Drone. A counter to the licensing model, which keeps each unmanned system’s use intentionally narrow, this model is meant to reduce the number of drones in the sky while increasing their technical capabilities. A single aircraft would be supported by a central entity, which then leases out its sensor feeds. Imagine a single drone that has a lens for the local news traffic channel, a lens for local law enforcement, a Light Detection and Ranging sensor for the state EPA, and a thermal scanner for the local fisheries association. This limits the number of drones in the sky while centralizing the point of regulation.
Shared Services Model. State and local jurisdictions already rely on memorandums of understanding and other shared service models to affordably address public needs. In the case of remotely-piloted aircraft, a central service center would manage the operations and oversight of the drone fleet. This is meant to encourage collaboration across jurisdictions and reduce regulatory inconsistency. A shared services model is also meant to reduce overhead and aircraft downtime. For example, the city of Baltimore could use a shared services drone to survey traffic on Monday and the District of Columbia could use it to assess erosion on the Potomac River on Tuesday. A version of this model is already in use by Customs and Border Protection, which loans UAVs to local law enforcement agencies.
The Payment Card Industry - Data Security Standards Model. To secure payment transactions, the Payment Card Industry enforces a rigorous set of information security controls and standards that focus on securing the networks that host and transfer credit card data. Like the PCI–DSS model, regulators could focus on securing data collected by drones, such as by encrypting feeds, hardening host servers, and obfuscating data, instead of focusing on the aircraft itself. This data-centric approach emphasizes the privacy of those exposed to drone surveillance.
A Smartphone that Flies. While much of the conversation has focused on airspace, the reality is that an unmanned platform is essentially a flying computer. During a recent public panel on drones and domestic surveillance, computer security expert Bruce Schneier noted that “drones are really just . . . mobile computers,” which means they have the same strengths and weaknesses as computers. Viewing drones this way provides policy makers a variety of security frameworks to reference for standards and regulations. These frameworks are accompanied by large networks of information security professionals in the form of the ISC2, CompTIA, and ISACA.
While debating the future of domestic unmanned systems, Schneier commented that everyday use of drones may seem far off, but “today’s expensive and rare becomes tomorrow’s commonplace.” Security and privacy issues resulting from remotely-piloted aircraft in our skies should be of concern to today’s policy makers and regulators. However, there are innovative and practical solutions that can be tailored to address the challenges while still harnessing their benefits.
Matt Caccavale is a senior consultant and GovLab Innovation Fellow at Deloitte & Touche LLP. He specializes in security and privacy for public-sector clients.
Samra Kasim is a senior consultant and GovLab Innovation Fellow at Deloitte Consulting LLP. Her research interests lie at the intersection of policy and technology.
Data Security refers to the set of practices and standards we use to protect digital information from accidental or unauthorized access, changes, and disclosure during its lifecycle. Data security can also include organizational practices, policies, frameworks, and technologies that can protect data against cyberattacks, malicious intrusions, or unintentional data leaks.
The practice of securing data encompasses the physical security of hardware and network assets containing the protected data, administrative controls and policies, and logical defense mechanisms for the software accessing the data.
This article will explain data security from a 1,000-foot view. We’ll look at:
- Why we have to secure our data
- The components you’ll need for a security strategy to actually work
Why is data security necessary?
Data security is a tricky subject, one that’s often treated as an afterthought. This approach leaves unprepared organizations vulnerable to cyberthreats, and they often realize it too late. As the U.S. Cyber Chief puts it,
“Either you know you’ve been hacked, or you’ve been hacked and you don’t know you’ve been hacked”.
Here are a few numbers to describe the security threats last year:
- 43% of data breach victims were small business organizations (Verizon)
- The average cost of a data breach is $3.9 million (IBM)
- A cyber-attack takes place every 39 seconds—that’s 2,244 times per day. (University of Maryland)
- Damages from cybercrime are projected to reach $6 trillion annually by the year 2021. (Cybersecurity Ventures)
- Cybersecurity unemployment is at 0% and over one million jobs are unfilled (CIO, Bureau of Labor Statistics)
Data security strategy in 3 components
An extensive data security strategy should contain the following key elements:
Risk assessment & the ABCs of data security
The cybersecurity landscape is evolving all the time. Its dynamic nature means there’s no silver bullet solution to data security and risk management. Therefore, the first step to develop a data security strategy is to identify and understand your data security posture.
To achieve this, use these components—the ABCs of data security:
- Align your organizational goals with IT. Understand the security threats and quantify the business impact of the underlying risks.
- Build future-proof security capabilities that can help your organization mitigate risk in the near future, accounting for the changing market dynamics, scale of operations, technologies in use, and the global cybersecurity risk landscape.
- Create an optimal tradeoff between your resource investments across your people, processes, and technologies. Advanced security solutions cannot work if you do not have a culture of security awareness, sufficient governance controls, and the necessary skilled talent in-house.
- Develop a thorough data security and risk management program to mitigate the security challenges.
Understanding cloud security
Cloud migration is inevitable for any organization that must scale their hardware resources to meet dynamic user demands—without having the necessary CapEx and in-house talent to manage the infrastructure.
When migrating sensitive business information and data workloads to the cloud, organizations must protect the data at rest, in transit, and during processing. A comprehensive data security strategy for cloud-based data workloads should contain the following elements:
- Classify your cloud infrastructure. Choose between private, public, and hybrid cloud for data workloads based on security, cost and performance requirements.
- Encrypt data. To avoid data leaks while data moves across the Internet, encryption ensures that only the intended recipients with the decryption keys can make sense of your business information.
- Know your responsibilities. Cloud vendors offer a shared responsibility model for data security: business organizations must apply strong governance controls and encrypt data, while vendors protect the IT environment from external attacks and vulnerabilities.
- Adopt regulatory standards. Frameworks such as HIPAA and ISO 27000 Series impose the bare minimum data security and privacy requirements for tightly regulated organizations handling sensitive data in the cloud. Many organizations lose legal battles over non-compliance to such stringent regulations. That’s why it’s crucial to carefully adopt security tools and processes for cloud-based data workloads that guarantee compliance to applicable frameworks.
Security awareness and culture
Data security practices are only as good as their weakest link, which often comes down to the human element. According to an IBM research study, the human element is responsible for 95% of all security incidents.
You can enhance data security by strengthening the culture of security and awareness within your organization:
- Educate your employees on cybersecurity, irrespective of their technical or professional background.
- Follow the principle of least access privilege as an organizational policy.
- Employ and engage dedicated security professionals, especially in the executive decision-making sections of your company. A cybersecurity perspective on the tech-business and financial decisions at the executive level is valuable in guiding the growth toward strengthened security posture.
- Automate ITSM and governance functions to simplify routine tasks that may open the doors to network intrusions and unauthorized data access. Keep track of the right IT Service Management (ITSM) metrics to understand the security performance of your organization.
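The principle of least access privilege mentioned above reduces to a deny-by-default check: a role carries only the permissions it needs, and anything unlisted is refused. A minimal sketch; the role and permission names are invented for illustration:

```python
# Each role gets only the permissions it needs (illustrative names).
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "manage:users"},
}

def is_allowed(role, permission):
    # Unknown roles and unlisted permissions are denied by default.
    return permission in ROLE_PERMISSIONS.get(role, set())
```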
Stay ahead of the risk with technology
Finally, it’s important to keep ahead of the curve. Use the latest and greatest technology solutions the enterprise IT security world offers. If that feels unnecessary, consider this: cybercriminals increasingly use sophisticated means of hacking into your cloud and on-premises data center networks, and human errors will always exist despite all necessary governance controls being in place.
The next generation of data security solutions rely on advanced AI capabilities for proactive and intelligent security defense, understanding the dynamic nature of IT environments and data workloads in detecting potentially anomalous activities on your network.
Data breaches occur when a cyber attacker illegally accesses confidential information. Investing in cybersecurity awareness training and a detection response solution is the best prevention against a data breach.
What Is a Data Breach?
A data breach occurs when an unauthorized party accesses private data.
Data breaches are most often intentional and part of a campaign by cybercriminals who work to steal valuable personal information. Once the information has been accessed, criminals leverage the data so they may profit at the expense of the victim.
In some instances, data breaches are accidental. When an employee accidentally exposes information on the Internet, criminals often pounce to benefit from the vulnerability.
Data breaches often involve the theft of sensitive personal or corporate information. Targets may include:
- Personal data, including social security numbers, that enables identity theft.
- Financial information, including banking credentials and credit card numbers, that enable fraudulent purchases.
- Personally identifiable information, including email addresses, phone numbers, and social media accounts, then enable phishing attacks.
- Operational data, including contracts, suppliers, and the details of key business relationships, that compromise an organization’s credibility.
What Causes a Data Breach?
In the 2019 Data Breach Investigations Report, Verizon identified key patterns used by cyber attackers to steal data.
These patterns all result in data loss and data breaches. They include:
- Data leaks
- Lost, stolen, and cracked passwords
- Vulnerability exploits
- Poor configuration management
- Third and fourth-party data breaches
- Universal Plug and Play protocols
Why Do Attackers Create Data Breaches?
Data theft is financially driven. Once attackers have acquired data, they use it for profit. Their actions may include selling the data or committing fraud by impersonating individuals.
Data breaches require organization on the part of the attacker. Once the data has been breached, attackers must assess the data to identify the valuable information. They prioritize login credentials, financial information, social security numbers, names, and phone numbers.
After the data has been prioritized, attackers use several channels to make money.
Data Sales on the Black Market
Attackers typically resell data once it has been acquired. By anonymously selling data online, attackers enable 3rd parties to conduct fraudulent activities using an organization’s or person’s leaked data.
Attacks aren’t limited to simple fraud. In some cases, attackers purposely steal valuable intellectual property as part of an espionage campaign, a nation-sponsored attack, or a straight-up play for riches. In 2018, intellectual property theft accounted for $500 billion, or a full third of the overall cost of cybercrime.
Alternatively, attackers may use stolen data leverage to encourage a ransom payment.
In a more recent advancement in data breaches, ransomware-as-a-service evolved to enable affiliates to use ransomware tools to carry out ransomware attacks. Ransomware-as-a-Service also decentralizes attacks, making it difficult for authorities to trace attacks.
The creators of these tools take a percentage of each successful ransom payment. Affiliates generally collect up to 80% of each payment, while the developer collects 20%.
Protecting Your Organization from Data Breaches
Data breach prevention doesn’t come without a cost. However, the cost is significantly lower than data recovery after a breach.
Prevention requires an investment in education and solutions.
Cybersecurity Education is Key
Educating employees about cyber attacks is a key preventive measure. There are indicators to watch for – like unexpected links, strange spelling in emails, and odd attachments – that suggest a communication isn’t legitimate. It does, however, take awareness and time to learn to notice these indicators. By educating employees about what to watch for in email and text messages, your organization can greatly reduce the risk of a data breach.
Create Policies for Configuration & Password Management
Configuration management ensures cloud services do not inadvertently expose data to the wider Internet. By carefully managing and tracking configuration changes, your organization can identify and track data breaches and data leaks. Most frequently configuration management applies to servers, databases and storage systems, operating systems, networks, software, and apps.
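One simple way to track configuration changes is to fingerprint each configuration and compare digests over time. A sketch using Python’s standard library; the canonical-JSON approach is one common choice, not a prescribed standard:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Stable digest of a configuration; any change produces a new digest."""
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def has_drifted(baseline_digest: str, current_config: dict) -> bool:
    # Flag any deviation from the approved baseline for review.
    return config_fingerprint(current_config) != baseline_digest
```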
Password security is an easy prevention for brute force attacks, where attackers try random password combinations to access a network. Organizations can support their employees by offering password managers so that strong passwords are easy to remember and hard to guess. Multifactor authentication adds an additional level of security by requiring a secure password and a one-time access code. That means an attacker would need access to many secure points to breach your organization’s data.
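Beyond strong passwords and multifactor authentication, a basic defense against brute force attacks is to lock an account after repeated failures. A toy throttle; the threshold and reset policy are illustrative, and production systems add time windows and alerting:

```python
class LoginThrottle:
    """Lock an account after too many consecutive failed logins."""

    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = {}  # account -> consecutive failure count

    def record_failure(self, account):
        self.failures[account] = self.failures.get(account, 0) + 1

    def record_success(self, account):
        # A successful login clears the counter.
        self.failures.pop(account, None)

    def is_locked(self, account):
        return self.failures.get(account, 0) >= self.max_failures
```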
Solutions Provide Real-Time Monitoring Insights
Perhaps the greatest peace of mind comes from tools that monitor your computers, laptops, and servers for changes. These solutions scan for vulnerabilities and identify suspicious activity. Whether it’s unsanctioned software (like video games) or an attempt to breach the network (via phishing or malware), real-time monitoring solutions identify and catch threats before they become obstructions to data security.
CYDEF Prepares MSPs for the Worst-Case Scenario
CYDEF’s suite of solutions protects your business – and your client’s data – from catastrophic loss related to data breaches. The combined power of our cybersecurity education solution and continuous endpoint monitoring enhances your security posture, and guarantees attacks are detected before data is revealed to the public. CYDEF’s free proof-of-value can provide a real-time sense of all that our solutions are capable of. Get your free trial today!
|
<urn:uuid:fa4ccc5c-16e5-477e-bef7-7b8e2dc42ea7>
|
CC-MAIN-2022-40
|
https://cydef.ca/blog/data-breaches-and-how-to-prevent-them/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00629.warc.gz
|
en
| 0.896361 | 1,149 | 3.53125 | 4 |
Researchers from the University of Luxembourg, in cooperation with the University of Strasbourg, have developed a computational method that could be used to guide surgeons during brain surgery.
Surgeons often operate in the dark.
They have a limited view of the surface of the organ and typically cannot see what lies hidden inside.
Quality images can routinely be taken prior to surgery, but as soon as the operation begins, the positions of the surgeon’s target and of the risky areas to be avoided change continuously.
This forces practitioners to rely on their experience when navigating surgical instruments to, for example, remove a tumor without damaging healthy tissue or cutting through important blood supplies.
Stéphane Bordas, Professor in Computational Mechanics at the Faculty of Science, Technology and Communication of the University of Luxembourg, and his team have developed methods to train surgeons, help them rehearse for such complex operations and guide them during surgery.
To do this, the team develops mathematical models and numerical algorithms to predict the deformation of the organ during surgery and provide information on the current position of target and vulnerable areas.
With such tools, the practitioner could virtually rehearse a particular operation to anticipate potential complications.
As the brain is a composite material, made up of grey matter, white matter and fluids, the researchers use data from medical imaging, such as MRI to decompose the brain into subvolumes, similar to lego blocks.
The colour of each lego block depends on which material it represents: white, grey or fluid. This colour-coded “digital lego brain” consists of thousands of these interacting and deforming blocks which are used to compute the deformation of the organ under the action of the surgeon.
The more blocks the researchers use to model the brain, the more accurate is the simulation. However, it becomes slower, as it requires more computing power.
For the user, it is therefore important to find the right balance between accuracy and speed when he decides how many blocks to use.
The crucial aspect of Prof Bordas’ work is that it makes it possible, for the first time, to control both the accuracy and the computational time of the simulations.
“We developed a method that can save time and money to the user by telling them the minimum size these lego blocks should have to guarantee a given accuracy level.
For instance, we can say with certainty: if you can accept a ten per cent error range then your lego blocks should be maximum 1mm, if you are ok with twenty percent you could use 5mm elements,” he explains.
“The method has two advantages: You have an estimation of the quality and you can focus the computational effort only on areas where it is needed, thus saving precious computational time.”
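The accuracy/speed trade-off has a simple form when the discretization error shrinks like a power of the block size, error ≈ C·h^rate. Given one reference run, the block size needed for a target error follows by rescaling. This sketch illustrates the idea only; the convergence rate and numbers are generic assumptions, not taken from the paper:

```python
def required_block_size(h_ref, err_ref, err_target, rate=2.0):
    """
    If error ~ C * h**rate, the block size h that hits err_target satisfies
    err_target / err_ref = (h / h_ref)**rate, hence the rescaling below.
    """
    return h_ref * (err_target / err_ref) ** (1.0 / rate)
```

For example, with a quadratic rate, halving the block size cuts the error by a factor of four, which is why tolerating a larger error permits much coarser (and faster) models.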
Over time, the researchers’ goal is to provide surgeons with a solution that can be used during operations, constantly updating the simulation model in real time with data from the patient.
But, according to Prof Bordas, it will take a while before this is realized.
“We still need to develop robust methods to estimate the mechanical behavior of each lego block representing the brain.
We also must develop a user-friendly platform that surgeons can test and tell us if our tool is helpful,” he said.
Source: Thomas Klein – University of Luxembourg
Image Source: NeuroscienceNews.com image is credited to Legato Team / University of Luxembourg.
Original Research: Abstract for “Real-time Error Control for Surgical Simulation” by Huu Phuoc Bui; Satyendra Tomar; Hadrien Courtecuisse; Stephane Cotin; and Stephane Bordas in IEEE Transactions on Biomedical Engineering. Published online May 23 2017 doi:10.1109/TBME.2017.2695587
The National Autism Association states that autism is a bio-neurological developmental disability that generally appears before the age of three. Autism impacts the normal development of the brain in the areas of social interaction, communication skills, and cognitive function.
Autism is diagnosed four times more often in boys than girls. Its prevalence is not affected by race, region, or socio-economic status. Since autism was first recognised, the incidence has climbed to a rate of 1 in 59 children in the U.S.
The rate of autism has steadily grown over the last twenty years. Growing commensurately alongside it is an increasing awareness and acceptance of autism in the media. According to modern pop culture, autism may in fact be a superpower. There seem to be a lot of doctors on TV now who have autism, like Dr. Temperance Brennan on Bones or Dr. Sheldon Cooper from The Big Bang Theory. We also get the occasional action hero such as Ryan Gosling’s The Driver or Lisbeth Salander from The Girl With the Dragon Tattoo. And of course, there are the classic American underdog heroes, Raymond Babbitt and Forrest Gump.
Out here in the real world, people on the autism spectrum are all around you. Most do not have Salander-like superpowers, but rather are everyday Janes and Joes who work regular jobs and live their lives. If CDC statistics are accurate, there are nearly 6.8 million people who fall at varying points on the autism spectrum in the United States.
In this week’s episode of InSecurity, Matt Stephenson sits down with respected security writer Kim Crawley to talk about the current state of the cybersecurity world, some of the issues with locking down IoT, drumming… and Kim’s recent diagnosis as being on the autism spectrum.
Take a walk with Kim as she shares her experience in the security industry and why being on the autism spectrum is just another facet of her personality.
About Kim Crawley
Kimberly Crawley spent years working in consumer tech support. Malware-related tickets intrigued her, and her knowledge grew from fixing malware problems on thousands of client PCs. By 2011, she was writing study material for the InfoSec Institute’s CISSP and CEH certification exam preparation programs.
She’s since contributed articles on information security topics to CIO, CSO, Computerworld, SC Magazine, and 2600 Magazine. Her first solo-developed PC game, Hackers Versus Banksters, was featured at the Toronto Comic Arts Festival in May 2016. She now writes for Tripwire, AT&T and BlackBerry Cylance.
About Matt Stephenson
Insecurity Podcast host Matt Stephenson (@packmatt73) leads the Security Technology team at Cylance, which puts him in front of crowds, cameras, and microphones all over the world. He is the regular host of the InSecurity podcast and host of CylanceTV.
Twenty years of work with the world’s largest security, storage, and recovery companies has introduced Stephenson to some of the most fascinating people in the industry. He wants to get those stories told so that others can learn from what has come before.
The Internet is awash with advertising. Companies like Google & Facebook earn billions of dollars a year serving ads. The news sites you likely visit serve ads. Blogging moms earn livings serving ads. But could those ads pose a threat to users and, by extension, your organization?
What is Adware?
At its most basic level, adware is just software that generates revenue for a developer by serving online advertisements. For example, Gmail is supported by adware. While you check your email, ads are served that Google thinks are relevant to you.
So far, so good.
The problem occurs when adware starts to become malware.
Malware vs. Adware
Some malware functions like adware, serving ads that you must view in order to use a piece of software.
Except, in many cases, malware installs itself without the knowledge or consent of the user. Often, malware presents unwanted advertisements to the user, forcing them to interact with the ad just to close it. You may have seen these kinds of ads before – they’re the ones with the uncloseable boxes that force you to close the browser tab, curse the dregs of society, and move on.
In other cases, it may track user activity and display ads in places where it shouldn’t have access.
Worse, sometimes this malware becomes spyware, and actually observes a user’s behavior, before reporting it back to the software developer.
At best, these things can be a mild nuisance.
At worse, they expose a vector for attack.
One way this malware can be installed on a machine is by downloading infected software, perhaps from a seemingly legitimate mirror site or via TOR.
In other instances, they can be installed via a Drive-by-Download event.
In still others, they may be installed via completely innocuous activities like reading, say, the New York Times or listening to Spotify.
In these instances, the user doesn’t click anything. They may not even interact with the ad directly.
Enter the world of malvertising (malicious advertising).
With malvertising, malicious code is hidden inside an online (often display or popup) ad and, when your browser makes a request, the malicious payload is delivered alongside the other (legitimate) requests.
Note: In case you’re unaware, it’s not uncommon for a single web page to make dozens of requests to third-party applications, libraries, or even iframes. Malvertising works because malicious code can be hidden in one of these kinds of requests.
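To make that concrete, here is a hedged sketch (the page content and domain names are hypothetical) of enumerating the third-party hosts a page would cause a browser to contact — each one is a separate request, and a potential malvertising vector:

```python
import re
from urllib.parse import urlparse

def third_party_hosts(html, page_host):
    """Collect hostnames referenced by src/href attributes that differ
    from the page's own host -- each is a request the browser will make."""
    hosts = set()
    for url in re.findall(r'(?:src|href)="(https?://[^"]+)"', html):
        host = urlparse(url).netloc
        if host and host != page_host:
            hosts.add(host)
    return hosts

# Hypothetical page markup: one first-party image, two third-party requests.
page = (
    '<img src="https://cdn.example-news.com/logo.png">'
    '<script src="https://ads.example-adnetwork.com/serve.js"></script>'
    '<iframe src="https://tracker.example-metrics.net/pixel"></iframe>'
)
print(third_party_hosts(page, "www.example-news.com"))
```

A real page audit would use the browser's network log rather than a regex, but the idea is the same: every third-party host in that set is content you did not author.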
The malvertisement’s code may register an iframe that navigates to another page, where malware is hosted. The malware then probes the user’s system for vulnerabilities. Finding one, it installs its payload and the user’s system is compromised.
How Malvertisers Get Away with It
One of the hardest things about combating malvertising is its ability to pose as a legitimate ad.
Attackers effectively enter the same bidding competitions that legitimate advertisers do. They bid with real money in real auctions using essentially “booby-trapped” ads.
After the ad wins an auction, it gets propagated to the whole ad network, just like a legitimate ad.
Moreover, they can end up in rotation with regular ads for some time before they’re identified and snuffed out.
Unfortunately, they can also be hard to catch because they look and function like legitimate ads. Minus the exploity part.
How to Protect Yourself & Your Organization
First things first, make sure you have control over what kind of software users in your organization are allowed to download. At the very least, consider restricting download authorization to a limited few people in your organization. When a user needs a new piece of software installed, they will have to file a ticket or request help from someone with the appropriate authority to download the software.
Sure, your users will find that annoying.
But, it’s the best way to make sure they don’t inadvertently download something that may contain adware.
Secondly, make sure you’ve got good protections in place, including virus protection, anti-exploit, and/or anti-malware software. At a minimum, install ad blockers on user browsers and deploy tools that scan files before they reach user machines.
These practices reduce the vectors available for malicious advertising to take root.
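As one small illustration of download scanning, a hash-based blocklist check might look like the sketch below (the "known malicious sample" is a placeholder, not a real signature — real scanners combine signatures with heuristics and behavioral analysis):

```python
import hashlib

# Compute a blocklist entry from a stand-in payload; in practice these
# hashes would come from a threat-intelligence feed.
KNOWN_BAD = {hashlib.sha256(b"known malicious sample").hexdigest()}

def is_safe(payload):
    """Return False if the payload's SHA-256 digest is on the blocklist."""
    return hashlib.sha256(payload).hexdigest() not in KNOWN_BAD

print(is_safe(b"harmless installer bytes"))  # True
print(is_safe(b"known malicious sample"))    # False
```

Hash matching only catches exact copies of known samples, which is why it is one layer among several rather than a complete defense.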
Thirdly, make sure you provide your users with the proper education needed to understand the risks they – and the organization – are exposed to. Oftentimes, users are merely unaware of the threats that are out there.
You want to make sure you educate them. You may not only save the organization, but also their personal data if they take some of those lessons to heart when they go home for the evening.
Finally, what makes malvertising-delivered malware so bad is its ability to infiltrate an organization so surreptitiously.
While media providers are responsible for – and take action towards – preventing malvertisers on their networks, these attackers are hard to catch. Having a good SIEM that mines system logs, monitored by a security operations team with the expertise to distinguish routine events from genuine incidents and prevent the latter, will help ensure that you catch threats before they become problems.
In the second problem, we use the second classifier shown here. The first problem can be solved using a Perceptron, or artificial neural network; the same problem can also be solved with another classifier called the Support Vector Machine (SVM). The objective is to draw a line between the two classes so that the distance between them is maximized. Many separating lines are possible, but finding the optimal placement maximizes the distance – the margin – between the two classes, and a model built this way is known as a Support Vector Machine. The data points closest to the separating line are referred to as support vectors. This is an important model in machine learning.
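As a toy illustration of the margin idea (not the full SVM optimization), consider two separable classes on a number line: the maximum-margin boundary sits midway between the closest pair of opposite-class points, and those two points are the support vectors.

```python
# Hypothetical 1-D samples for two linearly separable classes.
class_a = [1.0, 1.5, 2.0]
class_b = [5.0, 6.0, 7.5]

sv_a = max(class_a)           # support vector from class A (closest to B)
sv_b = min(class_b)           # support vector from class B (closest to A)
boundary = (sv_a + sv_b) / 2  # maximum-margin separating point
margin = sv_b - sv_a          # width of the gap between the classes

print(boundary, margin)  # 3.5 3.0
```

Any boundary between 2.0 and 5.0 separates the training data, but only 3.5 maximizes the distance to both classes — which is exactly the property the SVM optimizes in higher dimensions.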
Building the Internet of Humanitarian Things (IoHT)
The Internet of Things (IoT) represents our ability to connect devices, machines and infrastructure across wireless networks and enable them to send and receive data. IoT has allowed us to automate processes and connect our world in ways we never thought possible, transforming productivity and creating immense potential for applications across every industry. In the humanitarian world, this potential could help to save more lives and reduce the impact of disasters.
Technological advancements are already greatly enhancing humanitarian operations – as we are seeing in the use of electronic ID cards to manage claims for food assistance in the Philippines’ Food for Assets Programme and the deployment of iris scan technology for the purchasing of food in refugee camps in Syria, for example.
So when I consider how Motorola Solutions’ Industrial Internet of Things is proving life-saving by ensuring that critical infrastructure – such as power stations and water utilities – is kept up and running, then I cannot help but postulate what opportunities these solutions present for aid organisations.
Connecting People, Equipment and Infrastructure
The remote monitoring and control of infrastructure across resilient, secure wireless networks gives intelligence to critical assets, enabling them to detect malfunctions, fluctuations in temperature or leaks and raise alarms automatically to avert disaster. The application of such solutions abounds - from early warning systems that trigger alarms or broadcast pre-recorded messages across multiple control centres, to automatically adjusting well pumping, controlling water quality or regulating system pressure to maximise efficiency. Municipal infrastructure - such as motorways and street lights - is already being managed and controlled remotely, while real-time weather and soil data is being incorporated into the remote management and control of crop irrigation, to reduce waste and boost yields.
Our ability to attach sensors to virtually anything – people, machines, vehicles and infrastructure – enables us to improve the flow of real-time information and optimise efficiency way beyond critical infrastructure. Motorola Solutions has introduced sensors in innovative ways which have had a significant impact on public safety, enabling the command centre to receive notification when a police office pulls a gun from a holster, for example. Information is also relayed regarding the officer’s heart rate, registering increasingly intense activity which can save time and potentially lives. Our Augmented Reality (AR) headsets make it possible to provide a bird’s-eye, 3D view of an incident, combining holographic and virtual images that allow tactical response to be determined miles away from an incident.
We have the potential to share this technological expertise to help humanitarians benefit from a similar transformation in operational efficiency and the way in which data is used and managed.
Taking Wearable Technology, AR and iOT to Aid Workers
Motorola Solutions has invested in a number of organisations and start-ups to promote technological innovation that enables a smarter, more connected response. Here are just a few examples of the possibilities these partnerships present:
We’re constantly facing new challenges and imagining new ways to improve the safety and operational impact of first responders. If your organisation is looking to collaborate or pursue “IoHT” opportunities to make humanitarian operations smarter and more connected, please drop me a line.
In 2004, a few unmanned vehicles showed up at the starting gate of the lengthy course across the Mojave Desert — this was the inaugural DARPA Grand Challenge. It signified the beginning of the technological race to develop a practical self-driving car, which sparked a global movement that continues even today.
The networking community too embarked on a similar journey to provide production-ready, economically feasible Self-Driving Networks. Self-Driving Networks are autonomous networks that use Artificial Intelligence (AI) and Machine Learning (ML) to program themselves and carry out prescribed intentions, eliminating the complex programming and management tasks required to run networks today. At the same time, the proliferation of data breaches and cyberattacks in today’s networking environment has increased, leading to extensive repercussions across businesses. As such, ML-based security solutions have become a major cybersecurity investment for organizations.
By Rohit Sawhney Systems Engineering Manager at Juniper Networks India
Leveraging AI to enhance your network security
Many experts believe that AI and ML will dominate cybersecurity in the future. Last year, at the Gartner IT Symposium/Xpo, analysts discussed how these two technologies will augment human decision-making, emotions, and relationships.
Rapid technological advances are enabling AI to disrupt the networking industry with new insights and automation. AI in the networking domain can reduce IT costs, improve productivity and efficiency, and deliver the best possible user experience. Together, machine learning and AI could be key enablers, helping to reduce human effort and make cybersecurity faster, more consistent and more accurate.
In fact, many Enterprises are already making greater investments to integrate solutions with machine learning algorithms into their existing security infrastructure. While traditional antivirus programs are still widely used to detect and neutralize threats, they do not have the capability to detect and mitigate sophisticated threats. ML-based security solutions like the Juniper ATP can help monitor potential threats in the network through threat intelligence features – allowing IT security teams to detect any suspicious activity before the attack occurs.
AI comes to the rescue as it reduces the number of monotonous tasks that take up an engineer’s time, while ensuring they are always completed accurately, regardless of frequency and quantity. This allows engineers to focus on other business strategic tasks while maintaining network health and safety.
Building an AI system for your network
In a recent survey conducted by KPMG for its report, Living in an AI World 2020, analysts found that 92% of respondents agree that leveraging a spectrum of AI technologies will make their companies run more efficiently. However, in the networking domain, IT simply can’t meet today’s stringent network requirements without a robust AI strategy. The following are some technology elements that an AI strategy should include:
- Data – Needless to say, without adequate and relevant data, ML algorithms are as good as the data one ingests in them. The more diverse the data collected, the smarter the AI solution becomes. The collection of real-time data with accuracy and speed is just as important. Edge devices like routers, mobiles, and IoT enabled solutions not only need to collect the data but get it processed quickly in a nearby edge computer or on the cloud, using AI algorithms, to make the network more adaptive.
- Domain-specific expertise – Unless you are a domain expert, it is simply impossible to replicate an AI system to diagnose wireless problems. Placing the metadata at the center of these problems and breaking it into small fractions will enable the AI system to understand the complexity and get trained.
- Data science deep dive – Machine Learning and Big Data techniques empower the data by extracting insights from the multiple chunks of metadata, divided into several domains.
- Virtual network assistants – When Netflix or Prime Video recommends movies based on the ones you’ve been watching, they’re using an ML technique known as collaborative filtering. Apart from recommendations, collaborative filtering can also be applied to sift through large data sets, identifying and correlating those that form an AI solution to a problem.
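A minimal collaborative-filtering sketch on assumed toy ratings — find the most similar user by cosine similarity, then surface the items they rated that the target user has not seen:

```python
import math

# Hypothetical ratings: user -> {item: score}.
ratings = {
    "alice": {"item1": 5, "item2": 3},
    "bob":   {"item1": 5, "item2": 3, "item3": 4},
    "carol": {"item1": 1, "item3": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

target = "alice"
others = [u for u in ratings if u != target]
best = max(others, key=lambda u: cosine(ratings[target], ratings[u]))
unseen = set(ratings[best]) - set(ratings[target])
print(best, unseen)  # bob {'item3'}
```

Production recommenders use matrix factorization or learned embeddings over millions of users, but this neighbor-based form is the original idea behind "users like you also watched…".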
Benefits of AI/ML in the networking domain
ML, a subset of AI, is a prerequisite for any successful deployment of AI technologies. ML uses algorithms to parse data, learn from it, and make determinations or predictions without requiring explicit instructions. With that said, AI/ML can be leveraged for the following tasks in the networking domain:
- Cybersecurity practices: Machine Learning is critical in building a secure networking infrastructure and is considered a top priority for most organizations today. ML enables security automation and helps in data classification, processing, filtering, and significantly managing and reducing the workload of the IT security team. Automating this task increases workload efficiency and reduces the risk of missing an important threat alert.
- Predict user experience to dynamically adjust bandwidth demand – When traffic spikes occur in today’s networks, it is difficult to distinguish a DDoS attack from widespread downloading of, say, Arijit Singh’s latest album. By leveraging ML algorithms that interpret copious amounts of traffic behavior data, the Self-Driving Network will be able to predict performance issues before users are affected. In this example, algorithms that scrape Twitter feeds can confirm the hypothesis: have hacking groups been threatening action against the enterprise, or have fans been demanding Arijit Singh’s album in the weeks leading up to the spike? The Self-Driving Network will analyze and adapt accordingly, either shutting down ports to isolate the DDoS attack or adding bandwidth to accommodate the surge in album downloads.
- Self-correct for maximum uptime – AI, through intelligent algorithms complemented by ML capabilities, enables systems to have a self-correction process in place to ensure maximum uptime. Powerful AI-driven networks can even capture data prior to a network event or outage, which accelerates troubleshooting.
- Instantly find root causes – AI can leverage multiple data-mining techniques to explore terabytes of data in a matter of minutes. This allows IT departments to instantly identify which network element – the OS, device type, access point or switch – is most related to a network problem, and in turn to accelerate problem resolution.
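The traffic-spike scenario above can be sketched as a simple baseline check — flag a sample as anomalous when it falls several standard deviations outside recent history (the threshold and traffic numbers are illustrative; real systems would correlate such a flag with external signals before deciding between "DDoS" and "popular download"):

```python
import statistics

# Assumed recent history of request rates (requests/sec).
baseline = [100, 110, 95, 105, 102, 98, 107, 101]

def is_anomalous(sample, history, threshold=3.0):
    """True when the sample deviates more than `threshold` standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(sample - mean) > threshold * stdev

print(is_anomalous(500, baseline))  # True  -- a spike worth investigating
print(is_anomalous(104, baseline))  # False -- within normal variation
```

A z-score check like this is the simplest possible detector; ML-based systems replace the fixed baseline with models that learn daily and seasonal patterns.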
About the Author
Rohit Sawhney is a Systems Engineering Manager at Juniper Networks India. He leads the team of Technical Consultants supporting Juniper’s North/East India & SAARC business. Prior to joining Juniper Networks, he worked with IBM India and has industry experience of over 20 years. Rohit is certified by Juniper Networks, Cisco and VMware. He holds a master’s degree in Computer Application from Sikkim Manipal University of Health, Medical and Technological Sciences and a Bachelor of Science in Electronics from Delhi University.
CISO MAG did not evaluate/test the products mentioned in this article, nor does it endorse any of the claims made by the writer. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same. CISO MAG does not guarantee the satisfactory performance of the products mentioned in this article.
Packet Tracer file (PT Version 7.1): https://bit.ly/2wG1I3n
Get the Packet Tracer course for only $10 by clicking here: https://goo.gl/vikgKN
Get my ICND1 and ICND2 courses for $10 here: https://goo.gl/XR1xm9 (you will get ICND2 as a free bonus when you buy the ICND1 course).
For lots more content, visit http://www.davidbombal.com – learn about GNS3, CCNA, Packet Tracer, Python, Ansible and much, much more.
The Routing Information Protocol (RIP) is one of the oldest distance-vector routing protocols and employs hop count as its routing metric. RIP prevents routing loops by implementing a limit on the number of hops allowed in a path from source to destination: the largest number of hops allowed is 15, which limits the size of networks that RIP can support.
RIP implements the split horizon, route poisoning and holddown mechanisms to prevent incorrect routing information from being propagated.
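The core distance-vector step can be sketched as follows (a simplified model for illustration: no timers, no split horizon, table entries are `(next hop, hop count)`, and 16 hops stands for unreachable as in RIP):

```python
INFINITY = 16  # RIP treats a metric of 16 as unreachable

def merge_update(table, neighbor, neighbor_table):
    """Adopt a neighbor's advertised route whenever it offers a
    shorter path (Bellman-Ford relaxation, one update at a time)."""
    for dest, hops in neighbor_table.items():
        new_cost = min(hops + 1, INFINITY)       # one extra hop via neighbor
        _, current = table.get(dest, (None, INFINITY))
        if new_cost < current:
            table[dest] = (neighbor, new_cost)
    return table

# Hypothetical router state: one known route, then an update from "r3".
table = {"10.0.0.0/8": ("r2", 3)}
update = {"10.0.0.0/8": 1, "172.16.0.0/16": 2}   # hop counts advertised by r3
print(merge_update(table, "r3", update))
```

Here the router switches its route to 10.0.0.0/8 from 3 hops via r2 to 2 hops via r3, and learns a new route to 172.16.0.0/16 — exactly the computation each RIP router repeats on every periodic update.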
In RIPv1, routers broadcast updates with their routing table every 30 seconds. In the early deployments, routing tables were small enough that the traffic was not significant. As networks grew in size, however, it became evident there could be a massive traffic burst every 30 seconds, even if the routers had been initialized at random times.
In most networking environments, RIP is not the preferred choice for routing as its time to converge and scalability are poor compared to EIGRP, OSPF, or IS-IS. However, it is easy to configure, because RIP does not require any parameters unlike other protocols.
RIP uses the User Datagram Protocol (UDP) as its transport protocol and is assigned the reserved port number 520.
Based on the Bellman–Ford algorithm and the Ford–Fulkerson algorithm, distance-vector routing protocols started to be implemented from 1969 onwards in data networks such as the ARPANET and CYCLADES. The predecessor of RIP was the Gateway Information Protocol (GWINFO), which was developed by Xerox in the mid-1970s to route its experimental network. As part of the Xerox Network Systems (XNS) protocol suite, GWINFO transformed into the XNS Routing Information Protocol. This XNS RIP in turn became the basis for early routing protocols, such as Novell’s IPX RIP, AppleTalk’s Routing Table Maintenance Protocol (RTMP), and the IP RIP. The 1982 Berkeley Software Distribution of the UNIX operating system implemented RIP in the routed daemon. The 4.2BSD release proved popular and became the basis for subsequent UNIX versions, which implemented RIP in the routed or gated daemon. Ultimately RIP had been extensively deployed before the standard written by Charles Hedrick was passed as RIPv1 in 1988.
The U.S. Air Force Research Laboratory and Harvard University’s Wyss Institute for Biologically Inspired Engineering have jointly developed a new three-dimensional printing process to manufacture stretchable and flexible electronics.
Hybrid 3D printing is an additive manufacturing process that combines conductive material with material substrate to form stretchable and wearable electronics, the Air Force said Tuesday.
In a recent demonstration, 3D printed flexible silver-infused thermoplastic polyurethane was integrated with microcontroller chips and LED lights.
The resulting devices were able to function while enduring stretching of more than 30 percent beyond their base size.
Dan Berrigan, a scientist at the AFRL Materials and Manufacturing Directorate, stated that the newly developed process holds potential for Air Force applications such as movement, temperature, fatigue and hydration monitoring from skin-worn electronics.
Succeeding phases of the development will focus on making a stretchable power source for the devices.
What Is A DSL Filter?
DSL filters are small components used on Digital Subscriber Line (DSL) connections, which deliver high-speed internet over standard telephone lines. To establish connectivity to the internet, the telephone lines are used in conjunction with a DSL modem.
DSL is called an always-on service because you never have to log on in order to access it. A DSL filter is a device that is installed in a DSL connection line. It comes in very handy because line interference can easily occur when both the telephone and the DSL service share a line.
Therefore, to assist with the reduction of line interference, a DSL filter is installed in a DSL connection line. In order to judge the installation and necessity of a DSL filter, it is important to look at the method which was used to install the Digital Subscriber Line.
For example, let’s suppose a splitter method is being used during the DSL service installation. In this case, it is not necessary to use a DSL filter. This is because the need to reduce line interference is reduced in this method. When you use a splitter that is generally installed by a technician it splits the telephone line into two lines. Therefore, the telephone is connected to one line and the other line is dedicated to the DSL modem.
However, it is important to note one thing. If a splitter device is not installed with the Digital Subscriber Line then it is necessary to use the DSL filter. This is because the telephone and the DSL connection would be using the same line which could become problematic as mentioned before.
It will lead to line interference which is going to cause issues such as poor internet connection and telephone problems as well.
How Does A DSL Filter Work?
Let’s talk about how a DSL filter actually works. Firstly, if you do not have a technician, you have to install the filter yourself. Basically, a DSL filter is installed in the telephone jack in the wall. In simple words, it is a connecting device that has an RJ11 connector on each end.
The only thing left for you to do is to disconnect the telephone line from the jack. After this, you have to connect the DSL filter to the RJ11 port in the wall jack. Lastly, you can connect the telephone line into the DSL filter.
One thing to keep in mind is that a DSL connection is different from a dial-up connection: it does not tie up your phone even though it shares the telephone line. By sharing the line, a DSL device offers a much faster connection than the older dial-up method. It is far more efficient.
The DSL connection sends digital signals where your telephone sends voice signals, using otherwise unused frequencies on the line to transmit data. This is the main reason why you can use both your telephone and internet connection on one line. If you do not use a splitter, installing a DSL filter will improve connection quality, since the voice and data signals travel over wires that run so close to each other.
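The filtering principle can be illustrated with a first-order low-pass response (a real micro-filter is a more elaborate passive network, and the cutoff here is an assumption placed near the top of the voice band): voice frequencies pass nearly untouched while the much higher DSL frequencies are strongly attenuated.

```python
import math

CUTOFF_HZ = 4000.0  # assumed cutoff near the top of the voice band

def gain(freq_hz, cutoff=CUTOFF_HZ):
    """Magnitude response of a first-order low-pass filter:
    |H(f)| = 1 / sqrt(1 + (f / fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff) ** 2)

print(round(gain(1000), 3))    # a voice tone: passes nearly unchanged
print(round(gain(100000), 3))  # a DSL-band frequency: heavily attenuated
```

The same curve read in the other direction explains the splitter: the modem side keeps the high frequencies the filter would throw away.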
Do I Need A DSL Filter?
What Are The Convincing Features Of A DSL Filter?
A DSL filter, also known as a micro-filter, is an analog low-pass filter placed between analog devices and a regular home phone line. So the question is whether you really need a DSL filter. It comes in very handy for several reasons, as mentioned below:
1. Prevent Disruption Between Different Devices:
DSL functions prevent any sort of interference between devices and the DSL service on the same line. This is because the same line can disrupt your DSL internet connection. Thus, it eliminates signals or echoes from an analog device from compromising performance and causing connection issues with DSL service.
You will need DSL filters installed on every device that connects to a DSL phone line especially if you are using a home phone service without a splitter setup.
2. Filters Out Blockade:
As mentioned before, equipment such as phones, fax machines, and regular modems tend to disrupt the telephone wiring when they are being used. This leads to disruption with the DSL signal over phone lines which eventually results in a poor connection and it can even interrupt DSL service.
This persists as long as you are sending faxes, using the modem, or talking on the phone, etc. Now, this is where a DSL filter plays its part. What does it do? It basically filters out this blockade so that you are able to freely use your phone without worrying about it interfering with the DSL signal. That is why it is best to put these filters between any phones/fax/modems you have and the wall outlet.
3. Prevent DSL Signals From Reaching Other Devices:
Another reason why DSL filters come in handy is that they keep the high-frequency DSL signal from reaching your other devices such as phones and fax machines etc. This is because if these signals reach those devices, you will be facing numerous issues such as irritating phone calls or slow regular modem speeds.
What Are The Limitations With DSL Filters?
Even though the benefits of DSL filters are numerous, there are some limitations too. Firstly, keep in mind that there is a limit to how many filters you can use on one line – generally four. If too many filters are used at one time, they can again cause disruption on your phone line, and eventually the disruption will start to interfere with the DSL signals as well.
The best thing to do is to use a whole-house splitter.
It separates the DSL and POTS frequencies right at the point of entry to your house. This, in turn, prevents the need for a filter at every phone. However, this becomes costly and time consuming for the phone companies as they have to send technicians to install the splitter and rewire a few of the phone jacks in your house.
Therefore, they just send you more filters that you put on all your devices. However, as mentioned above, this is not suitable, and using a whole house splitter is a much better idea. So if you are comfortable working with phone wiring and have some knowledge of it, you can install the splitter yourself.
Few industries can claim such a foundational impact on the United States as the manufacturing industry. Modern manufacturing began with the birth of the assembly line and the transformational effect it had on the automobile industry. Companies then adopted that approach to product manufacturing and logistics. The early phases of the next generation of manufacturing appeared as machine-to-machine (M2M) communication, a forbearer of the concept behind the Internet of Things (IoT). Eventually, IoT became so broad that specific designations were needed to differentiate between the consumer and industrial side of things, thus paving the way for the Industrial IoT (IIoT).
Today, manufacturing companies, while often on the leading edge of automation technology, are still scrambling to adapt to the explosion of sensors, communication platforms, big data and high-speed analytics to maximize efficiency and future-proof their products or designs. Some companies are touting the idea of retrofitting – a concept that has existed for some time – but some plant engineers may be wary of the need for continual updating to a system that is bound to become irrelevant at some point. Still, the process can be relatively painless, and is quickly becoming necessary, as Plant Magazine notes:
… Most food manufacturing and processing plants have motors powering essential equipment such as mixers, conveyors and packaging machines. But they’re just motors. They don’t play in the same league as other intelligent devices. With years of service to go, it’s difficult for plant managers to justify replacing motors that work just to make an upgrade with smart features. But motors can connect to the IIoT without a complete overhaul. Instead of investing in new, more intelligent/smart equipment, consider investing in sensors that provide similar functionality to connected devices. Smart sensors attach to almost any standard low-voltage induction motor.
Sensor technology is sophisticated enough to be small, functional and energy efficient. For certain kinds of manufacturing plants, a complete overhaul may not be necessary, and a ‘simple’ retrofitting process might easily solve the first part of the problem.
The second part of the problem, or challenge, is that along with smart hardware, plants also need the software and data processing capabilities to keep pace. Some plant engineers are solving these challenges by deploying programmable radios capable of hosting third-party applications so that the data can be transmitted in smaller, highly specific packets, making the transport both fast and easier to push into predictive analytics platforms.
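One way to picture the "smaller, highly specific packets" idea is delta encoding — transmit the first reading plus differences, which compress well and keep radio payloads tiny (a hypothetical sketch, not any vendor's actual protocol):

```python
def delta_encode(readings):
    """Turn a list of readings into [first value, then deltas]."""
    if not readings:
        return []
    return [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]

def delta_decode(packet):
    """Reverse the encoding by accumulating the deltas."""
    out, total = [], 0
    for d in packet:
        total += d
        out.append(total)
    return out

# Simulated slowly-changing sensor values (e.g. pressure in millibars).
raw = [1000, 1002, 1001, 1005]
encoded = delta_encode(raw)
print(encoded)  # [1000, 2, -1, 4]
assert delta_decode(encoded) == raw
```

Because consecutive sensor readings usually differ by small amounts, the deltas fit in far fewer bits than the raw values, which matters on constrained industrial radio links.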
From there, software companies are building in the ability to process data in the cloud, essentially running all critical data and software operations through either a fog or cloud computing process. Cloud software services have the potential to be highly customizable based on the needs of the manufacturing plant.
These technologies are good examples of the ongoing convergence between traditional information technology (IT) and operations technology (OT) needs in industrial markets. Currently, the manufacturing industry is sitting in an interesting spot: leaders in the M2M world, but still adapting to the IoT world. Where the industry ends up in the next 10 years could be a strong indicator of the economic and financial temperature of the domestic and international marketplaces.
Ransomware is a type of malware that has become highly successful. This insidious form of malware uses various tactics, including social engineering and phishing, to infect networks to steal and encrypt data.
Once the data is encrypted, it becomes unusable and causes businesses to stall. This fact and threats to reveal the stolen data are used as leverage to extort money from the organisation.
Ransomware infections continue to trend upwards. Some sectors, such as healthcare, have seen a staggering 94% increase in ransomware infections in 2021-2022.
Phishing continues to be the preferred method for delivering malware, including ransomware. However, this human-centric cyber attack can be mitigated using employee education.
What’s the Difference Between Malware and Ransomware?
Malware is a portmanteau of two words: malicious software. There are many types of malware: malware that steals data; malware that captures login credentials as you type them; malware used to mine cryptocurrency; and so on.
Ransomware is a type of malware that performs actions on a computer or other devices to cause business disruption. Ransomware typically locks a device so it becomes unusable or encrypts data across a network so that work cannot be carried out.
Once the device is locked or the data encrypted, the ransomware displays an on-screen ransom note. The note will typically request payment in a cryptocurrency, usually bitcoin, to access a decryption key. However, payment of a ransom is no guarantee that data will be decrypted or returned; a Sophos report found that only 65% of the encrypted data was restored after the ransom was paid.
Ransomware attacks plague all industries and affect companies from the smallest one-person business to international enterprises. In the first half of 2021 alone, the U.S. Treasury Department reported that companies in the USA suffered $590 million in ransomware-related costs.
In recent weeks, ransomware has hit the headlines again: the NHS became a target for ransomware gangs with an attack on the NHS 111 service, causing patient delays and general havoc. The NHS is no stranger to ransomware attacks, with the 2017 WannaCry attack causing widespread shutdowns.
Other industries suffer from ransomware too. The financial sector, retail and manufacturing all have come under the watchful eye of ransomware attackers. Banking, utilities, and retail were the three most targeted sectors in 2021.
Ransomware attackers changed tactics from a pure encryption approach to malware infection to a double-extortion attack. New ransomware infections involve stealing data before encrypting it on a network. This way, the cybercriminals can use the stolen data to threaten the company with data exposure if they don’t pay the ransom. A Cisco report has found that 70% of ransomware attacks now use this double-extortion method.
Ransomware is now a highly sophisticated and concerted criminal endeavour. Attackers regularly change tactics and approaches to avoid detection. A recent advisory from Sophos highlights a new tactic that involves multiple attacks where several different hacking gangs choose a target and attack either simultaneously or concurrently. Sophos notes that companies should see a ransomware attack as not “if, or when – but how many times?”
Why Not Just Use Ransomware Decryptors or Anti-Virus Software?
There are many ransomware and other malware variants. So many, in fact, that commercial ransomware decryptors generally only handle specific, well-known variants. The website NoMoreRansom maintains a list of decryptors for known ransomware families.
However, ransomware actors are clever and work diligently to evade software tools by bringing out new variants regularly. Anti-virus software or anti-ransomware security tools have a similar problem in keeping up with the changes in software code and mechanisms used by malware.
Using security software tools and having secure backups for data is essential. Still, the critical factor in preventing a malware or ransomware infection is stopping it before it gets installed on a device. This is where training employees comes in. Phishing simulations and Security Awareness Training are the equivalent of a human firewall around your organisation and its devices.
Five Things to Prevent Malware and Ransomware
Empowering employees through education is a vital security measure and fits into a holistic malware and ransomware prevention model. Employees are increasingly manipulated by ransomware actors via phishing emails or taken advantage of through poor security habits.
Here are five things that your organisation can do to help your employees mitigate malware and ransomware attacks:
Teach Good Security Habits
Help employees understand their role in keeping your organisation secure. For example, use Security Awareness Training packages with modules on what malware or ransomware is, how it infects a device, and the damage it can do. Make sure that these awareness training packages are interactive and use point-of-need learning experiences to help train employees on how to mitigate malware infection.
Phish Your Employees
Use a simulated phishing platform to send all employees realistic-looking but spoofed phishing messages. Use a platform that offers many templates and tailor them to reflect typical phishing messages containing malware or ransomware threats.
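As a rough sketch of how such a platform might render template-based test messages, the snippet below fills in per-employee details and a harmless tracking link. The template names, fields and link are all hypothetical, not any real product's API:

```python
from string import Template

# Hypothetical message templates mimicking common lures; a real simulated
# phishing platform would supply many more and track who clicks the link.
TEMPLATES = {
    "password_reset": Template(
        "Hi $first_name, your $service password expires today. "
        "Reset it now: $tracking_link"),
    "invoice": Template(
        "Hi $first_name, invoice #$invoice_id from $service is overdue. "
        "Review it here: $tracking_link"),
}

def build_simulation(template_name, employee, tracking_link):
    """Render a spoofed (but harmless) phishing message for one employee."""
    return TEMPLATES[template_name].substitute(
        first_name=employee["first_name"],
        service=employee["service"],
        invoice_id=employee.get("invoice_id", "0000"),
        tracking_link=tracking_link)

msg = build_simulation(
    "password_reset",
    {"first_name": "Priya", "service": "Office 365"},
    "https://training.example.com/t/abc123")
print(msg)
```

Employees who click the unique tracking link can then be enrolled in follow-up training rather than punished, which keeps the exercise educational.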
Keep Remote Employees Safe
Remote employees are at high risk of phishing and other cyber attacks. Ensure all employees, particularly remote and homeworkers, use a secure VPN to securely access websites and securely transfer data and credentials.
Engage Your Employees in Active Malware Prevention
Encourage all employees to inform your IT team or line manager about any suspicious activity. This should include suspected phishing emails and text messages. This allows time to respond to a ransomware or malware threat to prevent it from becoming an incident.
Be Socially Aware
Social media is an excellent place for cybercriminals to find out information about an employee and a company. Many cyber attacks begin with a social engineering attack that is fed by information gathered through various channels, including social media. Teach employees about the dangers of oversharing personal and corporate information.
A Cybersecurity Ventures report highlights that global ransomware damages will likely reach $250 billion (£207 billion) by 2031. No organisation can feel safe from malware or ransomware infection without the entire company on board with preventing it.
Well-trained employees provide a way to stop malware infection at the first hurdle and ultimately save your company from the distress caused by malware.
AI Ethics: Building Trust by Following Ethical Practices (Part 2)
In our first blog post on the topic of AI Ethics, we covered the promise that artificial intelligence (AI) holds to improve the speed, accuracy, and operations of businesses across a range of industries. With the potential of AI, it’s hard to believe that businesses are hesitating to move forward with AI projects, but fear holds people back. They fear making mistakes that could damage their company’s reputation or doing something illegal or unethical.
Many of these pitfalls can be avoided by following the four main principles that govern ethics around AI. In part one, we covered the first two main principles. In this blog, we’ll take a look at principles three and four, Disclosure and Governance.
- Principle 1: Ethical Purpose
- Principle 2: Fairness
- Principle 3: Disclosure
- Principle 4: Governance
Principle 3: Disclosure
One of the four fundamental principles of ethics is respect for autonomy. This means respecting the autonomy of other persons and respecting the decisions made by other people concerning their own lives. Applying this to AI ethics, we have a duty to inform stakeholders about their interactions with an AI so that they can make informed decisions.
In other words, AI systems should not represent themselves as humans to users. Where practical, give the choice to opt out of interacting with an AI.
Whenever an AI’s decision has a significant impact on people’s lives, it should be possible for them to demand a suitable explanation of the AI’s decision-making process in human-friendly language and at a level tailored to the knowledge and expertise of the person. In some regulatory domains this is a legal requirement, such as the EU’s General Data Protection Regulation (GDPR) “right to explanation” and the “adverse action” disclosure requirements in the Fair Credit Reporting Act (FCRA) in the U.S.
Principle 4: Governance
An organization’s governance of AI refers to its duty to ensure that its AI systems are secure, reliable and robust and that appropriate processes are in place to ensure responsibility and accountability for those AI systems.
Like any other technology, AI can be used for ethical or unethical purposes, and AI can be secure or dangerous. With the possibility of negative outcomes from AI failures comes the obligation to manage AIs and to apply high standards of governance and risk management.
Humans must be responsible and accountable for the AIs they design and deploy. The comparative advantage of humans over computers in the areas of general knowledge, common sense, context, and ethical values means that the combination of humans plus AIs will deliver better results than AIs on their own.
For the full list of principles on how to implement ethical AI practices, download our white paper, AI Ethics. This paper also covers how to develop an AI Ethics Statement that will apply to all projects and how DataRobot’s automated machine learning platform can be a valuable tool to implement ethical AIs.
About the Author:
Colin Priest is the Sr. Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
In a latest announcement from Mozilla, the company announced that anyone can create an Open Gateway to control the internet of things. Mozilla confirmed that it is still working on set of frameworks and open standards. The main idea behind the open gateway is that users shouldn’t end up with an internet of things controlled by big tech companies.
The Open Gateway
For the open gateway, first Mozilla wishes to create an open standard with the W3C around the Web of Things. The idea behind this is that accessory makers and service providers should use the same standard to make devices talk to each other.
Next, Mozilla will work on a Web of Things Gateway. Through this gateway, users can control all their IoT devices like Amazon Echo, Philips Hue hub, Apple TV, and Google Home with an open device.
Apple, Google, Amazon, Samsung, and others have been creating their own standard to control all the connected devices around your home. With the open gateway, manufacturers could leverage this work to create their own gateways, eventually. Maybe, developers too could build a bridge between the API of the HomeKit and Amazon. If that becomes a reality, all devices can work with Amazon Echo, Google Home, or iPhone without too much effort!
Malware, much like all weapons, evolves based upon multiple factors, be it the protections of its intended target, the weapon's operator and their organization, or the general intent it was created for. Unlike most weapons, though, malware evolved in a pattern closer to that of a biological disease. Early variants were created and most of them failed; however, useful traits were passed on to new generations of malware, and as time went on, only the most stealthy and ruthless malware survived. This blog post is a quick summary of malware through the years, from its early origins in the late 60's to the "super-malware" we all know and fear today.
How it all started
The concept of modern-day malware started not with a program, but with an idea. The mathematician John von Neumann wrote an article about the "Theory of Self-Reproducing Automata" in 1966. The article compared and contrasted the internals of computers to the human nervous system, then discussed the possibility of self-replicating software using mathematical analysis based upon the self-replication of organisms found in nature.
Five years later, in 1971, Bob Thomas of BBN Technologies built the "Creeper" virus, which is generally accepted as the first computer worm. It would spread through mainframe computer networks and display the message: "I'm the Creeper, catch me if you can!"
Near the same time, in order to combat the Creeper virus, another worm program was created, named “Reaper.” Reaper would also spread through the same mainframe systems but would delete Creeper upon contact. I find it very interesting that only a short time after the creation of the world’s first virus was the world’s first antivirus.
A few years later, in 1975, the world's first Trojan Horse was developed. Known as "Pervading Animal" and written by John Walker for UNIVAC systems (those really big computers that took up entire rooms), the Trojan presented the user with a game called ANIMAL, which asked numerous questions in an attempt to guess what animal the user was thinking of. Meanwhile, another program called PERVADE copied both itself and ANIMAL to every directory the user had access to.
The very late 60’s and early to mid-70’s were the origin years of malware. Computer systems were becoming more and more capable and autonomous and therefore curious programmers could write up all kinds of fun things to play pranks on their friends or just to see what they could do, this is how our modern malware began.
Infecting the home user
You may or may not believe this but some of the very first malware that was written in the early 1980’s was for Apple II systems. So next time you hear about the "Flashback virus", or something similar to it, and how it is changing the game because it is infecting Apple computers, just remember that malware has been on Apple hardware before. An example of one type of Apple II malware was called "Elk Cloner", it was created by Richard Skrenta a 15-year-old high school student. It infected the systems using the “boot sector” technique which means that if the user booted up their system from an infected Floppy Disk, a copy of the virus was placed in the memory of the computer. The virus itself was harmless but spread to all disks attached to a system and spread like wildfire, being referred to as the first large-scale computer virus outbreak in history.
From 1983-1986 numerous types of early viruses were developed for IBM PC’s, these viruses had the ability to infect other legitimate files on the operating system, delete other files, and self-replicate. In 1987, as these viruses became more and more prevalent on user systems, IBM developed and released its own commercial antivirus. Prior to doing this, all antivirus technology was for IBM internal use only. Finally, in 1988, the "Morris worm" was created to infect users using UNIX systems connected to the internet and was considered the first worm to spread “in the wild.” It was also known as one of first programs to exploit buffer overflow vulnerabilities, a practice which is still used in many of today’s exploits.
It wasn’t until 1989 that malware began to really look like how we see it today. Take for example, the "Lamer Exterminator" virus. It was created for the Commodore Amiga and had the ability to hide itself by hooking into parts of the operating system and sending false data to any process which might detect it. It also encrypted its own file every time it was replicated.
Malware starts to get scary
Over the last few years we have had multiple types of “scares” as far as malware goes, including the most recent DNSChanger scare, which left millions of people thinking that they were going to lose their access to the internet. Well it wasn’t the first scare and back in 1992, the "Michelangelo" virus made a name for malware on a large scale.
The mass hysteria that surrounded "Michelangelo" was due to the belief that the virus would wipe all the information off of people’s computers on March 6th. When the date came and went, the damage was minimal and it turned out that the media had hyped up the story more than it needed to be.
In 1995, new methods of hiding and infecting were created with the first macro virus, known as "Concept", which turned Microsoft Word documents into weapons. This led into the next five years of heavy email worms, including the "Melissa" worm, "Kak" worm and "ILOVEYOU" worm. In March of 2004, the "Witty" worm exploited holes in several Internet Security Systems (ISS) products and was the first internet worm to carry a destructive payload.
New Frontiers and Advertising
The first decade of the 21st century witnessed a shift in the intent and purpose of malware, from malicious tools made to cause harm and prank people to tools of espionage, where destroying the system was the last thing the attacker wanted, because it would mean not being able to steal more data. In June of 2004, the "Caribe" worm was found infecting mobile phones running the Symbian OS; it was the first case of mobile phone malware and spread to other phones via Bluetooth. Later that year, the "Vundo" Trojan caused popups and advertising for rogue antispyware programs and is one of the earlier versions of a type of malware commonly seen today.
The Age of Cyber-crime
In January of 2007, the "Storm Worm" was identified. It spread fast by using email spamming and gathered infected systems to be used as bots for the "Storm Botnet". By June it had infected 1.7 million computers and by September between 1 and 10 million. It was believed to have originated from Russia which means that it was most likely used by cyber-crime organizations. Nearly all large botnets are run by cyber-criminals who buy and sell bots to other criminals or to would-be criminals to spread spam or steal personal information.
In 2008, a few months before the "Koobface" worm first started infecting users of Facebook, the "Torpig" Trojan infected users and turned off their antivirus. It also stole personal information such as log-in credentials and installed subsequent malware on victims' systems. Then in November, the "Conficker" worm was discovered, infecting anywhere from 9 to 15 million systems. Microsoft put up a bounty of $250,000 for information leading to the arrest of its creator. Multiple government agencies and organizations from all over the world came together to find a way to combat "Conficker", ending with the eventual release of a patch by Microsoft in December, making everyone safe again.
World War Malware
It was only a matter of time before malware started being used as a government weapon or tool of espionage at a deeper level than any crime organization is capable of. In 2008, cyber-attacks against Georgia during its conflict with Russia were reported to be coming from infected systems in the "Black Energy" botnet. The attacks targeted government websites and news sources, attempting to cut off communication between the government and the people.
In July of 2009, multiple cyber-attacks were reported in both the United States and South Korea (a lot more than usual anyway), leading to a specific piece of malware known as Dozer. It is suspected that this malware was developed and deployed by the North Koreans but no one knows for sure. In 2010, the Trojan Stuxnet was discovered infecting SCADA systems at Iranian nuclear facilities, the malware disrupted systems and sent information back to the command and control servers, recently announced to be controlled by the U.S.
Finally, this year alone we have seen not only the use of Remote Access Trojans (RATs) like BlackShades and DarkComet being used by the Syrian government to spy on rebels but also the use of the Flame Trojan in Middle Eastern countries, a highly sophisticated piece of espionage malware which targeted government facilities and officials.
When you read the news and hear about horrifying malware that threatens the population, you might not always think that it all started with an idea and a little annoying yet harmless program. In the same way you don’t often think that a flood which is destroying a town all starts with a single drop of rain. The people who are using the malware and for what reason will always change and you can never say for sure what is going to happen. One thing is for sure however, Malware will continue to evolve into stealthier, more powerful and more dangerous weaponized software for as long as we integrate computer systems into our lives.
A chain is only as strong as its weakest link
Approximately 400 million malware attacks were executed across the world this past summer alone.
Businesses continue to be hit harder and harder by increasingly sophisticated cyber-attacks. While many attacks are perpetrated from outside company walls, the reality is that the most looming threat comes from within: the employees.
They are the most significant point of failure in terms of security vulnerabilities. From phishing attacks luring employees to click suspicious links that contain spyware, to weak passwords and leaks on social media, these attack vectors can be mitigated with effective cybersecurity awareness training.
The idea that a company could lose its entire reputation as a result of inadequate or nonexistent training needs to become a thing of the past. There’s far too much at stake to forgo training.
If a company cannot train employees to protect themselves against cyber threats, how will those employees be able to help protect the company?
First and foremost, employees need to be trained to identify attack vectors. The most effective attack vector, and still the most substantial threat to Google account security, is phishing.
For a detailed explanation of phishing, see Align’s article, “Something Seems Phishy.”
In the early days of phishing, emails were highly suspect from recognizably dubious senders, but today they have become extremely convincing. It is for this reason that every single email received should be opened with caution.
Employees should be sent mock phishing emails to ensure that they can find the signs of a suspicious email. Each time an employee successfully identifies a phishing email, the subsequent phishing tests should increase in the level of difficulty, improving an employee’s ability to recognize telltale phish signs.
Basic phishing identification tactics must be instilled in users: scrutinizing the address and domain names of senders, not haphazardly clicking on embedded links or attachments, keeping an eye out for spelling mistakes or grammatical errors and knowing that emails from legitimate sources will never ask you to send out user credentials or personal information.
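The tactics above can be approximated in a few lines of heuristic code. The sketch below is a training aid only, not a spam filter; the trusted domains, urgency phrases and sample message are made-up examples:

```python
import re

# Assumption: these are your organisation's legitimate sending domains.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}
URGENCY_PHRASES = ("verify your account", "act immediately", "password expired")

def phishing_warning_signs(sender, subject, body):
    """Return a list of simple red flags found in an email."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"sender domain '{domain}' is not a recognised company domain")
    if any(p in (subject + " " + body).lower() for p in URGENCY_PHRASES):
        flags.append("uses pressure/urgency language")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("link points at a raw IP address")
    return flags

# A lookalike domain ('examp1e' with a digit one), urgency, and an IP link.
flags = phishing_warning_signs(
    "it-support@examp1e.com",
    "Password expired",
    "Verify your account at http://203.0.113.7/login")
print(flags)
```

Note how the lookalike domain trips the first check even though it reads as "example" at a glance, which is exactly the kind of detail scrutiny training aims to build.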
Another attack vector is unsafe websites. Employees need to know that navigating to dubious sites and downloading at random can expose their entire company to spyware.
Spyware, which aims to gather information surreptitiously, can be downloaded onto a machine without user permission or knowledge. Spyware can remain hidden for a while before it is discovered, and will undoubtedly steal and potentially profit from compromised data.
Additionally, malware that is self-downloaded and executed from suspect sites often comes in the form of worms. A worm is malware that is self-propagating, meaning it needs no form of user interaction to spread like wildfire throughout a network.
This means that one employee who has received little or no training, who innocently clicks on what appears to be a new site, could lead to the undoing of an entire company.
In addition to being wary when clicking on obscure websites, employees need not turn a blind eye to software updates. Updates for your operating system, browsers and antivirus software contain security updates that address bugs and vulnerabilities.
Applying security updates as soon as they become available is key to protecting your devices. The ultimate example of the improper application of security updates is Equifax.
If Equifax had applied the Apache Struts security patch (see "Equifax Breach 'Won't Be Isolated Incident'"), released months before the company was hacked, it might have looked a little less flagrantly responsible for its downfall.
Password Security Tips
The 2016 Verizon Data Breach Investigations Report (DBIR) reported that, “63% of confirmed data breaches leverage a weak, default, or stolen password.”
It goes without saying that employees need to abide by strong password rules. Here’s a brief checklist of password best practices:
- For a password to be considered “strong,” it should contain a combination of numbers, upper and lowercase letters, symbols and include a minimum of 8 characters
- Writing a password on a post-it and sticking it to your machine is unacceptable
- Passwords shouldn’t be easy to guess
- Change passwords routinely
- Never keep the default password
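The checklist's "strong password" rules translate directly into a simple validation routine. A minimal sketch using only the standard library; the rule wording is illustrative, not a complete password policy:

```python
import string

def meets_policy(password):
    """Check a password against the rules above (8+ characters, mixed
    case, digits and symbols); returns a list of unmet rules."""
    problems = []
    if len(password) < 8:
        problems.append("shorter than 8 characters")
    if not any(c.islower() for c in password):
        problems.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        problems.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        problems.append("no digit")
    if not any(c in string.punctuation for c in password):
        problems.append("no symbol")
    return problems

print(meets_policy("rex"))               # fails several rules
print(meets_policy("C0rrect-Horse-9"))   # meets every rule -> []
```

Returning the specific unmet rules, rather than a bare pass/fail, gives employees actionable feedback when they choose a new password.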
If you are envisioning a hacker typing and guessing individual passwords, think again. To give you an idea, a brute-force attack is one that simply tries all possible combinations of passwords.
- If a modern, somewhat slow computer can calculate 3,000,000 password variations in 1 second, it can crack a 6-character, single-case password in about 103 seconds, clearly making the name of your dog inadequate (26^6 = 308,915,776 password variations / 3,000,000 password variations per second ≈ 103 seconds).
- On the other hand, at this rate a 10-character password that is alphanumeric and contains special characters would take 208,095 years to crack, making this approach utterly useless to a hacker.
Takeaway: By increasing the number of characters and the password complexity, it becomes exponentially more difficult to crack.
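The arithmetic above generalizes into a small helper. Note that the exact "years to crack" figure depends on the character set assumed; using all 95 printable ASCII characters yields an even larger number than the article's 10-character estimate:

```python
def brute_force_seconds(charset_size, length, guesses_per_second=3_000_000):
    """Worst-case time to try every password of a given length."""
    return charset_size ** length / guesses_per_second

# 6 lowercase letters at 3 million guesses/second: about 103 seconds.
print(round(brute_force_seconds(26, 6)))

# 10 characters drawn from all ~95 printable ASCII characters.
years = brute_force_seconds(95, 10) / (3600 * 24 * 365)
print(f"{years:,.0f} years")
```

Each extra character multiplies the search space by the charset size, which is why length matters even more than symbol variety.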
Testing and Enforcement
Cybersecurity training can be tedious, but it doesn’t have to be. Employee education modules can be crafted to not only be engaging, but to improve information retention.
Effective security awareness training should cover the identification of risks, threats, mitigation and remediation.
Employees should also be tested and retrained on educational models in a variety of formats to offer convenience and 24/7 access. But how can you make sure your employees are making progress? Reporting of education, retention and performance can be provided to those managing employee education.
Empower your employees by making them an integral defender of your business environment with effective cybersecurity awareness training. Further, protect your business from advanced cyber-attacks with a comprehensive, state-of-the-art Cybersecurity Program.
To explore Align's award-winning services, check out the below links:
- Cybersecurity Advisory Services
- Managed Cloud Services
- Managed Threat Protection
- Vulnerability Management
- Cybersecurity Education
- Customized Cybersecurity Programs
- Client Portal (Access Your Cybersecurity Posture)
- Outsourced Virtual Chief Information Security Officer (vCISO)
Good time keeping is not an obvious priority for network administrators, but the more you think about it the clearer it is that accurate clocks have a crucial role to play on any network. Let the clocks on your networked devices get out of sync and you could end up losing valuable corporate data.
Here are just a few things that rely on hardware clocks which are accurately set and in sync with each other:
- Scheduled data backups
- Successful backups are vital to any organization. Systems that are too far out of sync may fail to back up correctly, or even at all.
- Network accelerators
- These and other devices that use caching and wide area file systems may rely heavily on file time stamps to work out which version of a piece of data is the most current. Bad time syncing could cause these systems to work incorrectly and use the wrong versions of data.
- Network management systems
- When things go wrong, examining system logs is a key part of fault diagnosis. But if the timing in these logs is out of sync it can take much longer than necessary to figure out what went wrong and to get systems up and running again
- Intrusion analysis
- In the event of a network intrusion, working out how your network was compromised and what data was accessed may only be possible if you have accurately time-stamped router and server logs. Hackers will often delete logs if they can, but if they don’t the job will be far harder, giving hackers more time to exploit your network, if the time data is inaccurate.
- Compliance regulations
- Sarbanes Oxley, HIPAA, GLBA and other regulations do or may in the future require accurate time stamping of some categories of transactions and data.
- Trading systems
- Companies in some sectors may make thousands of electronic trades per second. In this sort of environment system clocks need to be very accurate indeed.
Many companies set and synchronize their devices using Network Time Protocol (NTP), with NTP clients or daemons connecting to time servers on the network known as stratum-2 devices. To ensure these stratum-2 time servers are accurate, they are synced over the Internet through port 123 with a stratum-1 device. This public time server is connected directly (i.e. not over a network) to one or more stratum-0 devices: extremely accurate reference clocks.
Unfortunately, there are a number of potential problems with this approach. The most basic one is that the time that a stratum-2 server on a corporate network receives over the Internet from a stratum-1 server is not very precise. That’s because the time data has to travel over the Internet – from the time server to the corporate time source – in an unpredictable way, and at an unpredictable speed. This means it always has a varying, and unknown, error factor. Although all the devices on a local area network that update themselves from the same corporate stratum-2 time server may be reasonably well synchronized (to within anything from 1 to about 100 milliseconds), keeping the time synchronized between stratum-2 devices on different local area networks to a reasonable degree of accuracy can be difficult.
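NTP mitigates (but cannot eliminate) this unpredictable path delay by exchanging four timestamps and assuming the outbound and return legs take equal time; any asymmetry becomes exactly the "unknown error factor" described above. A minimal sketch of the standard offset and delay formulas from RFC 5905:

```python
def ntp_offset_and_delay(t1, t2, t3, t4):
    """Standard NTP calculation (RFC 5905):
    t1 = client send, t2 = server receive,
    t3 = server send,  t4 = client receive.
    Returns (clock offset, round-trip delay) in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: client clock ~0.49 s behind the server, 80 ms round trip.
offset, delay = ntp_offset_and_delay(
    t1=100.000, t2=100.530, t3=100.540, t4=100.090)
print(offset, delay)
```

The client then slews its clock by the computed offset; when the two network legs are not symmetric, the residual error is bounded by half the measured delay, which is why Internet-synced stratum-2 servers can never be as precise as a directly connected reference clock.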
Security Risks with NTP Servers
There are also security risks involved in using public stratum-1 NTP servers, most notably:
NTP clients and daemons are in themselves a potential security risk. Vulnerabilities in this type of software could be (and have in the past been) exploited by hackers sending appropriately crafted packets through the corporate firewall on port 123.
Organizations that use public NTP servers are susceptible to denial of service attacks by a hacker sending spoofed NTP data, making time syncing impossible during the attack. For companies involved in activities such as financial trading—which requires very precise timing information—this could be very damaging.
One way to both avoid these potential security issues and to get more accurate time data is simply to run one or more stratum-1 servers inside your network, behind your corporate firewall.
Running Your Own Stratum-1 Servers
Stratum-1 time servers are available in a single 1U rack-mountable form factor that can easily be installed in your server room or data center and connected to your network, and most have a way of connecting to a stratum-0 reference clock built in. The most commonly used ways to connect to a stratum-0 device are by terrestrial radio or GPS signals.
Terrestrial radio based connections use radio signals such as WWVB out of Fort Collins, Colorado, MSF from Anthorn, UK, or DCF77 from Frankfurt, Germany. This is similar to the way consumer devices such as watches and alarm clocks update themselves with signals from reference clocks to keep accurate time.
Statum-1 time servers that sync with GPS satellite signals are more accurate, but are less convenient to install as they need to be connected to an antenna fitted in a suitable position on the roof of the building. Using time data from a number of satellites, and by calculating the distance of each satellite from the antenna, a stratum-1 time server that uses GPS reference clock signals is able to get the precise time to within 50 or so nanoseconds. More importantly, two or more of these servers at separate locations and running on separate local area networks can also remain in sync with each other to a similar degree of accuracy. Companies that supply this type of equipment include Symmetricom, EndRun Technologies and Time Tools.
To provide redundancy, some larger organizations install multiple GPS-based time servers at each location. An alternative is to have a radio-based time server as a back up to a GPS-based one in case the GPS server itself fails or, more likely, the GPS antenna is damaged, perhaps during bad weather. Given that most radio and GPS based time servers cost between $1,000 and $5,000, purchasing two or more time servers is not a major investment for a medium or large organization. Smaller companies, including those at isolated sites which are not connected to the Internet, can also use a low cost stratum-1 GPS PCI card (connected to an appropriate antenna) to enable a standard PC to act as a time server for the local area network, using the satellites as an external time source.
In the concluding piece in this series we’ll take a look at how to implement a GPS-based time server in your data center.
What is the Cybersecurity Skills Gap?
The cybersecurity skills gap is a phrase often used to describe the difference between the number of cybersecurity positions available and the number of actual cybersecurity professionals that exist to fill those positions. Currently, there are far more cybersecurity positions than there are skilled practitioners available to fill them.
The cybersecurity skills gap is also sometimes referred to as the cybersecurity workforce gap.
The cybersecurity skills gap increases the risk of threats, breaches, and attacks by creating an environment in which organizations are unable to adequately staff security professionals possessing the right expertise and experience. Thus, critical positions remain unfilled, and organizations are unable to sufficiently protect and defend themselves.
Two major research studies have highlighted the ongoing cybersecurity skills gap problem. In 2019, (ISC)2 found that the “cybersecurity workforce needs to grow by 145% to close the skills gap and better defend organizations worldwide.” In 2021, a second study by (ISC)2 examined the current estimate of individuals working in cybersecurity (known as the Cybersecurity Workforce Estimate) and the Cybersecurity Workforce Gap, which is the number of additional security practitioners needed for organizations to adequately defend their assets. This 2021 study found that “Together, the Cybersecurity Workforce Estimate and Cybersecurity Workforce Gap suggest the global cybersecurity workforce needs to grow 65% to effectively defend organizations’ critical assets.”
Industry researchers are quick to point out that these studies only reflect the gap between the number of actual security professionals and the number currently needed by the industry. The statistics do not necessarily account for the anticipated increase in threats and attacks, which is expected to grow dramatically in the coming years.
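The 65% figure follows directly from the two numbers the 2021 (ISC)2 study reports: a workforce estimate of roughly 4.19 million practitioners and a gap of roughly 2.72 million unfilled roles. A quick check (using the study's reported figures):

```python
workforce_estimate = 4.19e6  # current practitioners (2021 (ISC)2 estimate)
workforce_gap = 2.72e6       # additional practitioners needed (2021 (ISC)2 estimate)

# Growth needed, expressed as a percentage of the current workforce.
required_growth_pct = workforce_gap / workforce_estimate * 100
print(f"Workforce must grow by ~{required_growth_pct:.0f}%")  # ~65%
```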
How Can Organizations Manage Security Risks due to the Cybersecurity Skills Gap?
As cybersecurity staffing challenges continue to mount, organizations struggling to fill open cybersecurity roles should consider working with a Managed Detection and Response (MDR) provider. An MDR provider can offer skilled staff to augment Security Operations Center (SOC) and Security Operations (SecOps) teams. Outsourcing security activities to an MDR provider also offers cost advantages over attempting to manage security in house, as well as the ability to scale as workloads increase or decrease.
Phishers are back to using an old tactic in a new fashion to get hold of their victims’ credentials.
One of the first lessons you will learn during anti-phishing training is to hover over the links in a mail to see if they point to the site where you would expect them to point. Although good advice, this is NOT a guarantee that you are going to be safe.
Always visit sites directly, never follow the URLs presented to you in emails or attachments.
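One reason hovering is no guarantee: the visible text of a link is completely independent of its actual target (and script can even rewrite the target after you hover). A minimal illustration of the mismatch, using Python's standard HTML parser (the URLs are made-up examples):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs for every anchor tag."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# The displayed text looks like PayPal; the real destination does not.
auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/login">https://www.paypal.com/signin</a>')
text, href = auditor.links[0]
print(text == href)  # False: what you see is not where you go
```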
Phishing definition
Per Wikipedia:
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details (and sometimes, indirectly, money), often for malicious reasons, by masquerading as a trustworthy entity in an electronic communication.
Blocked
While giving the site owner some time to clean up his site, users of Malwarebytes Anti-Malware Premium will find that the phishing page is blocked if they have the Malicious Website Protection enabled.
Link
The original blogpost about this particular phish, including screenshots and code snippets, can be found here: Very unusual PayPal phishing attack
An Introduction to the Noise Protocol Framework
Nick Mooney, March 5th, 2020 (Last Updated: March 5th, 2020)
Noise is a framework that can be used to construct secure channel protocols. Noise takes a fairly basic set of cryptographic operations and allows them to be combined in ways that provide various security properties. Noise is not a protocol itself: it is a protocol framework, so by "filling in the blanks" you get a concrete protocol that has essentially no knobs to twist. We’ll use the term “Noise protocol” to refer to a concrete protocol, and “Noise framework” to refer to the framework overall.
Every Noise protocol begins with a handshake that follows a particular pattern. The end result of a Noise handshake is an encrypted channel that provides various forms of confidentiality, integrity, and authenticity guarantees. Which of these guarantees you get depends on which handshake pattern is used, but a collection of standard handshakes with known security properties are provided. The Noise framework is fully agnostic to what is actually transmitted via the encrypted channel established with a handshake. You could transmit messages, video files, or anything else.
Noise is fundamentally based around Diffie-Hellman key agreement. There are many constructions that make use of DH, including perhaps the most simple DH construction which is to agree on a key that is then used directly for symmetric encryption. Noise has several advantages over building your own DH-based protocol. Some of the primary benefits are (1) that the structured nature of the Noise framework allows us to build protocols with exactly the properties we need, as well as analyze whether those properties are present, and (2) that “advanced” properties not provided by a simple DH construction (like message authentication) can be built into Noise protocols with combinations of Diffie-Hellman and the behavior of the Noise state machine. Noise Explorer is a tool that automatically analyzes handshake patterns and demonstrates the security guarantees present at each step of the handshake graphically. I refer to Noise Explorer often when trying to understand new handshake patterns.
The rigidity of a Noise protocol is one of its biggest assets. A web browser, using TLS in lieu of a Noise protocol, might have to connect to a wide variety of servers, each supporting different combinations of cryptographic algorithms. This additional capability on behalf of the web browser means that sometimes the browser might use less secure cryptography than it is capable of, or that bugs may be introduced by the logic that handles protocol negotiation. On the other hand, a Noise protocol uses a defined set of cryptographic algorithms and handshake messages that are chosen ahead of time. Noise fits well in homogeneous environments where negotiation is not generally required because both parties run software controlled by the same entity.
I will try to explain a bit more about the Noise framework and why it's neat, but I should mention that the Noise spec is very readable.
There are several reasons why I think the Noise framework is useful:
- The framework is flexible, but protocols built with the framework are concrete: each participant implements the same state machine, and the framework is built to ensure from the start that each participant sees the same interactions
- There is almost no built-in protocol negotiation*, reducing the risk of downgrade attacks (where an attacker forces a client to use a less-secure mode of operation), confused deputy attacks (where a client is "tricked" into misbehaving or leaking secret information), and other issues that are the result of two different parties having different views of the same interaction
- Noise requires a fairly minimal set of primitives to build a concrete protocol:
- An AEAD cipher (such as ChaChaPoly1305 or AES-GCM)
- A hashing algorithm (such as BLAKE2 or SHA-256)
- A Diffie-Hellman scheme (such as ECDH with Curve25519)
In short, Noise allows developers to build secure protocols that do not have a lot of surprising behavior.
* Noise supports fallback patterns, which allow for some negotiation in circumstances that cause an initial handshake to fail, such as when a long-term static key has changed. This is very limited compared to, say, TLS.
A Noise protocol begins with two parties exchanging handshake messages. During this handshake phase the parties exchange DH public keys and perform a sequence of DH operations, hashing the DH results into a shared secret key. After the handshake phase each party can use this shared key to send encrypted transport messages.
The Noise framework supports handshakes where each party has a long-term static key pair and/or an ephemeral key pair. A Noise handshake is described by a simple language. This language consists of tokens which are arranged into message patterns. Message patterns are arranged into handshake patterns.
A message pattern is a sequence of tokens that specifies the DH public keys that comprise a handshake message, and the DH operations that are performed when sending or receiving that message. A handshake pattern specifies the sequential exchange of messages that comprise a handshake.
A handshake pattern can be instantiated by DH functions, cipher functions, and hash functions to give a concrete Noise protocol.
A handshake consists of two parties, the initiator and the responder. Once a Noise handshake is completed, the result is an AEAD-protected transport channel, but it's also important to note that arbitrary message payloads can be transmitted during the handshake phase, before the full handshake is complete. This allows immediate transmission of protocol messages without the full round trip delay of the handshake. Payloads transmitted alongside handshake messages are partially protected, and will have different security guarantees depending on which handshake message they are attached to.
Whenever encrypted information is transmitted during a handshake (after keying material has been established, usually after the first Diffie-Hellman), the hash of the handshake transcript so far is included as the "associated data" in AEAD. This helps ensure that both parties have the same view of the handshake, even if the encrypted payload is empty.
The quote above mentions that the initiator and responder can each have a long-term static key pair and/or an ephemeral key pair. Noise handshake patterns are named after the state of these long-term static keys: `NN`, `NK`, `XN`, etc. The first letter indicates the status of the initiator's long-term static key, and the second letter indicates the status of the responder's long-term static key. All Noise handshakes involve some combination of transmitting public keys and performing Diffie-Hellman operations. Static keys are used to provide long-term participant identity, so you can confirm that the party you’re talking to today is the same party you were talking to yesterday.
All the standard handshake patterns require an exchange of ephemeral keys: this is done to provide forward secrecy, so that a later compromise of long-term static keys would not reveal the plaintext contents of previous communications. Noise has this property in common with TLS 1.3, which also requires the exchange of ephemeral keys, an upgrade from previous versions of TLS where it was optional. Some Noise protocols also offer identity hiding properties, depending on when the static keys are transmitted.
| Letter | Meaning |
| --- | --- |
| N | No long-term static key is present |
| K | The long-term static key is Known to the other party before the handshake |
| X | The long-term static key is transmitted (Xmitted) to the other party during the handshake |
| I | The long-term static key (for the initiator) is Immediately transmitted to the responder, despite absent/reduced identity hiding |
Handshakes are represented textually using a standard format: an arrow signifying the direction of communication followed by a sequence of tokens that describe state machine operations. You will see this "ASCII art" format whenever handshake patterns are described in the Noise specification or elsewhere.
02. Valid Handshakes
During a handshake, each party transmits its ephemeral and/or static public keys, and performs DH operations between the ephemeral and/or static public keys of both parties. In fact, there are only six possible tokens (barring PSKs, which we will discuss later):

- `e`: generate an ephemeral keypair and transmit the public key. `->` at the front of the line indicates that the public key is transmitted from initiator to responder, and `<-` indicates that the public key is transmitted from responder to initiator.
- `s`: transmit the long-term static public key. The `->` and `<-` arrows signify the transmission direction in the same way as for the `e` token.
- `ee`, `es`, `se`, `ss`: both participants perform a DH between the ephemeral/static keypair of the initiator and the ephemeral/static keypair of the responder (the first letter refers to the initiator's key, the second to the responder's).

Commas separate each token in the same step of the handshake and indicate that the associated action occurs before the next token is processed.
03. Example Handshake Patterns
Jumping Right In: The `NN` Handshake

Here is the `NN` Noise handshake pattern. `NN` means that neither party has a long-term static key, so the handshake is based entirely on ephemeral keys. The handshake pattern is:

-> e
<- e, ee
This pattern represents an unauthenticated DH handshake.
The first thing to notice is that `e` and `ee` are not messages, per se -- they are tokens processed by the state machines of both parties. Some tokens (`e`, `s`), but not all (e.g. `ee`, `es`), lead to messages being sent.
Let's look at what each party does during this handshake.
-> e

The `->` arrow indicates that the transmission will be from the initiator to the responder. The `e` token specifies that the initiator generates an ephemeral keypair and transmits the public key to the responder. The responder receives and stores the initiator public key. Both parties hash this key into their handshake hash, which will be included as authenticated data in AEAD ciphertext (ensuring that both parties have the same view of the handshake transcript) as soon as a symmetric key is established and the parties begin encrypting messages. The initiator also has the option to transmit a payload alongside this handshake message. If the initiator were to include a payload, it would include no authentication.
<- e, ee
The responder now does the same. The `<-` arrow indicates the transmission will be from the responder to the initiator. The `e` token indicates that the responder will generate an ephemeral keypair and transmit the public key to the initiator. The initiator receives this key, and both parties hash the key into their handshake hash. Now, processing the `ee` token, both parties perform a Diffie-Hellman between the initiator ephemeral key and the responder ephemeral key. The result of this Diffie-Hellman is used to create a new chaining key, which is in turn used to derive a key that can be used to symmetrically encrypt/decrypt content*. As we mentioned in the first handshake step, now that symmetric key material has been generated, the handshake hash will be included in AEAD ciphertext.
The responder can include a message payload alongside this handshake message. This message would be encrypted, providing message secrecy and some forward secrecy.
See the analysis of the NN handshake in Noise Explorer for some more information.
At the termination of the handshake, both parties will have a shared symmetric state (technically, two shared symmetric states) that can be used to send encrypted messages back and forth. These transport messages (post-handshake) will benefit from message secrecy and some forward secrecy. Because the whole handshake is unauthenticated via any out-of-band means, this scheme is not resistant to an active attacker.
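To make the `NN` flow concrete, here is a toy sketch of the symmetric-state bookkeeping: MixHash folds transmitted keys into the handshake hash `h`, and MixKey folds DH results into the chaining key `ck` using the spec's two-output HKDF shape. The Diffie-Hellman itself is stubbed out with a fixed byte string (a real implementation would use e.g. X25519), so this only illustrates that both parties, processing the same tokens, arrive at identical keys:

```python
import hashlib
import hmac

def mix_hash(h, data):
    """MixHash: h = HASH(h || data), binding data into the transcript."""
    return hashlib.sha256(h + data).digest()

def hkdf2(ck, ikm):
    """Two-output HKDF as used by MixKey (HMAC-based, RFC 5869 shape)."""
    prk = hmac.new(ck, ikm, hashlib.sha256).digest()
    out1 = hmac.new(prk, b"\x01", hashlib.sha256).digest()
    out2 = hmac.new(prk, out1 + b"\x02", hashlib.sha256).digest()
    return out1, out2  # new chaining key, new cipher key

# Stand-ins for the real cryptography (assumptions for illustration only):
dh_ee = b"\x42" * 32                     # pretend result of DH(e_init, e_resp)
e_i_pub, e_r_pub = b"I" * 32, b"R" * 32  # pretend ephemeral public keys

def run_nn(dh_result):
    """Process the NN tokens the way either party's state machine would."""
    h = hashlib.sha256(b"Noise_NN_toy").digest()  # seeded from the protocol name
    ck = h
    h = mix_hash(h, e_i_pub)       # -> e
    h = mix_hash(h, e_r_pub)       # <- e
    ck, k = hkdf2(ck, dh_result)   # ee: MixKey(DH(e_init, e_resp))
    return h, k

# Both sides see the same tokens, so both derive the same h and cipher key k.
assert run_nn(dh_ee) == run_nn(dh_ee)
```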
Changing Things: The `NK` Handshake

Let's consider now the `NK` pattern. The initiator here still has no long-term static identity key, but the responder has a long-term static identity that is known to the initiator (transmitted out of band, or during a previous handshake).
The handshake pattern is as follows:
<- s
...
-> e, es
<- e, ee
The first step of the handshake pattern is a "pre-message," which just serves to identify that the contents were somehow transmitted before the handshake began. In this case, `<- s` shows that the responder's long-term static identity was somehow communicated to the initiator ahead of time. The `...` separates pre-messages from handshake messages.
-> e, es
The initiator generates an ephemeral keypair and transmits the public key to the responder. Transmitted / received messages are always hashed into the handshake hash. Next, both parties perform a Diffie-Hellman between the initiator's ephemeral key and the responder's static key, which is (as always) used to update the chaining key.
Because our chaining key is now based off the responder's long-term static key, which was transmitted out-of-band, any message payload attached to this handshake method benefits from some message secrecy (i.e. given a full transcript of this handshake, the message contents could only be decrypted by an attacker with access to the responder's long-term private key).
<- e, ee
The responder now generates an ephemeral keypair and transmits its public key to the initiator. This handshake message (containing the responder's ephemeral pubkey) benefits from sender authentication since the responder's long-term static identity was used in a Diffie-Hellman. This handshake message also benefits from some message secrecy, since the former DH was used to establish a symmetric key.
Both parties perform a Diffie-Hellman between the initiator's ephemeral key and the responder's ephemeral key, rolling the result into the chaining key and enabling forward secrecy, should the responder’s long-term static key ever be compromised.
04. The Handshake State Machine
During a Noise handshake, each party keeps track of the following variables:

- `s`, `e`: The static and ephemeral keypairs of the local party (which may be empty)
- `rs`, `re`: The static and ephemeral public keys of the remote party (which may be empty)
- `h`: The aforementioned handshake hash, which hashes all handshake data sent and received
- `ck`: A chaining key based on hashes of the outputs of all previous Diffie-Hellman operations
- `k`, `n`: An encryption key (derived from `ck`) and a nonce that are used to encrypt message payloads
As each token is processed, these variables are updated. The functions supported by the state machine are defined in the Processing Rules section of the Noise specification.
Because the handshake pattern is set ahead of time, each state of the state machine has exactly one valid transition to the next state. You can view the possible state transitions as a simple, single-directional chain: there is no input that causes cyclical behavior.
05. After the Handshake
During the handshake phase, the two parties share a single symmetric cipher state. Once a Noise handshake is completed, this state is split into two cipher states, one for each direction of communication. Each of the newly-created ciphers uses a key derived from an HKDF with the chaining key as input.
At this point, the handshake is complete and there is nothing Noise-specific about communicating over the encrypted channels produced by the handshake. Noise does specify a rekey operation that could be triggered by an application-specific message to rotate keys any time after the handshake has been completed.
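The Split operation itself is small: it is an HKDF over the final chaining key with empty input material, producing one cipher key per direction. A sketch of that shape, with SHA-256 standing in for the protocol's hash function:

```python
import hashlib
import hmac

def split(ck):
    """Derive two independent cipher keys from the final chaining key."""
    prk = hmac.new(ck, b"", hashlib.sha256).digest()           # extract, empty ikm
    k1 = hmac.new(prk, b"\x01", hashlib.sha256).digest()       # initiator -> responder
    k2 = hmac.new(prk, k1 + b"\x02", hashlib.sha256).digest()  # responder -> initiator
    return k1, k2

# A placeholder chaining key for illustration; in practice this is the
# value accumulated over the whole handshake.
k_send, k_recv = split(b"\x00" * 32)
assert k_send != k_recv  # each direction of communication gets its own key
```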
06. Adding More
Noise supports several other features outside of the handshake patterns that we haven't yet talked about.
Prologues can be used to ensure that both parties have identical views of data -- to ensure that a MITM attack hasn't occurred between the two users before the handshake commences, for example. Prologues will cause the handshake to fail if both parties do not have the same prologue data, but prologues are not considered to be secret data and are not mixed into encryption keys.
Noise also supports pre-shared keys. PSKs can be used to provide message secrecy (and some form of message authentication) before any other handshake operations have occurred. Noise patterns that use PSKs are named by appending "pskZ" to the name of the handshake, where "Z" is a number indicating where the psk token is inserted into the handshake.
Consider `NNpsk0`, for example. Remember that the original `NN` handshake is:

-> e
<- e, ee

`NNpsk0` is `NN` with the PSK token included at the beginning of the first handshake message. The suffixes `1`, `2`, etc. place the PSK token at the end of the first, second, etc. messages respectively. The `NNpsk0` handshake pattern is:

-> psk, e
<- e, ee
As a PSK is pre-shared by definition, the `psk` token doesn't actually cause either party to transmit anything to the other. The `psk` token is processed by both parties mixing the PSK into their cipher state.

In particular, this token is processed by each party calling `MixKeyAndHash(psk)` (defined in the Noise spec), which updates both the chaining key and the handshake hash. To ensure forward secrecy and avoid catastrophic reuse of cipher keys, the Noise protocol framework does not allow for the transmission of encrypted data after just processing the `psk` token. When the `e` token is processed in a PSK handshake, the ephemeral public key is mixed into the handshake hash (as usual) and the chaining key (which is specific to PSK handshakes). This mixing ensures randomization of the symmetric key to ensure that the symmetric key is not based solely on the PSK. In fact, an `e` token must be present in a PSK-based handshake, either before or after the `psk` token.
Full Protocol Names
When we use Noise to build a protocol, we "fill in the blanks" by providing a handshake pattern, an AEAD construction, a hash function, and a DH scheme. Noise prescribes a naming convention for a specified protocol, as follows:

Noise_<HANDSHAKE>_<DH>_<CIPHER>_<HASH> -- for example, `Noise_NN_25519_ChaChaPoly_SHA256`
This protocol name contains all the information required for Noise clients to participate in a concrete run of this protocol, giving us a nice human-readable way to specify a protocol. The initial chaining key within the handshake state machine is actually based on the full protocol name, further ensuring that both parties have the same internal model of the protocol they are running.
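The spec's rule for turning the protocol name into the initial handshake hash is simple: if the name fits within one hash output it is zero-padded to HASHLEN and used directly; otherwise it is hashed. A sketch assuming SHA-256 (HASHLEN = 32):

```python
import hashlib

HASHLEN = 32  # output size of SHA-256

def initial_handshake_hash(protocol_name: bytes) -> bytes:
    """Initialize h from the full protocol name, per the Noise spec's rule."""
    if len(protocol_name) <= HASHLEN:
        return protocol_name.ljust(HASHLEN, b"\x00")  # zero-pad short names
    return hashlib.sha256(protocol_name).digest()     # hash long names

# This particular name is exactly 32 bytes, so it is used as-is.
h = initial_handshake_hash(b"Noise_NN_25519_ChaChaPoly_SHA256")
```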
07. Noise in Production
Noise is used today in several high-profile projects:
- WhatsApp uses the "Noise Pipes" construction from the specification to perform encryption of client-server communications
- WireGuard, a modern VPN, uses the Noise IK pattern to establish encrypted channels between clients
- Slack's Nebula project, an overlay networking tool, uses Noise
- The Lightning Network uses Noise
- I2P uses Noise
- David Wong's Noise explanation, an excellent visual introduction to the Noise protocol framework
- Trevor Perrin's Noise talk at Real World Crypto 2018
- The official Noise website
- The official Noise specification, one of the more readable specs I have encountered!
- Noise Explorer, a tool that allows you to explore Noise handshake patterns as well as design your own. Noise Explorer performs some automated analysis of the security properties of various handshakes, and is also capable of generating reference implementations.
- Thanks to Jordan Wright, Jeremy Erickson, Ed Marczak, and Dennis Jackson for editing and providing input on pre-release versions of this post
What is Middleware?
Have you been hearing industry insiders use the term “Middleware”? What is Middleware, in plain English, really?
Middleware is something that’s referred to by software developers as “software glue.” Technically, Middleware is a kind of computer connectivity software that supports software applications in ways that go above and beyond the operating system itself.
Designed to support a number of application architectures, its overall function is to eliminate the difficulty of integration. Simply put, Middleware programs act as messaging services, making it possible for data management and communication in distributed applications.
In other words, it functions much the same as plumbing does: it provides the “fixtures,” “pipes” and “joints” that enable what’s important to pass through.
It is the software that bridges the gap between applications and the operating system that lie on either side of a network’s distributed computing platform. As a bridge, it makes it possible for two distinct systems to communicate as Middleware programs move data from one application to another, enabling seamless connectivity. This makes it easier for developers to carry out both input/output and communication, enabling them to focus on the primary function of their application.
This systematic linking of contrasting applications is known as Enterprise Application Integration (EAI).
Examples of Middleware include procedural middleware, message-oriented middleware (MOM), enterprise service bus (ESB), data integration, and object request brokers (ORBs), among many others.
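The "messaging service" role is easiest to see in message-oriented middleware: producer and consumer applications never call each other directly, they only agree on a queue. A toy sketch using Python's standard library (the application names and payloads are invented):

```python
import queue
import threading

# The "middleware": a shared, thread-safe message queue that decouples
# two applications which never call each other directly.
orders = queue.Queue()

def storefront_app():
    """Producer: publishes messages without knowing who consumes them."""
    for order_id in ("A-1", "A-2", "A-3"):
        orders.put(order_id)
    orders.put(None)  # sentinel: no more messages

def billing_app():
    """Consumer: processes whatever arrives, whenever it arrives."""
    processed = []
    while True:
        msg = orders.get()
        if msg is None:
            return processed
        processed.append(f"billed {msg}")

producer = threading.Thread(target=storefront_app)
producer.start()
result = billing_app()
producer.join()
print(result)  # ['billed A-1', 'billed A-2', 'billed A-3']
```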
There are two categories of Middleware:
- Those that supply human-time services, like web request servicing, and
- Those that carry out their functions in machine-time, such as middleware used in telecommunications and defense systems, and the aerospace industry.
Many systems and networks across industries do make use of Middleware because of its advantages.
What are the advantages of middleware?
- Middleware enables the flow of real-time information access within and among systems in a network.
- In business, it helps streamline processes and improves efficiency in terms of organization.
- Since it facilitates communication between systems, it is able to maintain the integrity of information across a multitude of systems within a network.
- Middleware is also advantageous because of its range of use in a wide array of software systems, from distributed objects and components, to mobile application support, to message-oriented communication, and more.
- Middleware is the manna of developers as it helps them to better create different types of networked applications.
What are the disadvantages of middleware?
- Because of its prohibitively high development costs, not every business can afford to maintain and grow the potential of Middleware.
- There are few people with actual Middleware experience in the market today.
- Benchmarks for Middleware haven’t been set, thus there are hardly any standard marks for Middleware performance levels.
- Most Middleware tools have not yet been fully developed for optimal operations.
- There are too many platforms in existence today that are not yet covered by Middleware.
- In some cases, Middleware often jeopardizes some systems’ real-time performance.
As a connectivity software, Middleware is mostly invisible. It provides a more standard way of doing things, it ties together complex systems and allows developers to concentrate more on the functionality of their applications.
Would you use Middleware in your business? Tell us in the comment section below.
A cloud-based flight path simulator, which determines the most efficient route airplanes can take to reach their destination, is set to save Qantas millions of dollars in fuel, the airline’s CEO Alan Joyce said today.
The Constellation system simulates thousands of possible flight paths based on “millions of data points” including weather and wind patterns, and selects the route that uses the least fuel.
“From a business point of view, that’s going to save us $40m in costs each year by a one or two per cent improvement on these flight plans,” Joyce told an audience in Sydney this morning.
“We did a Sydney to Santiago flight – we saved one tonne of fuel on that flight by getting a better flight path. And we’re going to reduce our carbon emissions because of that system by around 50 million kilograms each year,” he added.
The “numbers are massive” and will have a “huge impact on our business,” Joyce said at Amazon Web Services’ Sydney conference.
The system, which Qantas group chief technology officer Rob James described as a “new flight planning algorithm”, was developed in partnership with the Australian Centre for Field Robotics at the University of Sydney.
Constellation, understood to have been around five years in the making, will be rolled out to all aircraft by the end of the year.
Qantas’ fuel bill for last year was $3.2 billion, making a one per cent saving hugely significant.
“One of the most expensive parts of flying is fuel. It’s also the most volatile cost to an airline. Anywhere between 20 and 40 per cent of an operating cost for an airline…Every time one of those fully loaded A380s take off we are pumping about seven litres into those engines every second,” James said.
“That’s like a fire hose of fuel going into those engines. What you want to do is fly those things as efficiently as possible,” he added.
The system – which runs on AWS – sometimes picks routes a human operator would not, for example going miles off course to pick up a time-saving tailwind.
“We can choose to take a less turbulent path, or we can pick up a tailwind to get you to your destination much sooner,” James said.
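The route-selection idea described here — score many candidate paths and keep the cheapest — can be sketched in a few lines. This is an illustrative toy, not Qantas' Constellation algorithm: the cruise speed, burn rate, and route numbers below are all invented.

```python
# Illustrative toy only -- not Qantas' Constellation algorithm. Scores
# each candidate route by estimated fuel burn, where a tailwind raises
# ground speed and a headwind lowers it. All numbers are invented.

def estimated_fuel_kg(ground_km, tailwind_kmh,
                      cruise_kmh=900.0, burn_kg_per_h=7000.0):
    """Time aloft = distance / ground speed; burn rate assumed constant."""
    ground_speed = cruise_kmh + tailwind_kmh   # headwind = negative tailwind
    return ground_km / ground_speed * burn_kg_per_h

def best_route(candidates):
    """candidates: (name, distance_km, avg_tailwind_kmh) tuples.
    Returns the (name, fuel_kg) pair with the lowest estimated burn."""
    return min(((name, estimated_fuel_kg(d, w)) for name, d, w in candidates),
               key=lambda pair: pair[1])

routes = [
    ("great-circle", 11_000, -30),     # shortest, but into a headwind
    ("southern-detour", 11_600, +80),  # longer, but rides the jet stream
]
name, fuel = best_route(routes)
print(name, round(fuel))  # -> southern-detour 82857
```

Even this crude model shows why the longer route can win: the detour's tailwind more than pays for the extra distance, which is exactly the kind of counterintuitive choice the article describes.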
Flying towards the future
James also revealed the airline was using another simulator engine – dubbed QuadraX – which simulates how aircraft will perform in different conditions and configurations over a 15 year period.
“Apart from fuel being so expensive, the aircraft themselves are very expensive assets to purchase. And when you buy one that’s a ten to 15 year commitment so you want to get that right,” James said.
QuadraX allows airplane purchasers to simulate the assets’ performance on different flight paths, with different engine configurations and usage patterns and in different weather and traffic conditions.
“Every time we run one of these simulations we’re basically doing a ten year study in about nine hours,” James said.
The analysis is compute heavy and done in the AWS cloud.
“We spin up approximately 4,000 CPUs, run the analysis and then spin it right down to nothing,” James said.
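The "spin up, run, spin down" pattern James describes maps naturally onto an ephemeral worker pool. Below is a toy stand-in: the simulation function and engine configs are invented, and a thread pool stands in for the roughly 4,000 cloud CPUs a real run would use.

```python
# Toy stand-in for the "spin up, run, spin down" pattern described
# above: fan a parameter sweep out across a worker pool, gather the
# results, and release the workers. The simulation is invented.
from concurrent.futures import ThreadPoolExecutor

def simulate_decade(engine_config):
    """Pretend ten-year simulation: returns (config, pseudo cost score)."""
    wear = sum(ord(c) for c in engine_config) % 100
    return engine_config, 1000 + wear

def run_sweep(configs, workers=8):
    # The pool exists only for the duration of the with-block --
    # "spin up ... run the analysis and then spin it right down".
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_decade, configs))
    return min(results, key=lambda r: r[1])   # cheapest configuration

best_config, score = run_sweep(["engine-A", "engine-B", "engine-C"])
print(best_config)
```

In a real cloud deployment the pool would be a fleet of instances torn down after the batch completes, which is what keeps a nine-hour, ten-year study affordable.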
The CTO said cloud would become an even bigger element of Qantas’ operations in coming years, driven by the increasing amount of data from the airline’s assets.
“We’ve got these new Dreamliners, 787s, and every time they land we download half a terabyte of sensor data that we can analyse; every time they fly. And that is dwarfed in comparison to what we expect in the future. The next generation of aircraft are going to give us a petabyte of data every time they fly, from the engines alone,” he said.
The company has a roadmap to move most compute workloads, including mainframes and mid-range systems to the cloud by 2021.
“A pretty bold goal for us,” James said.
The username and password combination has been with us for a long time, but we're increasingly seeing its shortcomings for protecting sensitive data.
A new survey of 24,000 consumers across six continents by technology services and consulting company Accenture reveals that 60 per cent of consumers find passwords cumbersome and more than three-quarters worldwide would be open to using alternatives.
"The widespread practice of typing usernames and passwords to log on to the Internet might soon become obsolete," says Robin Murdoch, managing director of Accenture's Internet and Social business segment. "Consumers are increasingly frustrated with these traditional methods because they are becoming less reliable for protecting their personal data such as email addresses, mobile phone numbers and purchasing history".
Users in China and India are most likely to be open to alternatives, at 92 per cent and 84 per cent, respectively. More than three-quarters (78 per cent) of consumers in each of Brazil, Mexico and Sweden, and 74 per cent in the United States, are also willing to consider security methods other than usernames and passwords.
The survey also shows a general lack of faith in the security of personal data. Fewer than half (46 per cent) of consumers globally are confident in the security of their information. Those in emerging countries are slightly more confident in the security of their personal data than were those in developed nations, at 50 per cent and 42 per cent, respectively.
"As hackers use more-sophisticated and less-obvious methods, passwords are no longer seen as the definitive answers to the security question," Murdoch adds. "Traditional one-step passwords are now being matched with alternative methods using biometric technologies such as fingerprint recognition and two-step device verification. Within the next few years we are likely to see many more consumers embracing these and other alternative methods".
You can read more in the full report Digital Trust in the IoT Era, which is available to download from the Accenture website.
When a Layer 7 protocol uses TCP for transport, the TCP payload can be segmented for various reasons, such as application design, maximum segment size (MSS), TCP window size, and so on. The application-level gateways (ALGs) that the firewall and NAT support do not have the capability to recognize TCP fragments for packet inspection. vTCP is a general framework that ALGs use to understand TCP segments and to parse the TCP payload.
vTCP helps applications like NAT and Session Initiation Protocol (SIP) that require the entire TCP payload to rewrite the embedded data. The firewall uses vTCP to help ALGs support data that is split between packets.
When you configure firewall and NAT ALGs, the vTCP functionality is activated.
vTCP currently supports Real Time Streaming Protocol (RTSP) and DNS ALGs.
TCP Acknowledgment and Reliable Transmission
Because vTCP resides between two TCP hosts, a buffer space is required to store TCP segments temporarily before they are sent to other hosts. vTCP ensures that data transmission occurs properly between hosts. vTCP sends a TCP acknowledgment (ACK) to the sending host if vTCP requires more data for data transmission. vTCP also keeps track of the ACKs sent by the receiving host from the beginning of the TCP flow to closely monitor the acknowledged data.
vTCP reassembles TCP segments. The IP header and the TCP header information of the incoming segments are saved in the vTCP buffer for reliable transmission.
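The reassembly step can be illustrated with a small sketch: buffer segments keyed by sequence number, then emit the payload once it is contiguous from the starting sequence number. This is not Cisco's vTCP implementation, just the general idea.

```python
# Minimal illustration of the reassembly idea (not Cisco's vTCP code):
# buffer segments keyed by TCP sequence number, then emit the payload
# in order once it is contiguous from the starting sequence number.

def reassemble(segments, start_seq):
    """segments: iterable of (seq, payload) pairs, possibly out of order.
    Returns (contiguous_payload, leftover) where leftover holds any
    segments stranded behind a gap in the sequence space."""
    buffer = dict(segments)
    out = bytearray()
    seq = start_seq
    while seq in buffer:
        chunk = buffer.pop(seq)
        out += chunk
        seq += len(chunk)           # next expected sequence number
    return bytes(out), buffer

payload, pending = reassemble(
    [(1003, b"lo "), (1000, b"hel"), (1006, b"world")], start_seq=1000)
print(payload)  # -> b'hello world'
```

Anything left in `pending` models the segments a real implementation would hold (and ACK selectively) until the missing bytes arrive.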
vTCP can make minor changes in the length of outgoing segments for NAT-enabled applications. vTCP can either squeeze the additional data into the last segment or create a new segment to carry the extra data. The IP header and TCP header content of the newly created segment is derived from the original incoming segment. The total length of the IP header and the TCP header sequence numbers are adjusted accordingly.
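That sequence-number adjustment can likewise be sketched: once an ALG rewrite grows or shrinks a segment's payload, every later outgoing sequence number must shift by the accumulated delta. The function and field names below are invented, and returning ACKs (which must shift the opposite way) are omitted for brevity.

```python
# Hedged sketch of the sequence-number bookkeeping described above --
# not Cisco's implementation. Each payload rewrite records how many
# bytes it added or removed; segments starting after that point carry
# the accumulated length change.

def translate_seq(seq, rewrites):
    """rewrites: list of (rewrite_seq, delta_bytes) for payload edits
    already applied earlier in the stream."""
    return seq + sum(delta for at, delta in rewrites if at < seq)

edits = [(5000, +4)]            # an ALG grew one segment by 4 bytes
print(translate_seq(4000, edits), translate_seq(6000, edits))  # -> 4000 6004
```

Segments before the rewrite point pass through untouched, while everything after it is renumbered so the far-end host sees a consistent byte stream.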
With election day right around the corner, I started to think about what it means to be a conservative versus a liberal and how that impacts one's world view. At the most basic level, conservatives want to support continuity. In general they're happy with the status quo and want to maintain it. Conservatives act as society's buffer for a changing world. Liberals, on the other hand, embrace change. By nature, they're not satisfied with the way things are and want to find new ways to do things. Liberals are the opportunists that move a society forward.
Before I go any further, I have to admit that I'm actually equating liberalism with progressivism — a popular notion, but technically inaccurate. Liberalism is actually more about civil liberties and individual rights. But for the purposes of this discussion we're talking about liberal behavior, not ideology.
In fact, while we normally associate liberalism and conservatism with politics, they really represent socio-cultural perspectives that map to all human endeavors. This includes how we develop and use technology, especially computer technology. I'll get to this shortly. But first let's look at the demographics.
According to the Pew Research Center, in their latest poll among registered voters, conservatives outnumber liberals by a two-to-one margin — 38 percent conservative versus 19 percent liberal. Another 38 percent identify themselves as moderates. By party, 32 percent of Democrats describe themselves as liberals, 23 percent describe themselves as conservatives and 41 percent categorized themselves as moderates. The respective numbers for Republicans are 5 percent, 64 percent and 29 percent.
One of the unfortunate effects of the concentration of liberals in the Democratic party and conservatives in the Republican party is the open warfare that now exists across this philosophical divide. In this culture of two-party tribalism, each party strives to annihilate the other. I say unfortunate because it's detrimental to have a political system dominated by either liberalism or conservatism. As can be deduced from my opening paragraph, I believe both philosophies have important roles to play in politics.
And not just in politics, but in technology as well. I've come to see three basic philosophical approaches to science and engineering, reflected by these common utterances:
- “If it ain't broke, don't fix it.” (techno-conservative)
- “Let me tweak it a little bit.” (techno-moderate)
- “We need a whole new paradigm.” (techno-liberal)
In the technology arena, liberalism maps to innovation and risk-taking; conservatism to standardization and risk avoidance. Without innovation, progress stops. But too much causes chaos and confusion (think the 60s). On the other hand, without standardization and technological continuity, getting anything done becomes impractical. But too much limits choice and kills innovation (think the 50s). Obviously, what we need is some sort of balance.
Do we have balance now? Depends on who you are.
If you're a techno-conservative in IT, everything is moving too fast. Server OEMs are in constant motion trying to keep pace with the latest processor and OS releases. On the software side, ISVs struggle to keep up as new hardware platforms are developed and older ones die off. In the 1980s, Digital Equipment Corporation (DEC), at the time the second-largest computer company in the world, was swept away when its proprietary VAX/VMS and PDP-11/RSX-11M systems became obsolete after the rise of standard Unix platforms and commodity PCs. Today, Microsoft is trying to reinvent itself as the desktop platform becomes subordinate to the Internet platform. Yes, being a techno-conservative has its downsides.
On the other hand, if you're a techno-liberal in IT, the future can't come soon enough. For those who see the true promise of the Web for multi-media and as a general platform for application software, the Internet is still far too slow and primitive. If you're a neurobiologist who wants to simulate the human brain at the molecular level, it's going to be pretty frustrating until you get the right software tools and the petaflops to back them up. Beyond that, Ray Kurzweil, the ultimate tech progressive, envisions molecular computing and spiritual machines to provide a complete redefinition of the human condition. If being a techno-conservative is stressful, being a techno-liberal is discouraging.
You might think that the high-tech crowd, especially the IT industry, would be dominated by liberal-thinkers, since innovation is the foundation of engineering. But that's probably only true for areas that haven't been commercialized yet, like nano-engineering. Today, most computer technology is so well integrated into the commercial realm that the conservative tendencies of businesses drive information and computer technology. Earlier this year in our sister publication, GRIDtoday, Tom Gibbs observed that IT invests around 15 percent in innovation. He thought that was dangerously low, but it probably reflects the natural conservatism of commercial enterprises. And even at this level of investment, IT has delivered enormous value to the economy and has been the driving force behind rising productivity for decades. For better and for worse, commercial IT buffers the rate of innovation.
In our world — high performance computing — it's easy to see the philosophical divisions that define the community. Toward the conservative end of the spectrum we have commodity clusters, the x86 ISA, Ethernet, Linux and the reams of code written in Fortran, C and MPI. On the liberal end we have multi-core processors, hardware accelerators (FPGAs, FP coprocessors, GPUs), InfiniBand, new parallel programming languages, national petascale programs, heterogeneous computing and HPC research.
At last week's HPC User Forum in Manchester, the tension between techno-conservatism and techno-liberalism was in evidence. Just some examples:
- IDC's Addison Snell reported that about one-third of HPC users are looking at accelerator processors, mainly FPGAs.
- Numerical Algorithms Group's Ian Reid said the days of the single-core treadmill are over, and multi-core is here to stay. He said that this creates major software issues that will force us to move to hybrid hardware architectures. But software portability must be maintained. He also noted there is considerable concern about whether the HPCS languages will deliver on their promises.
- Intel's Stephen Wheat talked about the return of hyper-threading, 80-core teraflop chips and silicon photonics, but lamented that the impending petaflop hardware will arrive before the software is ready.
- Paul Muzio, AHPCRC/NCSI, stressed that GM, Dassault and many other major companies are still using Fortran. He maintained that businesses won't throw out those huge investments. The applications proposed for petascale computing are not the ones companies or the defense establishment will invest in. But there are opportunities for languages like Fortran to evolve, such as Co-Array Fortran.
- Andy Grant said that IBM is starting to see requirements for accelerators in procurements. IBM is installing a large Opteron cluster with ClearSpeed boards at the University of Bristol. He also talked about the Blue Gene/L successors — Blue Gene/P (for petaflop), followed by Blue Gene/Q (10 petaflops).
- Manchester's Andrew Jones chaired a panel on whether programming model changes are needed for petaflops computing. Jones noted that scaling to 1,000 processors on homogeneous architectures is difficult today. Petascale and exascale computing will involve many more threads than today, and possibly heterogeneous architectures. He believes we may need a new programming paradigm.
It's actually gratifying to see this tension in HPC. The real danger would be to descend into techno-Democrats and techno-Republicans. As long as the conservatives don't slow innovation too much and the liberals don't send us into chaos, the community will move forward.
I'm Michael Feldman and I approve this message.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
As organizations continue to grow, so will their technology requirements. Increasing power demands will call for more electrical outlets. To meet the growing demands of technology, business organizations will add IDFs (Intermediate Distribution Frames) and expand MDFs (Main Distribution Frames), which means multiple devices requiring multiple outlets. Enter Power Distribution Units, or PDUs.
What is a PDU?
A PDU distributes reliable network power to multiple devices. It does not generate or condition power but delivers AC power from an uninterruptible power supply (UPS), a generator, or utility power source to servers, networking hardware, telecom equipment, and other devices.
Why use a PDU?
In its basic form, a PDU does the same job as a power strip. It uses current from a single source, usually a wall outlet, to power multiple devices, such as computers, peripherals, and networking devices. However, PDUs are designed for installation in equipment racks, maintaining power within reach of rack mounted devices such as servers, switches, routers, or cooling fans.
PDUs are commonly used in data centers, network closets, VoIP phone systems, and industrial environments.
How to pick the right PDU
To find the right model for your needs, ask yourself the following six questions.
- Where will I install it?
- What kind of input power do I have?
- How much power does my equipment need?
- How many outlets do my devices need?
- What kinds of plugs do my devices have?
- Do I need other features?
PDU models are typically offered in 1U or 2U heights for horizontal mounting. They can also be mounted vertically, known as 0U. Depending on the model, they can be mounted in a rack enclosure, on a wall, or under a shelf.
Horizontal PDUs are designed for mounting in a standard 19-inch equipment racks. They can be placed above, below, or between the components they power.
Vertical PDUs look like tall power strips. They fit on the upright rails of a rack enclosure and do not take horizontal mounting space away from other equipment.
Choosing the right PDU will depend on each device's input plug. Single-phase power alternates between positive and negative voltage; in the United States, the rate is 60 cycles per second. That means the wave has zero voltage every time it moves from positive to negative and back. Most household and office power is single-phase and works with standard receptacle (outlet) types.
How many outlets do devices need?
The PDU chosen should have at least as many outlets as the number of plugs that are needed to connect. If a device has more than one plug, or if one device must be plugged into another, adjust your count. Remember to leave room for more devices that may be added in the future.
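The outlet-and-load arithmetic can be reduced to a rough helper. This is illustrative only — check the manufacturer's derating guidance before buying — and the 80% continuous-load figure used here is a common rule of thumb, not a universal requirement.

```python
# Back-of-the-envelope PDU sizing helper -- illustrative only. The
# device list, outlet counts, and wattages below are invented, and the
# 80% headroom factor is a rule of thumb, not a specification.

def pdu_fits(devices, pdu_outlets, pdu_capacity_watts, headroom=0.8):
    """devices: list of (plug_count, watts) per device. A PDU 'fits'
    when it has enough outlets and the total draw stays under the
    derated capacity."""
    plugs = sum(p for p, _ in devices)
    watts = sum(w for _, w in devices)
    return plugs <= pdu_outlets and watts <= pdu_capacity_watts * headroom

rack = [(1, 450), (1, 450), (2, 300), (1, 120)]  # servers, switch, fans
print(pdu_fits(rack, pdu_outlets=8, pdu_capacity_watts=2000))  # -> True
```

Running the same check with fewer outlets or a smaller PDU immediately shows where the bottleneck is, which is exactly the kind of future-growth headroom the paragraph above recommends leaving.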
A basic PDU shares a single power source with multiple devices. It simplifies the management of rack equipment and makes a valuable addition to any IT installation. Advanced models do much more, such as protecting against downtime and allowing remote control through software.
Basic PDU distributes unfiltered AC power from a UPS system, generator, or utility source to multiple connected devices.
Switched PDU distributes network-grade power and allows users to control outlets individually or collectively. A digital meter provides information on load and voltage and supports local control; an SNMP connection enables remote monitoring and control.
Ready to choose a PDU?
Laptop Security – Easy to Use Security Apps to Protect Money, Data, Photos, Messages, email and Banking Apps on a Laptop Right Now
Recently the US Department of Homeland Security issued a warning to US citizens about the increase in a type of malicious cyber attack involving Emotet malware. This malware was developed by an organized hacking group to steal banking credentials. The cyber attacks focus on private and public sector businesses as well as small governments.
Emotet infects computers and quickly spreads to all the computers connected to the same internet connection. Hackers send spam emails with harmful attachments that download the malware to your computer. It works quickly to steal personal information and money.
Emotet is also involved in even more email scams. The new Coronavirus has presented an opportunity for hackers to play on people's fears and trick them into downloading malware that can steal information. Unfortunately, as with many natural disasters and world crises, hackers are spamming people with Coronavirus-themed emails. The spam campaigns contain Coronavirus-related content with links and attachments designed to look like they come from the US Centers for Disease Control. Some phishing emails contain links to malicious websites. Unsuspecting recipients looking for information about the Coronavirus end up losing money or having their identities stolen.
Use Antivirus Software
Hackers send computer viruses, malware, spyware, ransomware and other types of malware through thousands of spam email phishing campaigns. If an unsuspecting email recipient downloads an innocent-looking email attachment, such as a Microsoft Word file, an Adobe PDF document, or a zip file that is malware in disguise, they may inadvertently launch a malware attack.
Other spam campaigns send emails with links to malicious websites. The links send the reader to malicious websites that phish for passwords or launch a malware download.
Sometimes one piece of malware downloads additional malware programs, causing further damage to the infected device.
Antivirus programs can detect emails that contain harmful attachments or links to malicious websites. A quality antivirus program is updated by its developer with the latest information about malware attacks, malicious websites, and information about spam emails hackers are sending. An antivirus program is also capable of blocking your computer or phone from visiting websites that are known to steal passwords or money from visitors.
It’s best to use a quality antivirus program that is always updated and works on all of your computers and phones. With the latest information and libraries about malware, viruses, and spam that can harm your computer, hack into your information, or steal your money a reputable antivirus app will help protect you from harm.
Learn About Data Privacy
People understand that their information is being farmed from social media sites like Facebook and Instagram, but many don't truly understand how it's being used by advertisers. Companies that want you to buy their products aren't the only ones who want your sensitive information, though. Hackers steal social media passwords sent across public WiFi and use them to break into more valuable online accounts like banks and credit cards.
With a basic understanding of data privacy, you’ll learn how something seemingly innocuous like posting a vacation photo or your hometown can lead to your bank account being hacked or your credit card number stolen.
To learn the basics of cyber security and data privacy, try one of these courses.
- IBM Data Science Professional Certificate by IBM
- Mathematics for Machine Learning by Imperial College London
- Data Mining by University of Illinois
Update Your Software
This is one of the easiest things you can do to protect your laptop, computer, router, or smartphone, and it's usually free. Be sure the operating system of each device you own is updated with the latest security patches. Apps should be kept up to date as well.
Set your device to accept automatic updates so you’ll have them as soon as they are available. I have my phone set to accept the update only when I’m connected to WiFi to save on data charges for my phone. Updating apps promptly will fix known security vulnerabilities and bugs. Although it’s true that some updates only bring new features and sometimes not for the better, many updates fix bugs and security issues that hackers could use to break into your device and steal information or money.
Use Two-factor Authentication
Use two-factor authentication (2FA) where it’s available. Two-factor authentication requires a second step to log into an online account, an app, or a device. An example of two-factor authentication is using a free app like Google Authenticator to require confirmation of a login attempt. Other 2FA methods include responding to an email or using a USB key fob as credentials.
Many online accounts and services can be set to require a response to an email or SMS text message to login to confirm your identity. Enabling 2FA ensures the person logging in is really you. It adds an extra layer of account security.
Two-factor authentication must be configured in advance, with a second form of authentication registered before you need it.
To take security one step further, enable more multi-factor authentication if available. As the name implies, it requires more than two actions to log in to an app, device, or online account.
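To demystify what an authenticator app like Google Authenticator actually does under the hood, here is a minimal TOTP (RFC 6238, built on RFC 4226's HOTP) sketch using only the Python standard library. Real deployments should use a vetted library and handle clock skew, rate limiting, and secure secret storage.

```python
# Minimal TOTP sketch (RFC 6238 over RFC 4226) -- for illustration
# only; use a vetted library in production.
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Derive the one-time code for the given Unix time (default: now)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share the secret once; afterwards both derive the
# same code per 30-second window, so a stolen password alone fails.
RFC_TEST_SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(RFC_TEST_SECRET, at=59))  # -> 287082 (RFC 6238 test vector)
```

Because the code changes every 30 seconds and depends on a shared secret that never travels over the network at login time, an attacker who phishes your password still cannot complete the second step.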
Use Biometric Login
Devices and apps with biometric login use fingerprint scans or facial recognition to grant access. Biometrics are the best way to protect your device and all the information it contains from hackers or anyone else who might try to access your device. For example, if you lose your phone, the person who finds it might be able to see your photos and open your banking apps if the information is not protected. Anyone who accesses your email without permission can use it to reset the password on any account connected to that email address. When you secure access to your phone or laptop with biometric login, all of your private information is protected.
To use biometric login, you must have a device that supports it. In January 2020 Microsoft stopped supporting their Windows 7 operating system. There are no more security patches and no technical support available. Although this is a hassle for Windows 7 users, it may get them to upgrade their laptops and computers.
Even though it might cost you more, many devices that come with Windows 10 preinstalled cost less than buying a standalone Windows 10 operating system upgrade. Windows 10 has the latest security features, including facial recognition and fingerprint scans. Most newer phones also support fingerprint scans and facial recognition for login credentials. This is the most secure way to protect all the information on your device and protect your money.
Use a Password Vault
If you don’t want to spend the money to upgrade to a device that supports biometric login like fingerprint scans and facial recognition but want more security for your online accounts, try a password vault.
Hackers begin a campaign to break into sensitive accounts like banking websites and credit cards by sending phishing emails or scraping public information from social media accounts. The average person reuses the same password across multiple online accounts. If you're using the same password for your email as you are for social media, a hacker may be able to use your credentials to steal your money from another online account. If a hacker can gain access to your email, they can use it to reset passwords on other more important online accounts. That is why it's important to create a unique password for every online account and app that you use.
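Acting on that advice is straightforward with the standard library: generate a distinct random password per account instead of reusing one. The alphabet and length below are arbitrary choices, and the plain dict only stands in for the encrypted storage a real password vault provides.

```python
# Generate a distinct cryptographically random password per account.
# Alphabet and length are arbitrary illustrative choices.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=16):
    """Password drawn with a CSPRNG, suitable for storing in a vault."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A real password manager encrypts this mapping behind a master
# password; a plain dict is only for illustration.
vault = {site: new_password() for site in ("email", "bank", "social")}
print(all(len(p) == 16 for p in vault.values()))  # -> True
```

With a unique password per site, a breach of one account no longer hands an attacker the keys to your email, bank, and everything else.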
What Is the Meaning of Egress?
In common usage, egress is the process of data leaving a network and transferring to an external location. Data egress is a normal form of network activity, but it poses a threat to organizations if it exposes sensitive data to unauthorized or unintended recipients.
Egress happens whenever data leaves an organization’s network, be it via email messages, as uploads to the cloud or websites, as a file transferred onto removable media like Universal Serial Bus (USB) drives and external hard drives, or through File Transfer Protocol (FTP) or Hypertext Transfer Protocol (HTTP) transfers.
Data Egress vs. Data Ingress
Another way to define egress is the process of data being shared externally via a network’s outbound traffic. When thinking about ingress vs. egress, data ingress refers to traffic that comes from outside an organization’s network and is transferred into it. It is unsolicited traffic that gets sent from the internet to a private network. The traffic does not come in response to a request made from inside an organization’s network.
Egress traffic is a commonly used term that describes the amount of traffic that gets transferred from an organization’s host network to external networks. Organizations can monitor egress traffic for anomalous or malicious activity through egress filtering. This enables businesses to block the transfer of sensitive data outside corporate networks, while limiting and blocking high-volume data transfers.
Threats Related to Data Egress
Data egress presents many threats to organizations, especially if data is shared externally with unauthorized recipients. Sensitive or proprietary data and high-value personal data are highly lucrative and targeted by cyber criminals, nation-state hackers, and even organizations’ competitors.
Bad actors can use data exfiltration techniques that enable them to intercept, steal, or snoop on networks and data in transit, which can result in data loss or leakage. These techniques include the spread of malware, such as backdoor Trojans, or using social engineering to disguise attacks as regular network traffic.
These threats typically involve commonly used tools that organizations access every day, such as email, USB drives, or cloud uploads. More advanced and stealthy methods of intercepting data egress include the encryption of modified data before it is exfiltrated and using techniques to mask the attacker’s location and traffic.
A major risk that data egress poses to organizations is insider threat, which can be either malicious or accidental. A malicious insider threat involves an organization’s own employee stealing corporate data with the intent to harm the company by giving or selling that data to a hacker, third party, or competitor. Accidental insider threats occur if employees inadvertently send data to an unauthorized recipient or disable a security control.
Best Practices for Data Egress Management
Data egress management is reliant on discovering where an organization’s sensitive data is stored and where it leaves the network. This is a process referred to as network monitoring and data discovery and is crucial to securing the data egress points in an organization’s system.
Best practices to achieve this include:
- Create a data egress enforcement policy: Organizations must create and follow a data egress enforcement policy that outlines what constitutes acceptable use of data. This policy must be extremely thorough and outline how the company protects its resources, provide a list of internet-accessible services that are approved for use, and detail guidelines for how employees should access and handle sensitive data.
- Monitor networks: The first step to ensuring secure data egress is to monitor what is happening on an organization’s network. This not only enables an organization to know which users and devices are active on its network but also detect any suspicious activity. Network monitoring also allows organizations to measure crucial metrics like availability, response time, and uptime.
- Deploy an effective firewall: Firewalls are network gatekeepers that enable an organization to securely manage data egress and ingress. Many data breaches have occurred because lax egress rules allowed intruders to access and exfiltrate data without the company even knowing an attacker had been active in its network.
- Implement firewall rules: Deploying an effective network firewall is a good first step, but it also needs to be configured with appropriate rules that enable it to detect, monitor, and block unauthorized data egress. Effective firewall rules will allow an organization to block data egress to unauthorized locations and malicious individuals.
- Deploy firewall logging: Egress and ingress data traffic must be logged to manage and protect against malicious activity. Firewall logging enables organizations to analyze their network traffic through security information and event management (SIEM) solutions. Using these tools, they can compile, correlate, and manage data from across their networks and systems, and if set up effectively, these same solutions will help prevent unauthorized data exposure.
- Protect sensitive data: Organizations must identify their sensitive data and assign it with classification tags that dictate the level of protection it requires. This process, known as data classification and data discovery, enables an organization to identify, classify, and apply appropriate protective measures to their most sensitive data. Businesses need to locate, identify, and organize their sensitive data before they can decide what level of protection they need and who they allow to access specific data and resources.
- Deploy data loss prevention: Using this data classification knowledge, organizations can then deploy data loss prevention (DLP) tools to safeguard their sensitive data. DLP applies policy-based protection, such as blocking unauthorized actions or data encryption, to protect sensitive data. Combining DLP with data classification and data discovery ensures organizations have a full picture of the sensitive data they have, where it is stored, and how it is protected from unauthorized exposure and loss.
- Control access to data: Simply protecting data is a good start to preventing data egress, but it is also key to controlling who has access to data, networks, and resources. To do this, organizations should implement and follow an authorization policy, which ensures every device that connects to a network is approved before it can join.
- Prepare an incident response plan: In case a data breach or data leak does occur, organizations need to have a preplanned response in place. A well-developed incident response plan that provides repeatable future actions and outlines which individuals are responsible for necessary actions is one of the best ways to protect a company from attack. It enables organizations to minimize the damage a cyberattack causes and mitigate the threat as quickly as possible. A solid incident response plan also includes investigating what happened, which is crucial to learning from the attack and preparing for future events.
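The default-deny egress rule logic described in the firewall best practices above can be sketched in a few lines. This is a toy model, not a real firewall configuration, and the approved destinations and ports are hypothetical:

```python
# Toy model of allowlist-based egress filtering: every outbound
# connection is checked against explicitly approved destinations.
# The rule set below is hypothetical, for illustration only.

APPROVED_EGRESS = {
    ("updates.example.com", 443),   # approved patch server (hypothetical)
    ("mail.example.com", 587),      # approved SMTP relay (hypothetical)
}

def allow_egress(dest_host: str, dest_port: int) -> bool:
    """Default-deny: permit traffic only to explicitly approved endpoints."""
    return (dest_host, dest_port) in APPROVED_EGRESS

assert allow_egress("updates.example.com", 443)       # approved destination
assert not allow_egress("attacker.example.net", 443)  # blocked by default
```

The key design choice is the default-deny posture: anything not on the allowlist is blocked, which is what prevents data from leaving toward unauthorized locations.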
How Fortinet Can Help
Fortinet helps organizations protect their networks, users, and resources with its next-generation firewalls (NGFWs). These advanced firewalls filter network traffic from external threats to data egress, as well as internal threats such as malicious insiders. The Fortinet NGFWs provide key firewall features, such as packet filtering, network monitoring, Internet Protocol security (IPsec), and secure sockets layer virtual private network (SSL VPN) support. They also offer deeper content inspection features that enable organizations to identify and block malicious activity, malware, and other cyberattack vectors.
Fortinet NGFWs support future updates, allowing them to evolve in step with the modern security landscape. This ensures organizations and their data are always protected from the latest cyberattacks targeting data egress.
2021 saw an outbreak of ransomware groups and attacks that affected every major industry across the globe. This trend is expected to continue and even surpass the previous year’s numbers by a significant margin in 2022.
In March 2022, researchers detected a new ransomware strain known as Pandora, which leverages double extortion tactics to exfiltrate and encrypt large quantities of personal data. The operators offer the decryption key once the victim pays the ransom demanded. Pandora ransomware is a relatively new operation, and hence its infection techniques are unknown.
However, after infiltrating the target system, the ransomware appends the “.pandora” file extension to the encrypted files and leaves a ransom note “Restore_My_Files.txt” with instructions on how to recover the data. Researchers believe that the Pandora ransomware is a rebranded version of Rook ransomware, which in turn is a spawn of the leaked Babuk code. This article explores the technical analysis of the Pandora ransomware, its evasion tactics, the process of encryption, and more in detail.
The analysis of Pandora’s binary file sample, 5b56c5d86347e164c6e571c86dbf5b1535eae6b979fede6ed66b01e79ea33b7b, indicates that it is a UPX (Ultimate Packer for eXecutables) packed binary file. UPX is an executable file compressor used by threat actors to add a layer of obfuscation (creation of code that is difficult for humans to understand) to their malware. The ransomware code runs from the original entry point after getting unpacked in the memory.
The ransomware uses obfuscated strings and deobfuscates library names and internal functions at runtime. The library modules used by Pandora are dynamically loaded on a per-use basis via the following APIs:
Initially, the ransomware creates a mutex (mutual exclusion object, which enables multiple program threads to take turns sharing the same resource) to make sure only one instance of the malware is running on the system. The mutex string, “ThisIsMutexa”, gets deobfuscated in the memory. It checks for any existing mutex on the system via OpenMutexA, if not present the malware creates a new one with the value “ThisIsMutexa” via CreateMutexA.
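The open-then-create mutex logic can be modeled in a few lines. This is a toy simulation of the Win32 behavior, not actual malware code: the OS-wide mutex namespace is stood in for by a plain set, and `open_mutex`/`create_mutex` play the roles of OpenMutexA/CreateMutexA.

```python
# Toy model of the malware's single-instance check: try to open an
# existing named mutex first; create it only if it does not exist.
# A set stands in for the OS-wide mutex namespace.

_MUTEXES = set()

def open_mutex(name: str) -> bool:
    """Analog of OpenMutexA: succeeds only if the mutex already exists."""
    return name in _MUTEXES

def create_mutex(name: str) -> None:
    """Analog of CreateMutexA: registers the named mutex."""
    _MUTEXES.add(name)

def should_run(name: str = "ThisIsMutexa") -> bool:
    """Run only if no other instance already holds the mutex."""
    if open_mutex(name):
        return False          # another instance is already running
    create_mutex(name)
    return True

assert should_run() is True    # first instance proceeds
assert should_run() is False   # a second instance would exit
```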
The malware implements anti-debug checks to hinder analysis.
- The code highlighted in the image above reads data at the offset 0x60 from segment register GS. Windows stores the Thread Information Block (TIB) in FS [x86] and GS [x64] segment registers.
- The TIB holds the Process Environment Block (PEB) at the offset 0x60. The malware accesses PEB of the process via the GS register.
- Later the malware reads the data at the offset 0x2 in PEB (ds:[rsi+2]), which is the BeingDebugged member in the PEB structure, and then compares the obtained value with 0. If the process is being debugged then BeingDebugged will have a non zero value. If the test fails, the malware goes into an infinite loop and does not proceed further.
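A toy model of this check, assuming only the offsets described above (PEB reachable at GS:[0x60], BeingDebugged at PEB+0x2). The PEB byte blob here is fabricated for illustration; real code reads the live structure through the GS segment register.

```python
# Toy model of the PEB BeingDebugged anti-debug check: the malware reads
# the byte at offset 0x2 of the Process Environment Block and treats any
# non-zero value as "a debugger is attached".

PEB_SIZE = 0x10  # only the first few fields matter for this check

def make_peb(being_debugged: int) -> bytes:
    """Fabricate a minimal PEB-like blob for illustration."""
    peb = bytearray(PEB_SIZE)
    peb[0x2] = being_debugged      # BeingDebugged member lives at +0x2
    return bytes(peb)

def debugger_present(peb: bytes) -> bool:
    return peb[0x2] != 0

assert debugger_present(make_peb(1)) is True   # debugged -> infinite loop
assert debugger_present(make_peb(0)) is False  # clean -> continue execution
```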
The security endpoints (especially ETWTi) of a device use the instrumentation callback process to check for behavioral anomalies and detect novel malware on the system. Pandora ransomware bypasses this callback mechanism via NtSetInformationProcess, which changes the process information.
- NtSetInformationProcess is invoked with ProcessInstrumentationCallback as the ProcessInformationClass.
- The third argument in the above image is a 0x10-byte (16-byte) structure associated with the provided ProcessInstrumentationCallback information class.
- The members and associated values in the structure are as follows:
- Version=0 (0 for x64, 1 for x86)
If the process created for the malware is hooked by security services via the callback member, invoking NtSetInformationProcess as described above with the callback set to 0 helps the malware bypass such hooks.
Event Tracing for Windows (ETW) is a powerful tracing facility built into the operating system, to monitor various activities of both userland and kernel land applications running on the system. This feature has become a vital instrument to endpoint security solutions to detect anomalous behavior in running programs. As a result, malware developers have started integrating functionalities in their malware to neutralize the tracing capability. One such vector is patching ETW related functions defined in ntdll.dll in the memory.
- The ransomware dynamically loads ntdll.dll into the memory and deobfuscates the string “EtwEventWrite”.
- The address of the EtwEventWrite function is obtained using GetProcAddress API. Getting the function address is a very important step in patching, to bypass the ETW feature.
- Before the malware commences patching, the memory protections on the region of committed pages, where EtwEventWrite resides in virtual address space, need to be changed, which is done via VirtualProtectEx API.
- The memory region of pages where the first instruction of EtwEventWrite resides is changed to PAGE_EXECUTE_READWRITE to be patched.
- The WriteProcessMemory API is used to write one byte at the beginning of the EtwEventWrite function. The second argument points to the beginning of EtwEventWrite, and the third argument is the one byte long payload that gets written at the address of EtwEventWrite.
- The one byte payload is 0xC3, which is the opcode for the instruction “ret”. This makes EtwEventWrite to simply return back to the caller function, without executing its logic to log an event when EtwEventWrite is invoked by other applications.
- After patching, the memory protection of EtwEventWrite is reverted back to the initial permission of PAGE_EXECUTE_READ via VirtualProtectEx.
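The one-byte patch can be simulated on a byte buffer. The function stub bytes below are made up; real code would locate EtwEventWrite via GetProcAddress and overwrite it with WriteProcessMemory as described above.

```python
# Toy model of the EtwEventWrite patch: overwrite the first byte of the
# function with 0xC3 (the x86-64 "ret" opcode) so any call returns
# immediately without ever logging an event.

RET = 0xC3

def patch_function(code: bytearray) -> bytearray:
    code[0] = RET   # the first executed instruction is now "ret"
    return code

# A fake function prologue (push rbp; mov rbp, rsp; nop; nop)
stub = bytearray(b"\x55\x48\x89\xe5\x90\x90")
patched = patch_function(stub)

assert patched[0] == 0xC3                       # call returns immediately
assert bytes(patched[1:]) == b"\x48\x89\xe5\x90\x90"  # body untouched
```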
Before the encryption begins, the malicious software changes the shutdown parameters for the system via SetProcessShutdownParameters API. This function sets a shutdown order for the calling process relative to the other processes in the system. Here, the malware invokes the API with zero value so that the ransomware program is the last to shut down by the Operating System.
After setting these shutdown parameters, the malware empties the recycle bin via the SHEmptyRecycleBinA API.
The ransomware raises the priority of the running process to the highest possible priority which is REALTIME_PRIORITY_CLASS via SetPriorityClass API. The second argument is the “dwPriorityClass” parameter which has a value of 0x100.
Finally, the volume shadow copies are deleted by executing a string of commands via ShellExecuteA. It uses vssadmin to perform the task of deleting the shadow files.
The main thread of the malware creates two new threads that are responsible for encrypting the user data.
The following APIs are used to create the threads:
The threads are created with dwCreationFlags set to CREATE_SUSPENDED, later the execution of threads is resumed via ResumeThread.
The main thread starts to enumerate the drives present on the system via the following APIs:
Pandora utilizes Windows I/O Completion Ports to efficiently speed up the encryption process. Following APIs are used to orchestrate the search and locking of the user data:
Initially, the main thread of the malware creates an input/output (I/O) completion port via CreateIoCompletionPort API.
- The fourth argument is “NumberOfConcurrentThreads”. In our case, two threads are allowed to concurrently process I/O completion packets for the I/O completion port.
- After the creation of the I/O port, a queue is created internally, to which threads can push the completion status.
- The two threads created previously will be accessing I/O ports to perform file enumeration and encryption on the infected system.
In general, ransomware in the wild has adopted a model to optimize the encryption process. The goal here is to efficiently utilize the power of multicore processors to concurrently perform file enumeration and encryption. A group of worker threads would fetch the file paths and post them in the queue via PostQueuedCompletionStatus, and another thread can retrieve the posted files (paths) for encryption via GetQueuedCompletionStatus.
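This producer-consumer pattern can be sketched with an ordinary thread-safe queue standing in for the I/O completion port. The file paths are fabricated and no real encryption happens; the point is the division of labor between the enumerating and the encrypting worker.

```python
import queue
import threading

# Toy model of the I/O-completion-port pattern: one worker enumerates
# file paths and posts them to a shared queue (PostQueuedCompletionStatus),
# while another retrieves them for processing (GetQueuedCompletionStatus).

completion_port = queue.Queue()   # stands in for the IOCP queue
SENTINEL = None                   # signals the consumer to stop
processed = []

def enumerator(paths):
    for path in paths:            # analog of FindFirstFileW/FindNextFileW
        completion_port.put(path) # analog of PostQueuedCompletionStatus
    completion_port.put(SENTINEL)

def encryptor():
    while True:
        path = completion_port.get()  # analog of GetQueuedCompletionStatus
        if path is SENTINEL:
            break
        processed.append(path)        # real code would encrypt the file here

files = ["C:/Users/a/doc1.txt", "C:/Users/a/doc2.txt"]
t1 = threading.Thread(target=enumerator, args=(files,))
t2 = threading.Thread(target=encryptor)
t1.start(); t2.start()
t1.join(); t2.join()

assert processed == files
```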
Pandora uses the RSA-4096 algorithm for encryption; the public key is embedded within the malware.
As a prior step to the encryption process, the malware accesses directories in the network drives and dumps the ransom note (Restore_My_Files.txt). The ransom note is created using the following three APIs:
The process explained in this section is executed by worker threads highlighted in the image below. These threads can concurrently enumerate and encrypt data via the Windows I/O completion port.
- After dumping the ransom note, the malware uses FindFirstFileW to open a handle to the files on the disk.
- The retrieved handle is checked against a set of directory names and file extensions.
- The following directories are excluded from getting locked:
- Internet Explorer
- Program Files
- Program Files (x86)
- The following files are excluded from getting encrypted:
- And the following extensions are excluded from getting locked:
- After performing exclusion checks, the absolute path of the file that passed the check is computed and then the thread calls for PostQueuedCompletionStatus to submit the path to the I/O queue previously created via CreateIoCompletionPort.
- Right after the PostQueuedCompletionStatus call, the same worker thread can resume fetching the absolute path of the next file via FindNextFileW API.
- Another worker thread can now call GetQueuedCompletionStatus to retrieve the absolute path of the target file to start encrypting the files.
- Next, the file attribute is changed via SetFileAttributesW API to FILE_ATTRIBUTE_NORMAL and then the file is fetched for encryption via the following APIs:
- After setting up the file pointer to the target data, the encryption begins by loading the public key in the memory, and the encrypted data is written to the file via WriteFile API. The file is then renamed via MoveFileExW API to append the “.pandora” extension to the encrypted file.
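The rename step can be sketched with a plain cross-platform rename standing in for MoveFileExW; the file name below is made up, and the file content is only a placeholder for encrypted bytes.

```python
import os
import tempfile

# Toy model of the final rename step: after a file's contents have been
# encrypted, the ".pandora" extension is appended via a rename.

def mark_encrypted(path: str) -> str:
    new_path = path + ".pandora"
    os.rename(path, new_path)     # MoveFileExW analog
    return new_path

with tempfile.TemporaryDirectory() as workdir:
    victim = os.path.join(workdir, "report.docx")
    with open(victim, "w") as f:
        f.write("ciphertext placeholder")   # stands in for encrypted bytes
    locked = mark_encrypted(victim)
    locked_name = os.path.basename(locked)
    original_gone = not os.path.exists(victim)

assert locked_name == "report.docx.pandora"
assert original_gone   # the original path no longer exists
```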
Pandora ransomware writes two values, Private and Public, under the HKCU\Software registry key. The Public value holds the public key used by the ransomware to encrypt the user files, while the Private value holds the protected private key stored for decryption. The decryptor tool that the victim receives after paying the ransom uses this information stored in the registry to decrypt the locked files.
What is MFA?
Multi-Factor Authentication (MFA) is a layered approach to securing data and applications which require a user to present a combination of two or more credentials to log in.
How does MFA change my login process?
A user's credentials must come from at least two different categories, or factors. Your password counts as one set of credentials; other types include a text message or call to your phone, or an iOS/Android app notification.
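As an illustration of the "something you have" factor, here is a minimal sketch of how an authenticator app derives its one-time code (TOTP, RFC 6238): a shared secret plus the current 30-second time step is fed through HMAC-SHA1 and truncated to a short decimal code. This is a generic sketch, not tied to any specific MFA product.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = struct.pack(">Q", unix_time // step)   # current time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> 94287082
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Because both the server and your phone share the secret and the clock, the server can verify the code without it ever traveling over the network ahead of time.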
Why do I need MFA?
Enabling MFA on your accounts (at work and home) helps you protect your data from unauthorized access. If one set of credentials is compromised (such as your password), unauthorized users will be unable to meet the second authentication requirement and will not be able to gain access without additional input from the user.
Is this really that important?
In 2020, companies lost $1.86 billion on business email scams alone. In an era of increasing cybersecurity threats, MFA stands at the forefront of premium cybersecurity measures for protecting vital infrastructure. It can prevent even some of the most sophisticated cyberattacks with a simple push of a button.
Is this easy to use?
It's as simple as clicking approve or deny on your phone. Logging into Outlook from home to start your day at work? Approved! Someone logging into my account from a foreign country? Denied! It's that easy.
Can someone still "hack" my account if I am using MFA?
Yes, but they cannot access your data or mailbox unless the second layer of authentication is approved – so don't approve login requests you did not initiate. Any login from a new device will prompt an MFA authentication request. On known devices, you only need to do this once every 30 days.
Pretexting is a certain type of social engineering technique that manipulates victims into divulging information. A pretext is a made-up scenario developed by threat actors for the purpose of stealing a victim’s personal data.
During pretexting attacks, threat actors typically ask victims for certain information, stating that it is needed to confirm the victim’s identity. In reality, the threat actor steals this information and then uses it to carry out secondary attacks or identity theft.
Sophisticated pretexting attacks may attempt to trick victims into performing an action that exploits the physical and/or digital weaknesses of an organization. For example, a threat actor might pretend to be an external IT services auditor and use this alias to convince the physical security team of an organization to allow the threat actor to enter the building.
Many threat actors who adopt this attack type masquerade as employees or HR personnel in the finance department. These disguises let them target C-level executives or other employees with extensive privileges, who are more valuable for attackers.
While phishing attacks tend to use urgency and fear to exploit victims, pretexting attacks establish a false sense of trust with a targeted victim. This requires threat actors to establish a credible story that does not make victims suspicious of any foul play.
Pretexting Attack Techniques
Pretexters use a variety of tactics and techniques to gain the trust of their targets and convince them to hand over valuable information.
An impersonator imitates the behavior of another actor, usually a trusted person such as a colleague or friend. This involves maintaining a sense of credibility, often by spoofing the phone numbers or email addresses of impersonated institutions or individuals.
An example of impersonation is the SIM swap scam, which exploits vulnerabilities in two-step verification processes including SMS or phone verification to take over target accounts. The pretexter impersonates a victim and claims to have lost their phone and persuades the mobile operator to switch the phone number to the attacker’s SIM. One-time passwords are then forwarded to the attacker instead of the victim.
One successful social engineering attack involving impersonation was the 2015 attack on Ubiquiti Networks. Employees received messages from pretexters impersonating senior executives of the company and requesting payments to the attackers’ bank accounts. This cost the company $46.7 million.
Tailgating is a social engineering technique that enables threat actors to gain physical access to facilities. To tailgate means to closely follow authorized personnel into a facility without being noticed. After reaching the entrance, the threat actor may quickly stick their foot or any other object into the door before it is completely shut and locked.
Piggybacking is very similar to tailgating, except that the authorized individual is not only aware of the actor but also allows the actor to “piggyback” off the credentials. For example, authorized personnel arrives at the entrance of a facility. The individual approaches and asks for help, claiming to have forgotten their access badge. It could also be a woman holding heavy boxes. Either way, authorized personnel may decide to help these individuals to gain access to the building.
Baiting attacks may use hardware like malware-infected flash drives as bait, often adding something that gives it an authentic look, such as a company label.
The bait is placed in commonly visited locations, such as lobbies, bus stations, or bathrooms. The attacker will place the bait in a way that victims will notice it and have an incentive to insert it into a personal or work device. The bait hardware will then deploy malicious software on the device.
Baiting schemes can also be carried out online. For example, enticing ads can lead victims to malicious websites or encourage victims to download a malware-infected application.
Phishing involves impersonating a trusted entity in communications like emails or text messages, in order to obtain sensitive information like payment card details and passwords. Phishing is a separate category from pretexting, but these can be combined—phishing attempts often leverage a pretexting scenario.
Pretexting increases the chances of a phishing attempt being successful, for example, if target employees believe they are talking to a contractor or employer. Compromised employee accounts can also be used for further pretexting attacks that target individuals through spear phishing.
For example, MacEwan University in Canada fell victim to a phishing scam in 2017, which cost the university around $9 million. The targeted staff changed payment details, believing the scammer was a contractor.
Vishing and Smishing
Voice phishing (or vishing) is a social engineering technique. This type of attack uses phone calls to trick victims into disclosing sensitive information or giving attackers remote access to the victim’s computer device.
For example, a common vishing scheme involves the threat actor calling victims while pretending to be an official from the IRS. The attacker often threatens or attempts to scare the victim into giving compensation or personal information. IRS vishing schemes usually target older individuals. However, anyone can be tricked by a vishing scam when not adequately trained.
SMS phishing (or smishing) is a form of social engineering similar to vishing and phishing. It uses the same techniques but is perpetuated via SMS or text messaging.
A scareware attack bombards victims with fictitious threats and false alarms. The victim is deceived into thinking that their system is infected with malware. They are then prompted to install malware or software that somehow benefits the threat actor. Scareware is also known as deception software, fraudware, and rogue scanner software.
For example, a common scareware attack involves displaying legitimate-looking popup banners in the browser of a victim surfing the web. The banner may display a text message such as, “Your computer may be infected with harmful spyware programs.” The scareware then offers to install a certain tool (usually malware-infected) for the victim, or directs the victim to a malicious website where the computer becomes infected.
Scareware can also be distributed through spam emails that include bogus warnings or encourage victims to purchase worthless or harmful services.
Pretexting and the Law
Pretexting is, in general, illegal in the United States. For financial institutions governed by the Gramm-Leach-Bliley Act of 1999 (GLBA) (almost all financial institutions), it is illegal for any individual to attempt to obtain, actually obtain, or cause an employee to disclose customer information by deception or false pretenses. GLBA-regulated institutions must also enforce standards to educate their staff to identify pretexting attempts.
In 2006, Congress passed the Telephone Records and Privacy Protection Act of 2006, which extends protection to records kept by telecom companies. However, in other industries, it is not completely clear if pretexting is illegal. In future court cases, prosecutors will need to decide which laws to use to file charges under, many of which were not created with this scenario in mind.
How to Prevent Pretexting
Here are several methods businesses are using to protect themselves against pretexting.
Pretexting includes impersonation, and to be successful the email must appear genuine. Thus, email spoofing is necessary. Domain-based Message Authentication, Reporting, and Conformance (DMARC) is the most prevalent form of protection for email spoofing, yet it is limited, as it requires continual and complex maintenance.
What’s more, DMARC stops exact-domain spoofing but does not stop display-name spoofing or cousin-domain spoofing, which are far more prevalent in spear-phishing attacks. Attackers have adopted these more sophisticated techniques largely because of the effectiveness of DMARC.
AI-Based Email Analysis
To stop pretexting, businesses must strive for a more modern method of detection than DMARC. Next-generation anti-spear phishing technology uses artificial intelligence (AI) to study user behaviors and detect indications of pretexting. Furthermore, it can find anomalies in email addresses and in email traffic, such as display name spoofing and cousin domains. Natural Language Processing (NLP), a part of AI, examines language and can decipher phrases and words common in spear-phishing and pretexting.
Lastly, educate your users so they can identify pretexting by sharing real-life pretexting instances with them. Often, what makes spear-phishing and pretexting successful is that users are not familiar with the pretexting tactics mentioned above, and notice nothing abnormal about the requests they receive.
Educate users about the various sorts of email spoofing and train them to study email addresses for signs of display name spoofing and cousin domains. You must also have established rules about financial transactions, including validating requests in person or over the phone.
Pretexting Protection with Imperva
Imperva provides its industry-leading Web Application Firewall, which prevents pretexting attacks with world-class analysis of web traffic to your applications.
Beyond social engineering protection, Imperva provides comprehensive protection for applications, APIs, and microservices:
Runtime Application Self-Protection (RASP) – Real-time attack detection and prevention from your application runtime environment goes wherever your applications go. Stop external attacks and injections and reduce your vulnerability backlog.
API Security – Automated API protection ensures your API endpoints are protected as they are published, shielding your applications from exploitation.
Advanced Bot Protection – Prevent business logic attacks from all access points – websites, mobile apps, and APIs. Gain seamless visibility and control over bot traffic to stop online fraud through account takeover or competitive price scraping.
DDoS Protection – Block attack traffic at the edge to ensure business continuity with guaranteed uptime and no performance impact. Secure your on-premises or cloud-based assets – whether you’re hosted in AWS, Microsoft Azure, or Google Public Cloud.
Attack Analytics – Ensures complete visibility with machine learning and domain expertise across the application security stack to reveal patterns in the noise and detect application attacks, enabling you to isolate and prevent attack campaigns.
In a speech in New York in October 2017, Michael Dell – founder and CEO of the company that bears his name – talked about the negligible cost of sensors and network nodes, saying:
We’ll soon have 100 billion connected devices, and then a trillion, and we will be awash in rich data. But, more importantly, we’ll have the ability to harness that data.
Maybe so, but there are challenges in making the system work for everyone, according to the World Economic Forum (WEF).
Its annual meeting in Davos, Switzerland, brought together politicians, business leaders, IT strategists, academics, economists, and more to discuss the big questions that demand answers this century. One focus was the changing role that technologies such as the Internet of Things (IoT), Artificial Intelligence, connected transport, and data analytics, play in society.
The IoT is already far more than just homes full of smart consumer goods, lighting systems, and security devices. It also embraces smart buildings, vehicles, transport networks, factories, energy grids, power stations, and entire cities.
Meanwhile, the costs of computing, storage, and connectivity continue to fall, suggested the WEF – although smartphone owners may question whether that is really true as they look through their annual accounts.
Forecasts by IDC predict that IoT spending will hit $1.2 trillion within the next four years. And whereas current IoT spends are dominated by the manufacturing sector ($189 billion in 2018), spending in transportation, utilities, and cross-industry applications continues to rise.
Good news? In theory, sensors, the IoT, data analytics, AI, and machine learning can help us to use energy more efficiently, reduce carbon emissions, minimise waste, design better cities, predict diseases, track epidemics, and more. This is the hype and promise of ‘Industry 4.0’.
In this way, connected technologies can help meet the United Nations’ sustainable development goals (SDGs). WEF analysis published to coincide with the 2019 conference found that an estimated 84% of IoT deployments are currently addressing, or have the potential to advance, SDGs.
But when they are applied to people, these technologies also have social and ethical dimensions to do with data gathering, bias (institutional and historic), and surveillance. Predicting patterns in weather systems, early-stage cancers, or influenza outbreaks is one thing; but predicting someone’s potential to commit a crime is another, especially if organisations then use that data to deny them services, credit, or insurance.
But there are real practical, strategic, and operational challenges, too. Despite a growing number of companies, organisations and governments experimenting with IoT projects, success stories are not easy to come by.
According to 2017 analysis by Cisco, three-quarters of IoT projects fail due to limited understanding of how to design and integrate solutions effectively into daily operations. The WEF noted that challenges related to security, interoperability, and the sustainability of IoT solutions are widespread.
To overcome these and help realise the full potential of the IoT, last year the WEF’s Centre for the Fourth Industrial Revolution assembled a team of technology leaders from across the globe to reinforce best practice, streamline procurement, and enable more consistent and positive outcomes.
During the first half of 2018, the WEF consolidated more than 200 IoT case studies into ‘solution sets’. These clusters of technologies were each analysed by engineers, scientists, researchers, and executives representing more than two dozen public and private organisations worldwide.
The teams evaluated each cluster against four variables: economic impact, societal benefit, technological difficulty, and financial barriers. Ultimately, this led to an assessment of which IoT solutions the WEF believes are the most scalable and impactful.
So what are they?
The WEF’s initial findings are:
Six positive use cases emerged from the technologies overall. These were solutions that: better manage crops and livestock; advance the safety, well-being and efficacy of workers; optimise the movement of goods and people; help with early warning and prevention of disasters; assist doctors in monitoring and treating patients; and help governments and/or utilities manage our finite natural resources.
The WEF likes all of these, but is less than impressed with other IoT applications. Its researchers also made the following observations.
People in cities
We tend to believe that ageing populations are primarily a Western problem. However, the WEF found that the best, impactful and scalable IoT solutions address needs that are most pressing in Asia, where elderly populations combine with rapid urbanisation to create significant new challenges. In the West, our ageing cities create a different spin on the problem.
According to UN data, populations in East Asia are ageing faster than in any other part of the world: from 1990 to 2017, the share of people aged 40+ in East Asia grew from 28% to 48%. In parallel, Asia is witnessing an unprecedented shift of people out of rural poverty in search of new opportunities – as happened in the West during the first Industrial Revolution.
In China, for example, the urban population has increased by 500 million since 1989. That’s equivalent to two-thirds of the entire population of Europe moving into cities in just 30 years. I witnessed this myself on a visit to Shanghai last year – population 24 million – where new gated communities for the middle classes stretch as far as the eye can see.
Such trends place enormous pressure on healthcare systems and urban infrastructures, as well as generate vast environmental and sustainability challenges.
These are all areas where the IoT has potential, which is why Asia leads the world in IoT spending.
Human enhancement – or replacement?
Contrary to growing concerns about automation’s ability to displace human labour, the IoT solutions approved by WEF experts focus on enhancing worker productivity.
This chimes with public statements by Microsoft CEO Satya Nadella and IBM chair and CEO Ginni Rometty, among others, that AI, automation, and other Industry 4.0 technologies are about augmenting human skills.
However, several reports published last year found that many organisations are deploying them tactically to cut costs and slash workforces, rather than strategically to make their businesses smarter.
Workers vs machines
While manufacturing is currently the largest sector for IoT spending, WEF experts didn’t feel that all industrial applications are equally effective. IoT technologies that improve worker well-being stand head and shoulders above those that focus on enhancing system operations, they said.
This challenges accepted wisdom that the Industrial IoT (IIoT) is largely about predictive maintenance, ‘cobots’, and improved supply chains, for example. According to the WEF, it should really be about keeping workers happy.
Worker-centric solutions include using IoT devices to optimise workplace conditions, such as temperature, lighting, and air quality – which most employees would see as beneficial. However, the WEF also identifies as positive the use of sensors and wearable devices to monitor workers’ health, improve their performance, and reduce the risk of accidents.
In a general sense, the combination of sensors, health tech wearables, and AI does indeed hold out the promise of better healthcare, especially among ageing populations.
In the healthcare space itself, for example, the WEF sees solutions that enhance preventative care and offer early disease detection as the most impactful and scalable. These include using sensors to continuously monitor heart rates, blood pressure, blood sugar levels, and other conditions.
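A minimal sketch of the continuous-monitoring logic such solutions rely on follows; the safe ranges here are invented for illustration and are not clinical guidance.

```python
# Flag readings that fall outside illustrative safe ranges.
# Thresholds are invented for demonstration, not clinical values.
SAFE_RANGES = {
    "heart_rate": (50, 110),    # beats per minute
    "systolic_bp": (90, 140),   # mmHg
    "blood_sugar": (4.0, 7.8),  # mmol/L
}

def check_vitals(reading: dict) -> list:
    """Return the names of any vitals outside their safe range."""
    alerts = []
    for vital, (low, high) in SAFE_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

print(check_vitals({"heart_rate": 128, "systolic_bp": 120, "blood_sugar": 5.4}))
# ['heart_rate']
```

In a real deployment this check would run continuously on streamed sensor data, with alerts routed to clinicians.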
But in an industrial setting, the context is very different. This is the flipside of the coin – one that the WEF appears to have overlooked. While IoT-enabled worker safety systems (such as the use of exoskeletons as lifting aids) could be beneficial, the use of sensors to monitor employees and keep them at their stations is a very different matter.
In some industries – call centres and factories among them – such technologies tend to be seen as maximising employers’ profits, not improving workers’ health. Indeed, it’s an application that has bred mistrust, not acceptance, of the IoT, and the perception that some systems are really enabling new forms of slavery.
Of course, not all workers sit at desks or labour in factories. In agriculture, where food security remains a global challenge, WEF experts stressed the opportunity for IoT technologies to help workers do their jobs more efficiently.
Sensors and other precision-agriculture technologies that enable farmers to better monitor and manage crops, optimise the use of water and fertilisers, or manage livestock, were cited among the most impactful and scalable.
In the US and elsewhere, vertical farms are moving food production closer to the mouths that need feeding – in cities – growing crops in ideal conditions, via sensor data, hydroponics, and smart lighting systems.
Smart city challenges
However, the WEF’s experts were divided on the potential impact and scalability of many smart-city solutions themselves. This is interesting, as they are often cited as the most beneficial IoT projects in societal terms.
Some of the WEF team expressed concern about the challenges these solutions create – particularly in terms of personal privacy protection, security, and economic sustainability.
However, smart city solutions that focus on system-wide efficiencies – for example, monitoring real-time electricity consumption to balance supply and demand, or enabling public utilities to detect water leaks quickly – hold out the greatest promise in the short term, said the WEF.
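Quick leak detection of the kind described often reduces to spotting anomalous flow. The sketch below compares each reading to a trailing average; the readings and threshold are invented.

```python
# Flag a water-flow reading as a possible leak when it exceeds the
# trailing mean of recent readings by a chosen factor. Values invented.
def detect_leak(readings, window=4, threshold=1.5):
    """Return indices where flow jumps above threshold x the trailing mean."""
    suspicious = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > threshold * baseline:
            suspicious.append(i)
    return suspicious

flow = [10.2, 9.8, 10.1, 10.0, 10.3, 10.1, 26.5, 10.2]  # litres/min
print(detect_leak(flow))  # [6]
```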
Just as ride-sharing services like Uber and Lyft have disrupted the taxi market and challenged the concept of urban car ownership, similar models are transforming commercial transport, found the WEF.
Fleet management systems (which rely on sensors and IoT technologies to monitor/optimise the use of corporate vehicles and equipment), and route optimisation tools (which use real-time data to improve the movement of goods across supply chains) were cited as two of the most impactful and scalable solutions.
The WEF’s initial findings form the beginnings of a new roadmap and strategic framework for accelerating the impact of IoT systems, the organisation claimed:
By focusing attention and coalescing support around tried-and-tested IoT solutions with clear societal and financial returns, the WEF’s Centre for the Fourth Industrial Revolution and its partners are committed to maximising the value and impact of IoT investments.
In the months ahead, the WEF will:
Bring traditional industry competitors together to co-design, and align behind, a set of trusted implementation models – which incorporate well-defined business models, established technical frameworks and clear metrics on the costs and benefits of implementation – for these highly impactful and scalable solutions.
The organisation said these will provide:
An essential tool to ease and speed up the process for procuring and deploying new IoT systems, reducing missteps and embedding best practices.
While claims and counter-claims are made for the IoT – and security remains a blind spot, as manufacturers rush devices to market – it’s refreshing to hear findings that are so people-focused (in most cases).
With a ‘Fifth Industrial Revolution’ supposedly now being built on consent and privacy (in the words of some CEOs at Davos), the IoT will certainly need public trust if it is to serve citizens’ interests.
Text Analytics—sometimes called Text Mining—refers to the identification of abstract concepts within, or computations about, text data. The concepts can include the identification of explicit meaning, as discussed previously in our profile of Natural Language Understanding (e.g. using semantic analysis and statistical methods to infer that the word “cold” refers to a medical condition in a sentence about a person having flu symptoms, but to an unpleasant temperature in a sentence about a beach vacation).
Text Analytics can also be used at a higher level of abstraction to infer the emotional state and demographic information about the author based on word usage. In this case, the analysis focuses on why something is said in a particular way, rather than attempting to capture what was meant literally.
At the lowest level of abstraction, text analytics can report on word usage, identification of facts, and relationships or patterns.
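At that lowest level, the mechanics can be as simple as counting terms. The sketch below is a minimal illustration of word-usage reporting and does not represent any listed vendor's product.

```python
# Minimal word-usage analysis: normalise tokens and count frequencies.
from collections import Counter

def term_frequencies(text: str) -> Counter:
    """Lowercase, strip surrounding punctuation, and count word usage."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    return Counter(w for w in words if w)

doc = "I caught a cold on vacation. The cold water at the beach was the cause."
freq = term_frequencies(doc)
print(freq["cold"], freq["the"])  # 2 3
```

Real text-analytics products layer sense disambiguation, entity extraction, and sentiment models on top of counts like these.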
Representative Vendors: ABBYY, Altilia Group, Amazon Comprehend, Amenity Analytics, Cortical.io, IBM, Kaypok, Lexalytics, Luminoso Technologies, Medallia, Microsoft Azure, NICE, Rosette Text Analytics, and Wit.ai
Enabling Reliable Data Center Growth in an Era of Water Scarcity
As people and industries continue to shift to cloud and IoT-based solutions, the cloud computing market is experiencing exponential growth. This expansion, along with the growing global population, will lead to greater demand for goods, services, and natural resources–notably water.
Data centers and their electricity supply chains require fresh water to deliver the massive amounts of data that the world relies on. A typical cooling system uses up to 8 million gallons of water a year per megawatt of electricity, according to the Uptime Institute, an advisory organization focused on improving the performance, efficiency and reliability of business-critical infrastructure. Currently, much of that supply comes from fresh water.
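That rule of thumb lends itself to a quick back-of-the-envelope estimate. The sketch below treats the Uptime Institute figure as an upper bound; the 15 MW facility is hypothetical.

```python
# Estimate annual cooling-water use from the Uptime Institute's
# rule of thumb: up to 8 million gallons per megawatt per year.
GALLONS_PER_MW_YEAR = 8_000_000

def annual_water_gallons(capacity_mw: float) -> float:
    """Upper-bound annual cooling-water demand for a given capacity."""
    return capacity_mw * GALLONS_PER_MW_YEAR

# A hypothetical 15 MW data center:
print(annual_water_gallons(15))  # 120000000, i.e. 120 million gallons
```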
A recent study conducted by Stanford, Northwestern, Carnegie Mellon, and U.S. Department of Energy researchers estimated that U.S. data centers will require approximately 174 billion gallons of water to maintain operations in 2020. If we continue with business as usual, the global demand for fresh water to support all people and all industries is expected to exceed available supplies by 40 percent by 2030. This imbalance between supply and demand is jeopardizing the reliability of water to support human health, agricultural productivity, and economic development, and to maintain sustainable ecosystems.
The scarcity of water, both in terms of quantity and quality, along with greater demand to cool servers, will translate into increased operational risk for businesses around the world. That's particularly true in places such as California, the most populous state, with over 800 data centers, more than any other state. Companies, including those that manage big data, are aware of these risks and many have actively increased their water conservation efforts over the past decade.
However, since 2011, overall corporate water use has declined by only 10 percent. That’s not nearly enough to close the gap. Valuing water as an asset rather than taking it for granted is a paradigm shift that must occur, along with an understanding that water stress is a function of both quantity and quality. This requires a more comprehensive water stewardship approach that assesses risks in three areas: physical (water availability and quality), regulatory, and reputational. Given the growing demand for the public cloud (CAGR 22 percent), it is essential for data centers to adopt an integrated water management strategy to maintain reliable performance and operational resiliency.
Here are three pathways for data centers to follow to ensure that they have the water they need to operate now and in the future.
Consider water risk as a critical factor when making a decision about where to build or expand data centers. Smart water management plans incorporate conservation, reuse and recycle strategies but the first step is putting any data center that requires water where water is more likely to be available. In many regions of the world, the price of water is undervalued and underpriced even when it is scarce and, as a consequence, businesses aren’t fully aware of its scarcity and the related business risks in a given location. Publicly available tools, such as the Water Risk Monetizer, can help determine the risk-adjusted cost of water to your business in various locations. It’s a factor that should be considered in evaluating sites for new operations, or where to expand existing operations.
Develop a redundancy plan to ensure that you have a sufficient supply of water stored in case of an unexpected disruption (e.g., a water main break). Are you going to have enough water onsite? How many hours of water will your facility need? The Uptime Institute recommends that data centers in tiers 1-4 must have at least 12 hours of onsite water storage to maintain critical cooling systems in a worst-case scenario.
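Combining that 12-hour guidance with the Uptime Institute usage figure quoted earlier gives a rough sizing exercise. The sketch assumes the cooling loop draws water at its annual average rate, which is a simplification, and the 10 MW capacity is hypothetical.

```python
# Rough onsite water reserve needed for N hours of cooling, assuming
# the cooling loop draws at its annual average rate (a simplification).
GALLONS_PER_MW_YEAR = 8_000_000
HOURS_PER_YEAR = 365 * 24  # 8760

def reserve_gallons(capacity_mw: float, hours: float = 12) -> int:
    """Gallons of stored water needed to ride out an outage of `hours`."""
    hourly_draw = capacity_mw * GALLONS_PER_MW_YEAR / HOURS_PER_YEAR
    return round(hourly_draw * hours)

# A hypothetical 10 MW facility holding the recommended 12-hour reserve:
print(reserve_gallons(10))  # 109589
```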
Create a resiliency plan to prepare for water risks that could have long-term consequences for your operations. Take a broader view of the factors affecting your data center and consider both current and future water conditions in your area of operation. Investigate alternatives to fresh water such as gray water (purple pipes), rain harvesting, humidification recovery, and water transportation by third parties.
A Microsoft data center near San Antonio, Texas, used the Water Risk Monetizer and discovered that the true value of water, based on supply and demand, was more than 11 times greater than the center’s current water bill. Microsoft partnered with Ecolab to develop a smart water strategy for using recycled water (gray water) at the site. This led to more than $140,000 in water savings and reductions of nearly 60 million gallons a year. As this example illustrates, the Water Risk Monetizer not only encourages conservation, it helps make water reuse and recycling an important option for ensuring a more resilient future for businesses and communities alike.
Ask yourself what the likelihood is that your revenue will be at risk due to water scarcity from availability or quality in the coming years. Make sure you have a plan in place if quantity or quality is compromised for an extended period of time. Use the Water Risk Monetizer to calculate your revenue-at-risk score (the likelihood of a loss in revenue as a result of water scarcity) for three, five and 10 years. This information can help guide your decision-making and enable you to better manage and mitigate water-related risks.
Water scarcity is a genuine threat to data center growth, reliability and reputation. The time has come to rethink operations and implement aggressive water strategies to optimize operations, reduce costs and enable reliable growth. For technology companies facing intensifying public demand for information, downtime is simply not an option.
Data is at the centre of business.
Collecting and analysing data is becoming easier, especially with the influx of IoT devices.
But storing and handling that data is becoming an issue. Speed and security are concerns for most big data projects, and storing data in the cloud can slow delivery.
This is where edge computing comes in.
Edge computing, or fog computing, is a new way of processing tasks to make the information easier and faster to process.
What Is Edge Computing?
Edge computing processes data nearer to the user. It is often positioned on the edge of your network, but can also be a network positioned closer to your users than the cloud can be.
This can be faster than sending the information to a data centre or the cloud to be processed then back to the users again.
However, edge computing isn’t really the opposite of cloud. Edge should be included in your larger cloud computing architecture.
Edge computing is paving a new path through digital transformations, meaning even small companies are creating their own mini data centres to handle data – especially with the rise of GDPR and the threat of more fines.
Why Do We Need Edge Computing?
Using the cloud is very useful. Gmail, Dropbox, Office 365 and more have been making life easier and work more seamless.
Storing data and delivering services in the cloud is still highly recommended. But there will always be tasks where having an edge setup will be advantageous.
There are 3 main applications for edge:
1. When every second matters
2. When tasks are bandwidth guzzling
3. When business-critical tasks or sensitive data need to be separated from the cloud.
How Does Edge Computing Help?
Edge computing helps in a variety of ways. It’s making IoT integration a lot easier and paving the way for new technology. It takes the volumes of data that IoT connected devices spit out and helps process it without sending it externally.
Edge Reduces Latency
Latency is the measure of time between you asking your network for information and the reply to come back.
In some cases, latency could literally be life or death.
For example, driverless cars.
Your car senses an object in the road. It sends a message to the cloud server and waits for a response. The server tells the car it’s a hazard and instructs the car to steer around it.
This process could take a split second.
But at 70mph, that split second could be too long. A data processing server in the car will be much quicker, and not so prone to a strained network when there are too many cars on the road.
Latency can also mean closing or not closing a deal on the stock market. It could be a hospital device not reacting fast enough to a patient’s body. It could be taking too long to analyse security footage to catch a criminal.
But it could be as simple as needing to process a huge amount of data, such as thousands of files. Shaving off a nanosecond off every file could save a lot of time in the long-run.
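A little arithmetic makes the driverless-car example concrete. The 200 ms cloud and 5 ms onboard round-trip times below are illustrative assumptions, not measurements.

```python
# Distance a vehicle covers while waiting on a decision, at 70 mph.
# The 200 ms cloud and 5 ms edge round-trip times are illustrative.
MPH_TO_METRES_PER_SEC = 0.44704

def metres_travelled(speed_mph: float, latency_ms: float) -> float:
    """Metres covered during one decision round trip."""
    return round(speed_mph * MPH_TO_METRES_PER_SEC * latency_ms / 1000, 2)

print(metres_travelled(70, 200))  # 6.26 (cloud round trip)
print(metres_travelled(70, 5))    # 0.16 (onboard edge processing)
```

Six metres at motorway speed is easily the difference between braking in time and not.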
Edge Improves Security

Unfortunately, data centres aren’t impenetrable. If someone wants that data, they will get it one way or another.
But, storing that data at the edge can make it harder to get to. The edge servers do not even have to connect to the internet or a network all the time, meaning data can be isolated.
The edge network can process information and only send the necessary information to the cloud. If your cloud network is attacked, sensitive data is safe.
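That filter-at-the-edge pattern can be sketched in a few lines: raw readings are summarised locally, and only an aggregate plus any anomalies are forwarded to the cloud. The threshold and readings below are invented.

```python
# Summarise raw sensor readings at the edge; forward only the aggregate
# and any anomalous values to the cloud. Threshold is illustrative.
def edge_summary(readings, anomaly_threshold=90.0):
    """Reduce a batch of readings to what the cloud actually needs."""
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "mean": round(sum(readings) / len(readings), 2),
        "anomalies": anomalies,  # full detail is sent only for these
    }

batch = [71.2, 70.8, 69.9, 95.3, 70.4]  # e.g. temperature readings
print(edge_summary(batch))
# {'count': 5, 'mean': 75.52, 'anomalies': [95.3]}
```

Sensitive raw data never leaves the edge, and the cloud receives a fraction of the bandwidth.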
It also allows you to have far more control of the security of that data too. You have full control of how secure the servers are, but with an external data centre, you may struggle to find out which security measures they use.
If your cloud gets hit by a DDoS attack, then your network goes down. But your edge server can keep vital services up and running in the meantime.
If your cloud server goes down, it could take critical operations down with it. Edge could save you: by running your business-critical services through a separate backup server, you can keep them running even when the cloud is unavailable.
Examples Of Edge Usage
So, now we’ve established what edge is, let’s look at what it can be used for.
As we’ve already established, latency can be life or death to a driverless car. Having all the processes happening within the vehicle itself can allow for instant decision-making which can have life-saving consequences.
Oil and Gas Refineries
Remote refinery rigs need to be monitored at all times – from safety levels to machine performance. These machines can be monitored in real-time by users on or off the rig, and by machines. These measures allow for emergencies to be handled quickly and often before they become an issue.
Video and Media Production
Videos are bandwidth heavy. Sending up huge raw files to the cloud can be usage intensive and slow down even the most robust networks.
Edge can allow for processing to happen closer to the creation, meaning videos could be scaled down into previews to send over to central storage.
Machine-learning could be put in place to analyse the data and results could be sent through to be stored, which is useful in the case of CCTV or other surveillance footage.
Content Delivery Networks
On the internet, speed is a necessity. Every second of load time sees website conversion rates drop by 12%.
So, getting a webpage delivered quickly is a necessity.
Content delivery networks help deliver content to users quickly. They have edge networks all over the world which serves up content based on geographical location. By moving the content closer to the user, the time it takes to access the content drops.
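At the heart of a CDN is a routing decision that can be sketched as picking the geographically nearest edge node. The node list and coordinates below are illustrative; real CDNs also weigh node load, health and network topology.

```python
# Pick the nearest edge node to a user by great-circle distance.
# Node locations are illustrative; real CDNs also weigh load and health.
from math import radians, sin, cos, asin, sqrt

EDGE_NODES = {
    "london": (51.51, -0.13),
    "frankfurt": (50.11, 8.68),
    "new_york": (40.71, -74.01),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_node(user_location):
    return min(EDGE_NODES, key=lambda n: haversine_km(user_location, EDGE_NODES[n]))

# A user in Paris is served from the London node:
print(nearest_node((48.85, 2.35)))  # london
```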
If you are looking to deploy edge solutions in your company, Comms Express can help. Contact our friendly team of experts so they can help you find your perfect solution.
Life today is not just about living as part of this territorial world; we are also part of a virtual world. Digital technology and new communication systems have made dramatic changes in our lives. Living in a country with a billion people is difficult, and it is even more difficult when everyone is part of the same virtual world. In a virtual world there is anonymity, so it is hard to determine who is right and who is wrong. We have laws which regulate the actions of people: laws provide harmony in society, allowing people to live in peace and letting justice apply equally to everyone. Similarly, we need cyber laws, which are quite different from ordinary laws, to regulate and control acts in the virtual world.
Crimes that happen in the real world can also take place in the virtual world. Have you ever thought what kind of world it would be without law? People would not feel responsible for their deeds, and others would certainly suffer as a result. As the number of internet users rises, so do the crimes, and the need for cyber laws and their application has gathered great momentum.
Here are some cyber laws applicable in India.
In India, cyber laws are contained in the Information Technology Act, 2000 (“IT Act”), which came into force on October 17, 2000. The main purpose of the Act is to provide legal recognition to electronic commerce and to facilitate the filing of electronic records with the Government. As the world becomes digitally organised, India must have cyber laws to combat cybercrime. The following Act, Rules and Regulations are covered under cyber laws:
Information Technology Act, 2000
This Act was amended by the Information Technology Amendment Bill, 2008, which was passed by the Lok Sabha on 22nd December, 2008. The Preamble to the Act states that it aims at providing legal recognition for transactions carried out by means of electronic data interchange and other means of electronic communication, commonly referred to as “electronic commerce”, which involve the use of alternatives to paper-based methods of communication and storage of information, and at facilitating the electronic filing of documents with Government agencies.
Some features of this act are:
- Section 10A provides that contracts concluded electronically shall not be deemed unenforceable solely on the ground that an electronic form or means was used.
- In section 43, the cap of Rs. one crore prescribed by the earlier Act of 2000 for damage to a computer, computer system etc. has been deleted, and the relevant parts of the section have been substituted by the words ‘he shall be liable to pay damages by way of compensation to the person so affected’.
- Section 43A protects sensitive personal data or information possessed, dealt with or handled by a body corporate in a computer resource which such body corporate owns, controls or operates. If such a body corporate is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person, it shall be liable to pay damages by way of compensation to the person so affected.
- Section 66 prescribes punishment for offences such as obscene electronic message transmission, identity theft, cheating by impersonation using a computer resource, violation of privacy and cyber terrorism. However, section 66A was held unconstitutional in 2015.
- Sections 67A and 67B deal with penal provisions in respect of the offences of publishing or transmitting material containing sexually explicit acts and child pornography in electronic form.
- Section 67C deals with the obligation of an intermediary to preserve and retain such information as may be specified for such duration and in such manner and format as the central government may prescribe.
Information Technology (Certifying Authorities) Rules, 2000
The Indian Penal Code of 1860 and the Indian Evidence Act of 1872 were amended by the IT Act of 2000, keeping in view the urgent need for suitable amendments to keep up with advancing technological change. Some of these changes are:
- Section 4 was amended to cover any person in any place without and beyond India committing an offence targeting a computer resource located in India; the word “offence” now includes every act committed outside India which, if committed in India, would be punishable under this Code.
- In section 118, for the words “voluntarily conceals, by any act or illegal omission, the existence of a design”, the words “voluntarily conceals by any act or omission or by the use of encryption or any other information hiding tool, the existence of a design” shall be substituted after amendment.
- In section 464, for the words “digital signature” wherever they occur, the words “electronic signature” shall be substituted.
- The sections dealing with false entries in a record, false documents etc. (e.g. 192, 204, 463, 464, 468 to 470, 471, 474, 476 etc.) have since been amended to cover electronic records and electronic documents, thereby bringing within the ambit of the IPC all crimes against an electronic record or electronic document, just like physical acts of forgery or falsification of physical records. In practice, however, the investigating agencies file cases quoting the relevant sections from the IPC in addition to the corresponding sections of the ITA, such as IPC sections 463, 464, 468 and 469 read with ITA/ITAA sections 43 and 66, to ensure that the evidence or punishment stated in at least one of the legislations can be brought about easily.
Information Technology (Security Procedure) Rules, 2004
The Union Cabinet, in September 2012, approved the National Policy on Information Technology 2012. The Policy aims to leverage Information & Communication Technology (ICT) to address the country's economic and developmental challenges. The main thrust was to gain significant global market share in emerging technologies and services, along with innovation and R&D in cutting-edge technologies and the development of applications and solutions in areas like localisation, location-based services, mobile value-added services, cloud computing, social media and utility models. From making every household e-literate to providing affordable access to all public services in electronic mode, the Union Cabinet worked to enhance transparency, accountability, efficiency, reliability and decentralisation in Government and, in particular, in the delivery of public services.
Information Technology (Certifying Authority) Regulations, 2001
The Information Technology Act was enacted with a view to giving a fillip to the growth of electronic-based transactions, providing legal recognition for e-commerce and e-transactions, facilitating e-governance, preventing computer-based crimes and ensuring security practices and procedures in the context of the widest possible use of information technology worldwide. The Act applies to the whole of India unless otherwise mentioned. It applies also to any offence or contravention thereunder committed outside India by any person.
Today's world is not the same as yesterday's; technology has outgrown itself, and as technology grows, crimes grow with it. It is a difficult task to identify, detect, punish and adjudicate a crime which did not happen in the real world but has the capability to harm anyone with the same effect. India has grown towards becoming a developed nation; it has recognised cybercrimes and is learning to distinguish one crime from another. All the legislation proposed is for the betterment of the country. These laws will work well when people have knowledge of them: with that knowledge they will not only refrain from doing unlawful things but will also file complaints to seek justice against any unlawful act done to them. Thus legal education and technological awareness are important for dealing with cyber laws in India.
Scareware is a widespread Internet fraud scheme that intimidates victims into buying unnecessary or harmful software taking advantage of their ignorance. Scareware usually exploits fears of having a computer virus on a machine and persuades users to purchase fake security software. Here we’ll regard how this spoof works and how not to get fooled by it. Among other things, we’ll touch on threats associated with scareware.
What is Scareware?
Scareware is a scam that plays on the fears of inexperienced users. Although computer viruses are an obsolete type of malware, and you will hardly catch one nowadays even if you try, they remain a horror story for people. And the less you know about a threat, the more easily it can scare you.
Both trustworthy and scam security products are promoted via advertising. An advertisement for a good solution will respect the customer and stress the qualities and features of the promoted program. At worst, it will explain that there are many threats out there on the Web and that every endpoint needs protection. Scareware, on the contrary, will try to convince you that your computer is already infected with malware. Moreover, pushy ads will insist on immediate installation of the program they represent, as if it were your last chance to cure your PC.
The profitability of the scheme is understandable. People get scared, buy the program and feel like the defenders of their computer system. Perhaps the realisation will come later that they have simply thrown away their money, but by then they will no longer be able to get it back. There are usually many victims of such deception, and that is the very thing on which the scam relies.
Sadly, losing money is not the worst thing that can happen. Sometimes such malvertising is used as a filter: whoever fell for it definitely does not have an actual antivirus. Accordingly, the agents who do business distributing adware and malware can safely install a bunch of harmful programs on the victim's device.
How Scareware Works
It all starts with a person suddenly seeing an advertising banner on some website. The banner itself looks like an automatic notification. Novice users may not even understand that they are dealing with an advertisement.
The message usually says that a scan of the user's computer was carried out and found an infection with dangerous malware. At this point a knowledgeable person would already be laughing: not only is it impossible to scan a device that quickly, it would also be problematic to do so remotely without preliminary preparation.
But charlatans deal with inexperienced people and therefore continue their psychological attack. The banners usually include very serious-looking malware names, tables, codes, and the like. The more serious the picture looks, the stronger the effect. Everything about the message is designed to look automatic. You may see, for example, the caption "threat level: high," as if the same widget could ever give out a reassuring "low."
Such schemes are generally built on a series of psychological techniques. Intimidation is only the first of them. The use of colors plays with the victim’s emotions. Red stands for anything related to threats. As soon as the “rescue” program enters the scene, a soothing blue or green color appears. This feeling of possible safety encourages the user to make a purchase. In addition, the price is low. Most scareware schemes rely on the possibility of quick payments combined with a vast number of buyers.
There may be more time-consuming schemes for the crooks. For example, they might launch a massive campaign offering free device scans. To take one, the user must first download the software, the functionality of which will be limited until the program is purchased. So that this payment is still made, the scan will produce frightening results. This approach counts on more educated users.
By the way, the scope of scareware is not limited to the security sector. You can imagine other types of scareware, such as cleaners, that will scare users by saying: “look, a little more, and your system will get so clogged with the garbage that the device will start freezing.” The advertised program will be able to delete unused applications, temporary files, etc.
The programs in question can remain completely fake without an iota of the promised functionality. All “treatment” of the device, just like the initial intimidation, can be just a visual effect.
What are The Threats?
Theoretically, a scareware victim could get lucky, and the only problem would be the wasted money. But more often than not, a deceptive program will leave an unpleasant payload behind. Its severity may vary; in fact, it corresponds to the degree of danger of the unwanted or overtly malicious software that scareware can fetch onto the victim's computer. In most cases, installing a scareware application will at least decrease the PC's running speed. We will assume that scareware developers want more profit from their victims than just the price of the application.
This goal implies infecting the device with either of the malware types:
- Adware is a class of relatively harmless unwanted applications. They flood users with ad banners, modify browsers’ settings, add ad links on webpages, etc.
- Spyware is a more significant threat. This hidden software collects information about the system and the user's activity and sends it to people who can benefit commercially from having it.
- Miners are programs that steal the computing resources of the victim's machine and use them to mine cryptocurrency (for somebody else, of course). The injured party will also be unpleasantly surprised by the electricity bill.
- Botnet clients: cybercriminals can add the infected device to a botnet, a remotely controlled network, to perform certain activities on the web unbeknownst to the user.
- Ransomware is probably the worst case. This malware encrypts all the data files on the victim's computer, and the only chance to get them back is to buy a key from the racketeers.
Criminals can drop many other types of malware into the unaware victim’s system. However, those are more suitable for targeted attacks and require hackers’ special attention. The malware mentioned above can work and bring profit automatically.
How not to be fooled by scareware?
- Install an actual security system. GridinSoft Anti-Malware is one of the best solutions on the market due to the combination of technical efficiency and cost-effectiveness. Its virus libraries are regularly updated so that whichever malware becomes recognized in the world, Anti-Malware will know how to deal with it. The program can perform a deep scanning, work in on-run protection mode, and be a security measure for safe Internet browsing.
- Know right before you get scammed. The scareware schemes work only because of people’s ignorance. You don’t need to be a hacker or even an advanced user. Just take a simple course on Internet surfing from someone more experienced in it.
- Don’t visit dubious websites and avoid clicking on ad banners whatsoever. You can hardly encounter malicious advertising, which scareware surely is, on trustworthy websites like Google, YouTube or Facebook. It’s not that you should limit your surfing to these three sites, but they can serve as an example of what a trustworthy website looks like. As soon as you see ad banners popping up all around you, flashing and glaring, proceed with great caution if you need to proceed at all.
- Install ad-blocking software. It goes as an extension to your browser that blocks advertising banners from rendering. It might save you a lot of nerve cells.
- If you happen to buy a scareware product, make sure you remove it as you would normally remove an application. In Windows, press Start > Settings > Apps > Apps & Features, choose the app you want to remove, and then select Uninstall. After removing the scareware, carry out an antivirus scan to get rid of any accompanying malware.
Source: https://gridinsoft.com/blogs/what-is-scareware/
The internet has forever changed advocacy. Activists that would traditionally hit the pavement in protest or organize sit-ins to garner attention are now going digital with their efforts. Hacktivism, a combination of the words hacking and activism, is the use of hacking to expose a believed injustice. It is also referred to as cyberactivism, digital activism, or online activism.
Please note that activism and protests are protected legal activities while hacking is illegal.
Hacktivists and hacktivist groups
The individuals that participate in hacking attacks as a form of activism are called hacktivists. Their motivations vary widely but tend to be social, political, or religious. There is also an array of attack types carried out by hacktivists to bring attention to their cause of choice.
Hacktivists and hacktivist groups claim that their intentions are altruistic and not meant to cause malicious harm. They cause online disruption in an attempt to bring about their desired change.
Most hacktivist groups strive to stay anonymous, but others have names and are widely recognized. Wikileaks, Syrian Electronic Army, and Cult of the Dead Cow are three examples of the most well-known hacktivist groups. While any organization or individual can be targeted by hacktivists, common targets include multinational corporations, government agencies, and powerful individuals.
Types of hacktivism
Whether it’s promoting free speech or releasing incriminating information, hacktivism cyberattacks come in many different forms:
- Anonymous blogging – the writing of blog posts under an anonymous name, often to protect a whistleblower that is exposing injustice.
- Denial of Service (DoS) – preventing legitimate access to computers or websites, typically by overwhelming them with traffic.
- Doxing – gathering of information and releasing it publicly to incriminate, embarrass or incite change.
- Geo-bombing – revealing the location, via Google Earth, where YouTube videos are recorded of political or human rights prisoners.
- Website defacement – changing the appearance of a website, typically to push messaging that brings attention to a cause important to the hacktivist.
- Website mirroring – a workaround to share censored websites. The censored website will be copied and posted with a modified URL making it publicly available.
- Website redirect – modifying the address of a website so visitors are automatically redirected to a different website that supports the hacktivist or group’s agenda.
Potential for mass disruptions
Even though hacktivism isn’t a brand new idea, hacktivist attacks are becoming more common, especially during these turbulent international times. Despite efforts from governments all over the globe, hacktivism has developed into a force to be reckoned with and no doubt can cause mass disruptions.
Source: https://measuredinsurance.com/blog/what-is-hacktivism/
The Internet of Things comprises billions of connected devices and is often difficult to comprehend. One of the myths surrounding IoT — that all data processing is done in the cloud — was debunked at the recent Smart Industry conference in Chicago.
There are, in fact, many kinds of edge computing models that handle data processing for IoT connections.
Vivek R. Davé, director of technology development for Harting, a global manufacturer of industrial connectors, and Wes Dillon, solutions consultant for MachineShop, an IoT software company, discussed the growing importance of edge computing at Harting.
Their presentation, “3D Printing and New Conceptual Framework for the IoT Edge,” sparked a conversation about the growing computing needs of IoT, specifically that there are several approaches to meeting that demand: cloud computing, fog computing and edge computing.
Businesses need to know about the different IoT edge options as they connect more devices and navigate the complexities of IoT deployments.
The Cloud Is One Clear Option
While the cloud is not the only platform that organizations can use to process data generated from IoT connections, it is a significant one. In many cases, IoT sensors collect raw data at the edge of the network and transfer it to a remote data center — the cloud — where it is stored, processed into a usable format, and fed into a back-end application for processing and analysis.
Cloud computing is essential to IoT, but many early adopters of IoT quickly learned that pushing all of their data to the cloud is not financially sustainable over the long term.
“That approach was impractical for us at Harting,” Davé said. “We’d be paying a seven-figure bill just for the transfer and storage of all the data we were gathering.”
Fog Computing Processes Data Locally
Instead of transporting that data all the way out to the wide area network, what if businesses could gather and process it at the local area network level? That is the essence of fog computing.
Sensor data from edge computing devices is pushed to a gateway (alternately called either a fog or IoT gateway) on the LAN that converts that data into an internet-friendly protocol.
Once the data is converted, it can then be analyzed, processed and stored at the LAN level. Some data may eventually get pushed to the cloud, but the fog layer allows organizations to filter only actionable data to the cloud for analysis, greatly reducing the associated costs of using that cloud service.
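To make that filtering step concrete, here is a minimal sketch (sensor names and the alert threshold are invented for the example) of a fog gateway forwarding only actionable readings upstream:

```python
def filter_actionable(readings, threshold):
    """Keep only readings worth sending to the cloud; the rest
    stay (or are aggregated) at the LAN level."""
    return [r for r in readings if r["value"] > threshold]

readings = [
    {"sensor": "temp-1", "value": 21.0},
    {"sensor": "temp-2", "value": 95.5},  # overheating -> actionable
    {"sensor": "temp-3", "value": 20.4},
]
to_cloud = filter_actionable(readings, threshold=90.0)
```

Only one of the three readings is uploaded, which is exactly where the reduction in cloud-service costs comes from.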
To be clear, “fog computing” is a concept that is being pushed by Cisco Systems. “It’s specific to their technology and architecture,” MachineShop’s Dillon said. (ARM Holdings, Dell, Intel and Microsoft are also partners with Cisco in the OpenFog Consortium, which is designed to promote fog computing.)
This being the early days of IoT, some people use fog computing and edge computing interchangeably. But they are, in fact, distinctly different.
Edge Computing Normalizes Data Streams
Another option known as edge computing pushes IoT data processing to the network edge devices themselves.
“With edge computing, you’re ingesting many different protocols,” Dillon said. “You’re normalizing that data, and you’re then acting on it locally or porting it downstream.”
A large number of industrial devices that are currently in use were deployed without connectivity built in, and were not designed to be internet-compatible. These edge devices can be configured with modular software and hardware pieces, known collectively as programmable automation controllers (PACs).
The PACs process and act upon the collected data (among many other functions that PACs serve), or push that information to the cloud for further analysis.
So which computing approach is right for your IoT deployment? The answer, Davé said, should be “all of the above.”
“Right now the conversation around IoT is very simplistic — you can either process at the edge or in the cloud,” he said. “It’s not that simple. You often need to be able to do computing both at the edge and in the cloud. It’s more complicated than people make it out to be.”
“A lot of the IoT conversation has been set by software developers,” Dillon added. “And I think the whole spectrum of ‘edge to cloud’ represents a balanced perspective for companies dealing with onsite, real-world challenges.”
Source: https://biztechmagazine.com/article/2016/10/businesses-have-multiple-iot-data-processing-options
Everyone knows this one, right? Just obey the following rules:
- Don’t give your email address to strangers
- Never post your email address on newsgroups
- Don’t leave your email address lying about on web pages.
- Don’t reply to spam – they know you’re reading it.
Unfortunately this advice is seriously out-of-date, although some emails are still harvested by spammers this way. People keep asking the question “I didn’t do any of the above, so how come I’m getting all this spam?”
What the American spammers are actually doing is using malicious software on innocent computers (installed using the normal virus channels). Amongst other things, this software searches the victim’s hard disk for all the email addresses it can find. It then sends the results back to be added to their spamming list. In order to have your email address added to a spamming list, all you need do is exchange an email with an infected PC – or a PC that becomes infected in the future.
As for item four, about never responding to spam: replying is no longer how spammers confirm your address. Spammers don’t use their real return address anyway. They track who’s reading their wares by embedding a reference to an image in an HTML email. When the message is displayed, the image is downloaded from their server; when this happens, they know who opened it. Microsoft Outlook allows this to happen; Microsoft doesn’t appear to be in any hurry to fix it.
So what can you do? Not much! If you can, use disposable emails. For example, if you’re the secretary of a club and you correspond with a large number of people, some of whom are likely to be hijacked, make your email address ’secretary1@…’. When this is compromised, change it to ’secretary2@…’ and so on.
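A tiny hypothetical helper (purely illustrative; the function name and address are made up) could automate rolling the alias each time an address is compromised:

```python
import re

def bump_alias(address):
    """secretary1@... -> secretary2@...; appends a 1 if the
    local part has no trailing number yet."""
    local, domain = address.split("@")
    match = re.search(r"(\d+)$", local)
    if match:
        local = local[: match.start()]
        n = int(match.group(1)) + 1
    else:
        n = 1
    return f"{local}{n}@{domain}"
```

The old alias can then be turned into a reject rule on the mail server while the new one is handed out.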
A proper solution is needed, but there’s no political will to solve it. The identity of the criminals doing this is well enough known; the Americans just let them operate virtually unhindered. Something to do with ‘freedom of speech’!
Source: https://blog.frankleonhardt.com/2006/19/
Farmers need to embrace advanced technologies such as the IoT in the next few years in order to support the growing human population, new research has claimed.
With the global population set to reach 11 billion by the end of the century (according to UN estimates) and climate change set to affect agriculture, changes are required in how we source and grow food, analyst firm Beecham has said.
Report co-author and chief research officer, Saverio Romeo, cites the UN Food and Agriculture Programme, saying global production of food, feed and fibre will need to increase by 70 per cent by 2050 to meet the demands of a growing population.
“This means that to optimise crop yields and reduce waste, the agriculture and farming industries will need to rely heavily on IoT and machine-to-machine (M2M) technologies moving forward. GPS services, sensors and big data will all become essential farming tools in the coming years and are clearly set to revolutionise agriculture,” he said.
The report uses an umbrella term – precision agriculture – to describe all the innovations that should be in use in the near future.
“Precision agriculture can help reduce significant losses in farming, solve problems of data collection and monitoring, and reduce the impacts of climate change,” concluded Romeo. “In the long term, we have no choice but to invest in the use of precision agriculture and smart farming because of the urgency of the problems the world faces.”
The full report, entitled Smart farming: The sustainable way to food, can be found on this link.
Source: https://www.itproportal.com/news/farming-needs-to-become-smart-to-sustain-humans/
In this course you will be introduced to the classification problem and a number of the approaches used to solve the problem. Each approach is presented with the underlying intuition as well as the necessary mathematical underpinnings. We discuss the learning algorithms and illustrate the python tools available using examples. You will learn the relative merits and demerits of each approach. The focus of the course is on learning to find the right model for the problem at hand using the available tools and experimentation. Throughout the course, exercises are provided to reinforce ideas.
What am I going to get from this course?
Learn several classification models that are widely in use.
Gain the knowledge and skills to effectively apply existing classification algorithms and tools to solve real-world problems.
Evaluate multiple models and select the most appropriate for the task at hand.
Prerequisites and Target Audience
What will students need to know or do before starting this course?
Students will benefit from prior exposure to probability and statistics, basic algebra and calculus. Familiarity with the Python programming language is required. Students should be able to use Python 3.x and Python Notebooks.
Who should take this course? Who should not?
Industry professionals and college students who are interested in learning about the available algorithms and tools to address machine learning problems in general, and specifically, the classification problem.
Source: https://training.experfy.com/courses/supervised-learning-classification
Question 692 of 952
A national retail chain needs to design an IP addressing scheme to support a nationwide network.
The company needs a minimum of 300 sub-networks and a maximum of 50 host addresses per subnet.
Working with only one Class B address, which of the following subnet masks will support an appropriate addressing scheme? (Choose two.)
A Class B address leaves 16 bits for subnetting and hosts. A minimum of 300 subnets requires at least 9 subnet bits (2^9 = 512), and 50 hosts per subnet requires at least 6 host bits (2^6 - 2 = 62 usable addresses). Two masks satisfy both constraints: /25 (255.255.255.128, giving 512 subnets with 126 hosts each) and /26 (255.255.255.192, giving 1,024 subnets with 62 hosts each).
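The arithmetic behind this answer can be checked with a short sketch (written purely for illustration):

```python
def mask_counts(prefix, classful_bits=16):
    """Subnets and usable hosts for a /prefix subnet carved out of
    a classful network (Class B, /16, by default)."""
    subnets = 2 ** (prefix - classful_bits)
    hosts = 2 ** (32 - prefix) - 2  # minus network and broadcast
    return subnets, hosts

# /25 -> (512, 126) and /26 -> (1024, 62): both give at least
# 300 subnets and at least 50 usable hosts per subnet, while
# /27 drops to 30 hosts and no longer qualifies.
```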
Source: https://www.exam-answer.com/cisco/200-125/question692
NASA is looking for a partner to help create a center that will develop
new technologies with biological attributes.
A competitive solicitation for participants in the NASA Center for Biology-Inspired
Technology is scheduled for release in late May, and award of a cooperative
agreement is expected in September.
Participation in the solicitation is open to industry, educational institutions,
nonprofit organizations, NASA field centers and other government agencies.
NASA is interested in biology-inspired technologies because they will enable
novel space missions and research capabilities, according to a draft cooperative
agreement notice. Annual funding for research is estimated at $1.5 million.
NASA anticipates that technology based on biological processes will
not only improve space systems and the capability of humans to inhabit space,
but also commercial and medical systems on Earth.
Drawing on plans for humans to inhabit the International Space Station,
NASA envisions the need for a human-machine partnership in which human thought
and action and technological systems are linked and are equally important
aspects of analysis, design and evaluation.
Achieving those types of systems requires bold research efforts designed
to mimic biological solutions to sensory perception, communication, computation,
adaptation and motor control, according to NASA.
Source: https://fcw.com/2000/04/nasa-seeking-biotech-partner/241719/
Top 10 Must-Know Machine Learning Algorithms in 2022
Top 10 Algorithms to Create Functional Machine Learning Projects
From simple day-to-day functions to making computers smarter, Machine Learning algorithms help automate manual tasks and make our lives simpler. The significance of Machine Learning keeps growing, which is why enthusiastic data scientists and engineers look forward to learning different techniques to hone their skills.
Below are the top 10 Machine Learning algorithms that you should know. These will help you create practical projects, whether you choose a Supervised, Unsupervised, or Reinforcement Learning model.
Read our Infographic: What Machine Learning is and why it is important in business
1. Apriori Algorithm
Apriori algorithm is a type of machine learning algorithm that creates association rules based on a pre-defined dataset. The rules are in the IF_THEN format, which means that if action A happens, then action B will likely occur as well. The algorithm derives such conclusions by analyzing how often action B occurs in transactions that also contain action A.
One of the most common examples of the Apriori algorithm can be seen in Google auto-complete. When you type a word, the algorithm automatically suggests associated words that are mostly typed with that.
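The heart of the technique, counting co-occurring items and keeping only those above a support threshold, can be sketched as follows (basket data invented for the example):

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count item pairs that co-occur in at least min_support
    transactions -- the pruning step Apriori is built on."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

baskets = [{"bread", "butter"}, {"bread", "butter", "jam"}, {"bread", "jam"}]
rules = frequent_pairs(baskets, min_support=2)
```

A full Apriori implementation extends the same counting from pairs to larger itemsets, pruning any candidate whose subsets are not themselves frequent.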
2. Naive Bayes Classifier Algorithm
Naive Bayes Classifier algorithm works by assuming that each feature in a category is unrelated to the other features of the group. This lets the algorithm consider all the features independently as it calculates the outcome. It is very easy to build a Naive Bayes model for huge datasets, and it can even outperform many of the more complex classification methods.
The best example of the Naive Bayes Classifier algorithm will be email spam filtering. The function automatically classifies different emails as spam or not spam.
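A minimal multinomial sketch of that spam example, with Laplace smoothing and invented toy documents, might look like this:

```python
import math
from collections import Counter

def train(docs):
    """docs: (words, label) pairs; returns per-label word counts
    and per-label document counts."""
    words = {"spam": Counter(), "ham": Counter()}
    docs_per = Counter()
    for ws, label in docs:
        words[label].update(ws)
        docs_per[label] += 1
    return words, docs_per

def classify(ws, words, docs_per):
    vocab = len(set(words["spam"]) | set(words["ham"]))
    def log_prob(label):
        lp = math.log(docs_per[label] / sum(docs_per.values()))
        total = sum(words[label].values())
        # every word is treated as independent, with Laplace smoothing
        return lp + sum(math.log((words[label][w] + 1) / (total + vocab))
                        for w in ws)
    return max(words, key=log_prob)

model = train([(["win", "money", "now"], "spam"),
               (["free", "money"], "spam"),
               (["meeting", "at", "noon"], "ham"),
               (["lunch", "at", "noon"], "ham")])
```

Real spam filters add tokenization, larger vocabularies and priors learned from millions of messages, but the scoring rule is the same.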
3. Linear Regression Algorithm
Linear Regression algorithm determines the correlation between a dependent variable and an independent variable. It helps understand the effect that the independent variable will cause on the dependent variable if the former’s value is changed. The independent variable is also referred to as the explanatory variable, while the dependent variable is termed as the factor of interest.
Generally, the Linear Regression algorithm is used in risk assessment processes, especially in the insurance industry. The model can help to figure out the number of claims as per different age groups and then calculate the risk as per the age of the customer.
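Under the hood, fitting one explanatory variable is ordinary least squares; here is a self-contained sketch with invented age-versus-claims numbers:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical customer ages vs. expected claims per 100 policyholders
a, b = fit_line([20, 30, 40, 50], [10, 14, 18, 22])
```

The fitted slope `b` is exactly the "effect on the dependent variable when the independent variable changes" described above.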
Related Reading: Can Machine Learning Predict And Prevent Fraudsters?
4. K-Means Algorithm
K-Means algorithm is commonly used for solving clustering problems. It divides a dataset into a specific number of clusters, referred to as "K". The data is categorized in such a way that all the data points within a cluster remain homogeneous, while the data points in one cluster differ from those grouped in other clusters.
For instance, when you look for, say, “date”, on the search engine, it could mean a fruit, a particular day, or a romantic night out. The K-Means algorithm groups all the web pages that mention each of the different meanings to give you the best results.
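Lloyd's algorithm, the usual way K-Means is computed, fits in a dozen lines for scalar data (numbers invented for the example):

```python
def kmeans_1d(points, centers, iters=10):
    """Alternate between assigning each point to its nearest center
    and moving each center to its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) for c in clusters if c]
    return sorted(centers)

centers = kmeans_1d([1, 2, 3, 10, 11, 12], centers=[0, 5])
```

With K = 2 the centers settle on the two obvious groups; higher-dimensional data only changes the distance function.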
5. Decision Tree Algorithm
Decision Tree algorithm is one of the most popular Machine Learning algorithms out there today. The model works by classifying problems for both categorical and continuous dependent variables. All the possible outcomes are divided into standardized sets based on the most significant independent variables, using a tree-branching methodology.
The most common example of the Decision Tree algorithm can be seen in the banking industry. The system helps financial institutions to categorize loan applicants as well as determine the probability of a customer defaulting on his/her loan payments.
Related Reading: How Predictive Algorithms and AI Will Rule Financial Services
6. Support Vector Machine Algorithm
Support Vector Machine algorithm plots data as points in an n-dimensional space, where "n" is the number of properties in hand, and finds the line or hyper-plane that best separates the categories. The algorithm is also commonly applied to regression problems.
For instance, stockbrokers use the Support Vector Machine algorithm to compare the performance of different stocks and listings. This helps them to device the best decisions for investing in the most lucrative stocks and options.
7. Logistic Regression Algorithm
Logistic Regression algorithm estimates a discrete binary outcome from a set of independent variables. It forecasts the likelihood of an outcome by fitting the data to a logit function. Including interaction terms, eliminating properties, standardizing features, and using a non-linear model can also help create better logistic regression models.
The probability of the outcome of a specific event in the Logistic Regression algorithm is calculated as per the included variables. It is commonly seen in politics to predict if a candidate will win or lose in the election.
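A bare-bones single-feature version, trained with gradient descent on synthetic data (a real deployment would use a library implementation), can be sketched as:

```python
import math

def train_logreg(data, lr=0.1, epochs=200):
    """data: (x, y) pairs with y in {0, 1}. Fits w, b for the
    logit model p = 1 / (1 + exp(-(w*x + b)))."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x  # gradient of the log loss
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    # w*x + b >= 0 is the same as sigmoid probability >= 0.5
    return 1 if w * x + b >= 0 else 0

w, b = train_logreg([(-2, 0), (-1, 0), (1, 1), (2, 1)])
```

The model outputs a probability, so the 0.5 cutoff can be moved when false positives and false negatives carry different costs.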
8. K- Nearest Neighbors Algorithm
K Nearest Neighbors or KNN algorithm is used for both the classification and regression of problems. The model stores all the available cases and classifies a new case by a majority vote of its K nearest neighbors, as measured by a distance function. The new case is then added to the identified dataset.
K Nearest Neighbors needs a lot of storage space to save all the data from different variables. However, it only functions when needed and can be very reliable in predicting the outcome of an event.
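The mechanics, a sort by distance followed by a majority vote, fit in a few lines (points and labels invented for the example):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training points nearest to
    query, using squared Euclidean distance."""
    def dist(item):
        (x, y), _ = item
        return (x - query[0]) ** 2 + (y - query[1]) ** 2
    nearest = sorted(train, key=dist)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train_set = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
             ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Because every prediction rescans the stored cases, the storage and lookup costs noted above grow directly with the size of the training set.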
9. Random Forest Algorithm
Random Forest algorithm works by grouping different decision trees based on their attributes. This model can deal with some of the common limitations of the Decision Tree algorithm. It can also be more accurate to predict the outcome when the number of decisions goes higher. The decision trees are mapped here based on the CART or Classification and Regression Trees model.
A common example of the Random Forest algorithm can be seen in the automobile industry. It is seen to be very productive in forecasting the breakdown of a specific automobile part.
10. Gradient Boosting and Adaptive Boosting
Gradient Boosting and Adaptive Boosting (AdaBoost) algorithms can be used when you need to handle a huge amount of data and predict the outcome with the highest accuracy possible. Boosting algorithms combine the power of different basic learning algorithms to improve the results. It can also merge weak or average predictors to get a strong estimator model.
Gradient boosting is generally used with decision trees, while AdaBoost is typically used to improve binary classification problems. Boosting can also correct the misclassifications found in different base algorithms.
The above-listed Machine Learning algorithms will help you get started with your desired projects right away. These will equip you for understanding the scope of Machine Learning as well as work out complex problems more easily.
Related Reading: How Machine Learning Boosts Customer Experience
Want to develop machine learning applications that deliver better experiences for your users? Connect with us.
Source: https://www.fingent.com/blog/top-10-must-know-machine-learning-algorithms-in-2020/
General Remarks on Partitioning
A partition is a division of a logical database or its constituent elements into independent parts. Database partitioning may be done for reasons of performance, manageability, or availability. This section concentrates on partitioning to improve performance.
By splitting a large table into several smaller tables, queries that need to access only a fraction of the data can run faster because there is less data to scan. Maintenance tasks, such as rebuilding indexes or backing up a table, can also run more quickly. Placing logical parts on physically separate hardware provides a major performance boost since all this hardware can perform operations in parallel.
Interaction Server performs large numbers of queries, updates, inserts, and deletes on its database. While it is relatively easy to achieve optimal performance with updates, inserts, and deletes, queries (SELECTs) are different.
The Interaction Server database consists of a single major table that stores all the interaction data. Every interaction in the system is always assigned to some interaction queue, represented by value of the field queue in the Interaction Server table. Business processes may employ dozens or even hundreds of queues.
Queues can vary greatly in the way they are used: some hold many interactions which are rarely processed at all (for example, an archive queue), others hold a small number of interactions with a high processing rate (for example, a queue for interactions that need some preliminary processing).
If these two types of queue are separated into different partitions, then the slower selection rate of the first type will not interfere with the high-speed selections of the second type. So the queue field is a natural choice to partition the data on. The remainder of this section describes partitioning by queue.
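The payoff of separating slow archive queues from hot processing queues can be sketched in miniature. The table names, queue names, and row counts below are invented for illustration, and a production system would use the database engine's native partitioning features (and Interaction Server's own schema) rather than hand-built per-queue tables:

```python
import sqlite3

# Illustrative sketch only: names and sizes are invented, and a real
# deployment would rely on the engine's declarative partitioning.
conn = sqlite3.connect(":memory:")

# One "partition" per queue type instead of a single monolithic table.
for part in ("interactions_archive", "interactions_preproc"):
    conn.execute(
        f"CREATE TABLE {part} (id INTEGER PRIMARY KEY, queue TEXT, payload TEXT)"
    )

# A large, rarely scanned archive queue...
conn.executemany(
    "INSERT INTO interactions_archive (queue, payload) VALUES (?, ?)",
    [("archive", f"msg-{i}") for i in range(10_000)],
)
# ...and a small, hot preprocessing queue.
conn.executemany(
    "INSERT INTO interactions_preproc (queue, payload) VALUES (?, ?)",
    [("preproc", f"msg-{i}") for i in range(10)],
)

# The hot query touches only the small partition, never the archive rows.
hot = conn.execute("SELECT COUNT(*) FROM interactions_preproc").fetchone()[0]
print(hot)  # 10
```

Because the hot query never touches the archive partition, its cost no longer grows with the size of the archive.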
Develop a proactive cybersecurity framework that emphasizes threat detection and vulnerability management.
In cybersecurity, a vulnerability is a weakness that can be exploited by a threat actor, such as an attacker, to perform unauthorized actions within a computer system or network. A typical example is a mistake in software code that provides an attacker with direct access to a system or network until the code is patched.
Exposure is the number of vulnerable systems or networks that attackers can access.
Risk is a measurement of future loss from a given scenario, such as a ransomware event, derived from the probable frequency and magnitude of loss events. Avoiding loss is a fundamental reason for maintaining an enterprise vulnerability management program.
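The "probable frequency and magnitude of loss events" view of risk can be expressed as a simple annualized loss expectancy. The figures below are invented for illustration; a real program would derive them from incident and exposure data:

```python
# Hypothetical inputs; real programs derive these from incident data.
loss_event_frequency = 0.4   # expected loss events (e.g. ransomware) per year
loss_magnitude = 250_000     # expected cost per event, in dollars

# Risk as annualized loss expectancy: probable frequency x probable magnitude.
annualized_loss = loss_event_frequency * loss_magnitude
print(annualized_loss)  # 100000.0
```

A figure like this is what lets a vulnerability management program justify itself: mitigation that costs less than the expected loss it prevents is an easy case to make.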
One of the many ways to help protect your business from unplanned downtime is to develop a vulnerability management program. These programs help your IT team with vulnerability identification and patch management, and optimize your position against cyber-attacks.
What is Master Data?
Master data is the data that sits at the center of all core business processes and applications. It’s high-confidence, high-integrity (master) data that’s relied on to produce favorable business outcomes.
As more decision making and planning activities leverage new algorithmic methods and processes (like Machine Learning), managing master data is more important than ever. You can imagine the problems and devastation caused by decisions based on bad or inconsistent data.
An Example of Bad (Very Bad) Master Data Management
In fact, we can learn a lot from NASA’s loss of the Mars Climate Orbiter in 1999. Hundreds of millions of dollars in time, research, and equipment vanished because of an easily avoidable data consistency problem.
The problem? Different units of measurement were used by NASA and Lockheed Martin. According to NASA engineer Richard Cook, “The spacecraft had more or less hit the top of the atmosphere and burned up.” Apparently the thrusters were only putting out ¼ of the thrust they should have been.
In NASA’s case, units of measurement certainly was master data, but for some reason, it didn’t make their list.
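The failure mode is easy to reproduce in miniature: one component reports impulse in pound-force seconds, and another consumes the number as if it were newton-seconds. The value 100.0 below is arbitrary; only the ratio matters:

```python
LBF_S_TO_N_S = 4.44822  # one pound-force second expressed in newton-seconds

# Ground software emitted thruster impulse in pound-force seconds...
reported = 100.0
# ...but the trajectory model read the same number as newton-seconds,
# so it accounted for only about 22% of the impulse actually delivered.
modelled_fraction = reported / (reported * LBF_S_TO_N_S)
print(round(modelled_fraction, 2))  # 0.22
```

That roughly 4.45x discrepancy is consistent with the "quarter of the thrust" figure quoted above, and it is exactly the kind of mismatch a mastered, organization-wide definition of units would have caught.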
So What is Master Data Management?
Master data management is governing master data for consistency and reuse with the goals of:
- Reduced IT costs
- Assuring consistent positive business outcomes
- Preserving business process integrity
According to Gartner, Inc:
“Master data management (MDM) is a technology-enabled discipline in which business and IT work together to ensure the uniformity, accuracy, stewardship, semantic consistency and accountability of the enterprise’s official shared master data assets.
"Master data is the consistent and uniform set of identifiers and extended attributes that describes the core entities of the enterprise including customers, prospects, citizens, suppliers, sites, hierarchies and chart of accounts.”
THE BIG IDEA: MDM is simply an attempt to solve an age-old problem: inconsistent versions of critical data within the center of an organization.
This problem begins the very moment a second business application is added.
MDM goes hand-in-hand with an organization’s enterprise data model which covers all (most!) of the organization’s data.
"An Enterprise Data Model represents a single integrated definition of data, unbiased of any system or application. It is independent of “how” the data is physically sourced, stored, processed, or accessed.
The model unites, formalizes, and represents the things important to an organization, as well as the rules governing them."
— Noreen Kendle, 2005
Why Master Data Management is Important
The importance of managing master data cannot be overstated. MDM:
- Increases agility
- Drives competitive differentiation
- Underpins critical business processes
What makes MDM difficult is the fact that many software applications are developed and deployed in silos. Adding to the difficulty, many organizations lack the desire, commitment, and ability to create a common link between data and improved business outcomes.
BIG TIP: Historically, it's often been difficult for CIOs, and more recently CDOs, to demonstrate the value of an advanced data analytics strategy.
To justify the effort, focus strategy on innovation, the value of information, risk mitigation, and supporting the decisions that drive the most valuable business outcomes.
What Leads to Successful Master Data Management?
Successfully developing and maintaining master data requires managing metadata, data modeling and mapping, and semantic reconciliation (recognizing that two objects are the same even when they’re described differently).
Not all data needs to be managed. If you listed all data objects or attributes stored in all business processes and applications, the list would be immense.
So how do you determine what data should fall within MDM?
Use this rule of thumb: the only data which should be included in your MDM program is data that is directly tied to the business outcomes and processes the organization is willing to change.
It is easy to see that organization-wide MDM is a major undertaking. It requires an immense amount of time, money, and frankly – patience to implement. Successful MDM initiatives start out small, and grow in scope over time.
Another way to determine the most important data to manage is looking at what metrics and data support the most important KPIs. Maintaining high data quality targets for these critical processes is important.
A logical, organic approach to an MDM implementation is to adopt a single-subject-area approach. For example, a procurement outsourcing organization may use MDM for line-item invoice validations. Another common starting point is using MDM to create a 360-degree view of customers to support CRM objectives. This incremental effort builds foundational stepping points for future MDM growth.
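Semantic reconciliation, recognizing that two differently described records refer to the same entity, can be sketched with simple fuzzy string matching. The records and the 0.7 threshold below are invented, and production MDM tools use far richer matching (normalization, phonetics, address parsing, machine learning):

```python
from difflib import SequenceMatcher

# Two customer records from different silos, described differently.
rec_a = "ACME Corporation, 12 Main St"
rec_b = "Acme Corp., 12 Main Street"

def similar(a: str, b: str, threshold: float = 0.7) -> bool:
    """Crude match: normalized edit-similarity over lowercased strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

match = similar(rec_a, rec_b)
print(match)  # True
```

Crude edit-distance matching like this is only a starting point, but it illustrates the core judgment an MDM hub must make before it can maintain a single golden record per entity.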
How to Get Started with Master Data Management
There are many different ways that this important data can be stored and managed. Some methods include consolidation, registry, centralized, and coexistence.
- Consolidation – Primarily used to support business intelligence and data warehousing. Some refer to this as downstream MDM – applying MDM downstream of the systems which created the master data.
- Registry – As the name suggests, this is an index for master data which remains fragmented across silos.
- Centralized – This style is used in high-control, top-down environments where data is authored, stored, and accessed from one or many MDM “hubs.” It offers the greatest control over access and security, but is highly invasive to the organization.
- Coexistence – This is a large-scale distributed model. A central system maintains a “golden copy” of data which is published to subscriber systems.
Implementation styles are not mutually exclusive. Many organizations start with one style and later move to another. You might start with consolidation, and then move to coexistence based on changing needs.
MDM is Not a Technology
There are many systems integrators and MDM vendors to choose from – all offering their flavor of MDM and tailoring the best approach for your organization’s goals and data. It is not uncommon to leverage multiple MDM products or vendors.
But don't make the mistake of thinking about MDM as a technology. Rather, it is a discipline focused on the processes required to govern master data. Technology is simply used to support the discipline.
What is Master Data Management – the Bottom Line
The goal of MDM is achieving consistency, accuracy, integrity, and semantic consistency of master data. And all this in an effort to grow and support important business outcomes.
MDM helps break down operational data silos, but requires an immense amount of commitment from the organization.
BE WARNED: The greatest challenges to MDM are not technical. Rather, they’re related to political and organizational silos.
“In a room where people unanimously maintain a conspiracy of silence, one word of truth sounds like a pistol shot” – Czeslaw Milosz
Every year United States news magazine and website TIME features a person, a group, an idea, or an object as ‘Person of the Year’ and for the year 2002, three women were given this title – Cynthia Cooper, Sherron Watkins, and Coleen Rowley – and the edition was called ‘The Whistleblowers’. Cynthia, Sherron, and Coleen were chosen for this title because they were the whistle-blowers of large organizations – WorldCom, Enron, and the FBI respectively. TIME described them as the “three women of ordinary demeanor but exceptional guts and sense”. Since then, many high-profile whistle-blowers have been making space in the news headlines, especially from the giant tech industries, like – Frances Haugen, who exposed Meta for exploitation of personal data, Timnit Gebru and Rebecca Rivers, who challenged Google on ethics and AI issues, and Janneke Parrish, who raised concerns about the discriminatory work culture at Apple. Looking at the data of all the big tech industry whistleblowers, a conclusion was made that most of the whistleblowers in tech industries were women! So, does that mean women are more likely to be whistleblowers than their male peers? The answer is not so straightforward.
Who are whistleblowers? A whistleblower is a person who comes forward and discloses information on any wrongdoing that might be happening in an organization as a whole or even in just one specific department of the organization. In short, they spill the beans. This person could be anyone who witnesses any sort of malfeasance, an employee, a government agency, a contractor, or even a supplier. A whistleblower can be categorized as internal or external based on whom they report the misconduct to. If it is reported to senior officers of the organization like the HR Head or CEO then it is called internal whistleblowing and if reported to people outside of the organization like media, police, or government then it is called external whistleblowing. Clearly, whistleblowing is not a gender-specific vocation but an opportunity to call out frauds.
According to an article published in Fortune magazine, women whistleblowers are different from men in possessing certain attributes which are beneficial to the trade:
- Their way of tolerating risk: Research on gender and risk by Judy Rosener, a professor emerita at the University of California Irvine states that “Women tend to see the downside of risk while men tend to see the upside, which means women tend to take less risk”. As a result, women are less likely to tolerate workplace shenanigans and ethical uncertainty. Some experimental studies and attitudinal surveys also show that women are directly associated with a lower incidence of bribery.
- Their motherhood gene: As discriminatory as it sounds, many people believe that women have certain “genes” which make them more inclined to defend those in weaker positions. In a corporate setup these would be, mistreated employees, cheated customers, or deceived shareholders.
- Their outsider’s status: It is now a known fact that women often feel like outsiders in their own companies. This is largely because even though women are at the helm of giant industries, they still constitute only a minor proportion of the entire workforce and even less in the leadership positions. Consequently, they might feel less loyal towards their employers.
If we look carefully at the above-mentioned points, it is very likely that all these arguments came from the way men and women are socialized into different gender roles in society and the way we picture them.
While there might be reasons for women to become whistleblowers, there are also some points why women might hold back from doing so. Women who fear the consequences of reporting misconduct have a good reason to be cautious as studies show that female whistleblowers have to experience more retaliation than male whistleblowers. A 2008 study published in the journal Organization Science, examined whether a whistleblower’s gender and level of power in the organization can affect the probability of facing retaliation and it was concluded that a woman’s level of power and authority didn’t protect her from retaliation but there was a significant correlation in the males that the more powerful they were, the less retaliation they experienced.
Most companies now have separate policies for people to be able to safely report incidents without losing their job or getting mistreated by their bosses or colleagues. Today, even government and media tend to support whistleblowers with the typical portrayal being ‘a brave individual is willing to risk their personal fall-out to ensure safety and justice for others’. However, this was not always the way whistleblowers were seen.
For a very long time, lawmakers viewed whistleblowers as individuals who were unfaithful to their work and organization. The courts prioritized an employee’s “duty of loyalty” above their efforts to help the public. In 1982, a Texas court said it was okay for a nursing home to lay off an aide who complained that her boss didn’t seek a doctor for a resident after that resident’s stroke, because it was suggested that the aide violated her duty of loyalty by making her complaint. This massive shift from old beliefs to the contemporary world’s priorities took decades to happen.
Whistleblowing is a complicated situation to analyze as its public manifestation is just the tip of the iceberg and most of the whistleblowing is confidential. Since most of the data on whistleblowing is anonymous, conclusions about the influence of gender on whistleblowing are not very accurate. There are just hints, rather than hard evidence. All this could be confirmation bias, where the concept of female whistleblowers perfectly fits the portrait society has created of women being more altruistic and morally virtuous than men.
More than the relation of gender in whistleblowing what’s important is that whistleblower protection should be strengthened so that more people step forward and help towards a better, less risky, and more regulated future.
The Australian Signals Directorate (ASD) Essential Eight has received considerable attention since it included an additional four strategies to the previously defined ‘Top 4 Strategies to Mitigate Cybersecurity Incidents’. Logan Daley continues the ASD Essential Eight Explained series below.
Macros are basically a batch of commands and processes all grouped together to make life a little easier when performing routine tasks. In many cases, they simply execute as the user and save untold hours, reducing the number of errors one can make with tedious tasks. Unfortunately, macros are also a popular exploit vector, as attackers leverage this autonomy and ability to execute code, reaching even beyond the application itself. Anyone that has been around for a long time will remember the Melissa macro virus and the havoc it caused with email services worldwide. Or even the Wazzu macro virus that altered the content of files. Most of this is due to Visual Basic for Applications (VBA), which is still used to this day. Microsoft, to their full credit, has done a tremendous amount of work to secure macros in the past several versions of Office, but you can’t save people from themselves. I once had a car with advanced safety features but all the technology in the world wouldn’t keep me from driving off the road if I did it on purpose.
While it might be tempting to simply disable all macros, full stop, that isn’t the answer. Remember that macros exist for a reason and that’s to automate tasks, save time, and keep some of us from going loopy after doing the same thing a thousand times over. A better approach is to selectively trust macros but remove the choice from the end user. How do we trust macros? Digitally sign them and then lock down the application to disable all but the signed ones.
So how do I digitally sign macros?
This is where it can get complex. While there are tutorials about how to self-sign digitally signed macros, self-signed certificates really don’t inspire any trust in the broader community, so the availability of a Public Key Infrastructure (PKI) infrastructure, either internal using the Microsoft solution or external using a third-party trusted Certificate Authority (CA) is preferred. Rather than bog you down in details, I would encourage you to start exploring digital signing of your macros and get the right people involved before moving ahead. This is a perfect example of when you need to put your hand up and ask for some help unless you have the in-house skills. On top of digitally signing and distributing your macros, you also need to consider policies that lock down these features in the office applications lest your users will just go in and disable this protection anyway to run all macros. Yes, scary, I know.
Of course, in an environment that doesn’t need macros, go ahead and just disable them completely. I doubt, however, that many of these environments actually exist.
There can be a lot of moving parts here, so a plan is critical. Consider group policies, restricted privileges, macro control and distribution, digital signing and PKI and you will quickly see how many places you can come off the rails. Please don’t throw this in the “too hard bucket” because there is a lot to gain when macros are managed correctly, especially in an environment where the productivity can be impacted tenfold by their proper use but a hundredfold by their exploitation.
The macros themselves have to be trusted because as you can imagine, if we make a mistake and then trust that mistake, digital signing won’t make an ounce of difference. You must QA the macros and thoroughly test them before using them. Human error, as with all things, is omnipresent.
Determine if you need macros. If no, then happy days, just implement a blanket policy to disable them across the board and move on. For non-domain systems, just disable them in your applications. For the rest of us, and likely the majority, that need macros, it’s time to take inventory of the macros we use. Delete the ones we don’t and begin the process of vetting the ones we do. Digitally sign your required macros after thorough QA and testing, and then distribute and control as needed. Ideally, we should never execute an untrusted macro unless we’re the ones that developed it and are trying to make it legitimate. Once these hurdles have been crossed, you can get back to unhindered productivity and actually make it out of the office before midnight.
By the way, it’s worth considering macros in applications other than office. Microsoft isn’t the only ones that figured out macros are incredibly powerful.
Find out what your current policy is on Microsoft Office Macros and if you don’t have one, consider creating one. As I mentioned earlier, this can be complex with a lot of moving parts so unless you have the resources like in-house skills and PKI, put up your hand and ask us to help you. If you have the resources, look at locking down your macros and controlling their distribution and the end user control over the applications. People are very skilled at Googling how to bypass security settings and pushing their limits. Logging and alerting may be a worthwhile side project to this as well. For those of you that already have all of this in place including digitally signed macros, it’s time to run a health check on your current state to make sure it’s still doing what it’s supposed to. Nothing in this world is ever set-and-forget.
When we look into the future, we have a terrible habit of underestimating it. We look at the last 10 years, how far technology has come in that time, and expect this development to be mirrored in the coming decade.
This couldn’t be further from the truth. In reality, progress follows the law of accelerating returns and the last 10 years saw significantly more growth than the decade before.
Explained as simply as possible, the ‘law of accelerating returns’ is based on the principle that as we develop more technology, further growth becomes easier to conceive. We learn from experience, and faster due to the resources available to us.
When we think about this velocity, smart cities seem just around the corner, and as a concept are heavily reliant upon data centers for their success.
“The world population presently stands at approximately 7.7 billion people, with nearly four billion or 54 percent living in cities today. By 2050, it's projected that more than two-thirds of the world population will live in urban areas, with seven billion of the expected 9.7 billion people occupying cities globally.”
Marc Cram, the director of new market development for Server Technology, a brand from Legrand, sees these population densities as directly related to the development of smart cities. This is, in part, due to necessity. As the population expands, we need to find a way to manage it effectively in the highly dense areas. Resources will be stretched and become reliant on efficiency.
“A successful smart city will depend on solutions across six major domains. Economy, environment and energy, government and education, living and health, safety and security, and lastly, mobility. The common understanding for a smart city in 2021 is one that provides for the real-time monitoring and control of the infrastructure and services that are operated by the city, thereby reducing energy use and pollution, while improving health, public safety, and the quality of life of the citizens and visitors.”
It should not go unacknowledged that this could trigger some anxiety. The concept of real-time monitoring can feel a little too Orwellian for many people’s comfort but, frankly, is already a part of our daily lives.
We are already monitored by GPS on our mobile phones, our preferences are tracked on the internet, and microphones listen to our every conversation. The question that smart cities answer is how this integration of technology can improve our daily lives.
This process is already beginning in New York.
“Smart, in this case, means being efficient in the use of both human and financial capital to minimize energy usage while ensuring the quality of service for public utilities such as water, electricity, and transportation, and to provide for the day-to-day safety of people and resilience of infrastructure.
“Smart means taking advantage of automation and remote management capabilities for lighting, power, transportation, and other mission-critical applications to keep the city running. With thousands of cameras and millions of sensors already in use around New York City, they are already well on their way to being a smart city that processes over 900 terabytes of video information every single day.
“Recently, the city of New York committed grant monies for the development of a couple of new IoT-based applications that will likely require the use of smart street lighting that is available through the New York Power Authority.
“First is a real-time flood monitoring pilot project led by the City University of New York and New York University to help understand flood events, protect local communities, and improve resiliency. Two testbed sensor sites were chosen - one in Brooklyn and the other in Hamilton Beach, in the Queens area, which has a history of nuisance flooding. The software solution that is being tested must act as an online data dashboard for residents and researchers to access the collected flood data.
"Secondly, the city is testing computer vision technologies that automatically collect, and process, transportation mobility data through either a live video feed, recorded video, or through some sort of site-mounted sensors. Currently, street activity data is collected through time and person-intensive methods that limit the location, duration, accuracy, and number of metrics available for analysis. By incorporating computer vision-based automated counting technology, the city hopes to overcome many of these limitations with flexible solutions that can be deployed as permanent count stations, or short duration counters with minimal setup costs and calibration requirements.”
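What a dashboard like the one described might do with raw sensor readings can be sketched very simply. The site names, depth readings, and 10 cm threshold below are all invented for illustration:

```python
# Invented readings (water depth in cm) from two hypothetical testbed sites;
# the 10 cm nuisance-flooding threshold is an arbitrary illustrative choice.
readings = {
    "brooklyn": [0.0, 1.2, 2.5, 3.1],
    "hamilton_beach": [4.0, 8.5, 12.3, 15.0],
}

THRESHOLD_CM = 10.0

def flooded_sites(data: dict[str, list[float]]) -> list[str]:
    """Sites whose latest reading exceeds the nuisance-flooding threshold."""
    return [site for site, depths in data.items() if depths[-1] > THRESHOLD_CM]

alerts = flooded_sites(readings)
print(alerts)  # ['hamilton_beach']
```

Even this toy version shows the shift the city is after: continuous, per-site data that can drive alerts automatically, instead of periodic manual counts and inspections.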
At some point, this technology could develop even further, interacting with citizens on a personal level by utilizing Edge computing. As Cram pointed out, “for example, the light fixture can be a WiFi or LiFi device. A desk itself could be outfitted with sensors to detect how warm or cool you are, and measure movement to suggest getting up and walking around for a period of time.”
Society's dependence on technology and an always-online connection is beginning to reveal an increasing number of vulnerabilities that are much bigger than any government or business. For example, we have already witnessed the fragility of the energy supplies and supply chains that many have taken for granted. In addition, social media echo chambers, censorship, and disinformation have all contributed to the polarization that divides communities across the world.
Anyone can target an attack on our critical infrastructure. Anything from power grids to water supplies is now a target for cyberattackers. In addition, both ends of the financial spectrum, from Canadian truckers to Russian billionaires, have had their assets frozen. As a result, leaders face difficult and complex choices that could protect society while impacting personal freedoms. Thankfully, this is not another political article but a wake-up call that cyber resiliency is now entangled with everything in our always-online digital world.
We now manage everything from our smartphones. Online banking and an increasingly cashless society mean that every bill, payment, or purchase relies on the internet. Many are also filling their homes with smart products to manage their entertainment, heating, lighting, and even cooking. In addition, many homes are protected with CCTV, smart doorbells, and burglar alarms. Even the feeling of belongingness, confidence, and esteem was largely fostered online during lockdowns during the pandemic.
Maslow's pyramid of needs highlighted that our basic physiological needs include food, water, rest, warmth, safety, and security. But the bigger question is how many of these would be impacted if a global cybersecurity attack removed our always-on connection to the services we have become so dependent on.
Is a cybersecurity war already underway?
There is a strong argument that we are already in the early stages of a global cyberwar. Our news feeds are bombarded every week with stories of data breaches or ransomware attacks where critical data from governments and businesses has been compromised, which can impact the national security and stability of a nation.
The SolarWinds hack and the Kaseya attack show how everything connected has become a target as attackers begin to think much bigger than merely stealing data. Sabotage of military databases and attacks on critical infrastructure have also been rising steadily over the last few years against public transport systems, power grids, banking, water supplies, supply chains, and hospitals. When combined with online disinformation campaigns, we begin to see the impacts of destabilization.
Bad actors have an expanding list of attack vectors at their disposal. Unfortunately, phishing, malware, and traditional viruses are often preferred to effectively take down the critical infrastructure that we all depend on. DDoS attacks are no longer just used by script kiddies and are often considered an easy way to prevent users from accessing their devices or networks.
The global shortage of semiconductors
Elsewhere, some analysts are casting a watchful eye on the tensions between China and Taiwan. There is currently a global chip shortage, and Taiwan now accounts for 92% of the world's most advanced semiconductor manufacturing capacity. The problem is that semiconductors are the brains of modern electronics. Everything from smartphones, laptops, and game consoles to the cars we drive depends on them.
The global chip shortage is responsible for big brands such as Jaguar Land Rover losing £9m ($12m) in the final three months of 2021 alone. The war in Ukraine is already responsible for halting the production of over half of the world's neon output for chips. But within a few days of the Russian invasion of Ukraine, Taiwan also revealed it would be joining the international sanctions against Russia.
With the entire world being reliant on chip makers in Taiwan, every business is vulnerable, and arguably the reason why many analysts believe the island nation is so attractive to Chinese authorities. However, it's time for the average person to understand that the ramifications of this global chip shortage are much more than a difficulty getting their hands on the latest iPhone or PS5 games console.
Instead, people should consider who will control the chips that power everything and how new technologies introduce more cyber risks and vulnerabilities.
In recent weeks Russia has amassed up to 190,000 troops in the attack against Ukraine. Although many will sympathize and support the brave people fighting back, some will look at the map and think it's a long way from where they are living their lives. But cyber attackers do not care about borders.
If we do enter a world cyberwar where attackers exploit weaknesses, poor cyber hygiene, and vulnerabilities, we run the risk of mass casualties without a single shot being fired. Hospital emergency rooms without power or mass riots caused by misinformation campaigns and the absence of communication networks could impact every individual wherever you are reading this.
However, it doesn't have to be this way. It's simply a case of recognizing that cybersecurity is everyone's business and needs to be a strategic priority for governments and businesses. Companies also need to invest in their people to tackle the skills gap and to recruit and retain cyber talent. If the prospect of everything going offline is enough to keep you awake, now is an excellent opportunity to be proactive about your cyber resilience before it's too late.
Source: https://cybernews.com/security/how-a-cybersecurity-war-and-global-chip-shortage-could-spark-wwiii/
The demand for digital skills across every industry is continuously increasing. Employees are now challenged to embrace a career that consists of lifelong learning to thrive and survive in an always-online world. But the digital skills gap is widening and is reportedly responsible for the UK losing out on £63bn in GDP every year.
Although mobile and emerging technologies are helping to accelerate the delivery of education, digitization is increasing the digital divide in education. Many pupils do not have the luxury of having smartphones, tablets, and laptops at their disposal. Equally, in the workplace, funding the reskilling and upskilling of workers has also been conspicuous by its absence.
The growing problem of digital exclusion
Technology has transformed almost every aspect of our lives. How we communicate, learn, work, shop, and view entertainment is all built on complex technologies. But those who don't possess the digital skills to use new solutions can suddenly find themselves locked out of a digital world and unable to participate in society.
Education, training, job roles, and even a trip to the self-checkout at a supermarket all require digital skills.
Many older adults are also beginning to feel isolated by brands that are migrating to online-only services. While most people reading this rely on internet banking to make quick online payments or settle bills on the move, what happens to those who don't have high-speed internet, a smartphone, or the skills to use them?
There are 274 million older people in China using cell phones, but only 134 million of them use smartphones to access the internet. These stats reveal that 140 million people in one country alone have never browsed the internet on their phones.
Analysts predict that by 2050, older adults will outnumber the young for the first time in our history. The number of adults over 60 is expected to exceed one billion in just four years. Many are more than willing to learn, but a lack of help and support has left them locked out of the digital world.
From the classroom to the workplace to those enjoying retirement, digital exclusion is a problem we need to tackle urgently. As businesses focus on digital transformation projects, there also needs to be a much stronger emphasis on ensuring that everyone enjoys the ride and that nobody gets forgotten or left behind.
Internet access, education, and training
From students to retirees, anyone who does not have access to digital devices and high-speed broadband is significantly disadvantaged in life. In addition, many employees have lost their jobs due to automation or because of the pandemic. The world of tech also has a diversity problem and desperately needs to attract underrepresented people in science, technology, engineering, and mathematics (STEM) occupations.
However, there are currently 3.5 million unfilled cybersecurity jobs in the world. This is just one of many examples of how a lack of investment in training and education got us to where we are today.
There is a massive skills gap between the needs of businesses and what we teach our kids in schools or offer through internal training in the workplace.
Employers now have a responsibility to reskill their employees by investing in training and supporting their staff. It should be an easy decision considering that hiring someone new can cost up to 30% of the job's salary. By contrast, training an existing employee can cost hundreds rather than thousands and result in digitally upgrading the skills of an entire team.
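The cost comparison above can be made concrete with a quick back-of-the-envelope calculation. The 30% hiring figure comes from the article; the salary and training cost below are illustrative assumptions, not sourced figures:

```python
# Illustrative comparison of external hiring cost vs. in-house retraining cost.
# The 30%-of-salary hiring figure is from the article; the £40,000 salary and
# £500 training cost are assumed purely for illustration.
salary = 40_000               # assumed annual salary (GBP)
hiring_cost = 0.30 * salary   # up to 30% of the role's salary to hire externally
training_cost = 500           # assumed per-employee upskilling cost (GBP)

print(f"Hiring externally: £{hiring_cost:,.0f}")
print(f"Training in-house: £{training_cost:,.0f}")
print(f"Savings per role:  £{hiring_cost - training_cost:,.0f}")
```

Even under conservative assumptions, retraining an existing employee costs a small fraction of replacing one.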
Businesses that do nothing can experience productivity loss and high employee turnover, which eventually impacts customers through mistakes made by untrained staff. France is tackling these issues with the Grande Ecole du Numérique, a multi-stakeholder initiative offering ICT skills training programs that meet inclusiveness and diversity criteria. Eligible businesses can receive grant funding for up to 80% of their costs.
Human skills and emotional intelligence
According to analysts, 50% of jobs will be changed by automation. But behind the scary headline is the reassuring fact that only 5% will be eliminated. We cannot afford to ignore that 9 out of 10 jobs will require digital skills. As a result, low-skilled and vulnerable people will all need help upskilling to thrive and survive in a digital age.
It's important to remember that as we continue to automate repetitive and mundane tasks, it's human skills and emotional intelligence that will help employees shine in the workplace of tomorrow. A Deloitte paper predicts that soft skill-intensive occupations could account for up to two-thirds of all jobs by 2030.
When humans and technology work seamlessly together, we can begin to explore how we can amplify each other's strengths. If we invest in people, we could unlock a new wave of human innovation and creativity that transforms businesses and empowers employees to create new ways of working. But most importantly of all, it's our collective responsibility to ensure that nobody gets left behind.
Source: https://cybernews.com/editorial/why-we-need-to-tackle-the-digital-skills-divide/
The Einstein@home project has found a rare astronomical object with the help of thousands of users processing gravitational wave data.
Nature.com reported that Chris and Helen Colvin from Iowa were telephoned by Bruce Allen, director of the Einstein@home project and a director of the Max Planck Institute for Gravitational Physics in Hannover, who informed them that their computer had been the one to crunch the data that enabled a significant discovery.
Einstein@home is one of many projects that run under BOINC, the Berkeley-developed distributed computing screensaver. The project had been devoted to finding evidence of so-called gravitational waves but had not met with success in four years, so Allen directed a proportion of the project's massive computing power towards searching for evidence of pulsars.
BOINC is a program that allows personal computers to contribute processing power to various projects when they are idle. Under the BOINC framework are a number of different projects users can opt to participate in, such as the well-known SETI@home search for extraterrestrial intelligence and climateprediction.net, which seeks to build a superior model of climate change.
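The volunteer-computing model described above — a server splitting a large analysis into independent work units that client machines fetch, process, and return — can be sketched in miniature. This is not the real BOINC API; the queue, the threshold "analysis," and all function names are illustrative assumptions:

```python
import queue

def make_work_units(data, chunk_size):
    """Split a large dataset into independent work units,
    as a volunteer-computing server would."""
    q = queue.Queue()
    for i in range(0, len(data), chunk_size):
        q.put(data[i:i + chunk_size])
    return q

def process_unit(unit):
    """Stand-in for the real scientific analysis: flag values
    above a threshold as 'candidate signals'."""
    return [x for x in unit if x > 0.9]

def volunteer_client(work_queue, results):
    """Each volunteer machine repeatedly fetches a unit when idle,
    processes it, and reports the result back."""
    while True:
        try:
            unit = work_queue.get_nowait()
        except queue.Empty:
            return  # no work left
        results.extend(process_unit(unit))

# Toy run: a single 'client' drains the whole queue.
data = [0.1, 0.95, 0.4, 0.99, 0.2, 0.91]
wq = make_work_units(data, chunk_size=2)
found = []
volunteer_client(wq, found)
print(found)  # → [0.95, 0.99, 0.91]
```

Because each work unit is independent, thousands of machines can process different chunks in parallel with no coordination beyond the central queue — which is why any single volunteer's computer can turn out to be the one that finds the signal.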
The newly discovered recycled isolated pulsar is around 17,000 light years distant and one of only around a dozen found to date.
Source: https://www.pcr-online.biz/2010/08/13/rare-star-found-by-distributed-computing-screensaver/