Introduction to Microservices: What are Microservices? Use Cases and Examples
This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up to date or may refer to products and offerings that no longer exist. Find out more about DataRobot MLOps here.
A microservice architecture can beef up your team’s speed by adjusting how they design and ship code, and developers and business leaders can get ahead by implementing it inside their teams. The one-two punch of serverless and microservices combined is driving totally new types of applications and frameworks.
So what are microservices all about? The concept is based on a pretty simple idea: it sometimes makes sense to develop your applications as a lot of very small interlocking pieces instead of one giant whole. These components are developed and maintained separately from each other, so updates don’t require re-doing the entire codebase. Along with a few other design requirements, that’s the basic idea of microservices.
Understanding the “Traditional” Monolithic Model
Traditional application design is often called “monolithic” because the whole thing is developed in one piece. Even if the logic of the application is modular, it’s deployed as a single unit, such as a Java application packaged as one JAR file. Imagine if all of your notes from different college classes were in one long stream.
This way of writing and deploying code is convenient because everything happens in one place, but it incurs significant technical debt over time. Successful applications tend to get bigger and more complex as your company grows, and that makes them progressively harder to maintain and run.
For some insights into why and how a monolithic application can get confusing, consider this example from Chris Richardson: “I recently spoke to a developer who was writing a tool to analyze the dependencies between the thousands of JARs in their multi‑million line of code (LOC) application. I’m sure it took the concerted effort of a large number of developers over many years to create such a beast.”
There are a few reasons why this monolith eventually becomes so difficult to manage, including:
- The codebase is too big for any single developer to fully understand
- If the codebase is difficult to understand, changes made will often be detrimental
- Larger applications mean longer and longer deployment timeframes
- Agile frameworks often require multiple pushes to production each day, and re-deploying the entire monolith runs into time issues
Because of these and many other accompanying issues, a new way of developing applications is becoming popular. Microservices separates all of the major parts of this monolith from each other, untangling the codebase and drastically changing how developers can write and interact with it.
Service-Oriented Architecture (SOA) is used for applications composed of discrete and loosely connected agents that perform a function. SOA describes an application that can be built in such a way that its modules are seamlessly integrated and can therefore be easily reused. However, this type of architecture is very complex, sometimes sending over a million messages at a time, making it difficult to manage. SOA also often has higher response times and lower overall performance.
In contrast, a microservices approach allows developers to build and manage software efficiently. It’s not only easy to work with during the design and build phase, but it also performs with speed and efficiency once launched. Monolithic and SOA architectures both require system-wide changes to be made by modifying the monolith. Microservices remove this issue, along with many others, by untangling the monolith so that changes can be made by simply creating a new service.
What Are Microservices?
In contrast with a monolithic application, here’s what an app developed with a microservices focus might look like:
Overall, it’s largely the same: you have a user interface, some functions, and a database. With microservices though, those functions (not literally functions, but functional parts of an application) are all separate. They communicate with the user interface, each other, and instances of the database.
A team designing a microservices architecture for their application will split all of the major functions of an application into independent services. Each independent service is usually packaged as an API so it can interact with the rest of the application elements.
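As a hedged sketch of this idea, the toy example below models two independent services that each own their own data and talk only through JSON messages (an in-process stand-in for the HTTP APIs a real deployment would use). The service names, fields, and logic are illustrative assumptions, not taken from the article.

```python
import json

def inventory_service(request: str) -> str:
    """Owns the stock data; no other service touches its store directly."""
    stock = {"widget": 5, "gadget": 0}  # this service's private data
    item = json.loads(request)["item"]
    return json.dumps({"item": item, "in_stock": stock.get(item, 0) > 0})

def order_service(request: str) -> str:
    """Owns orders; asks the inventory service instead of sharing its database."""
    item = json.loads(request)["item"]
    reply = json.loads(inventory_service(json.dumps({"item": item})))
    status = "accepted" if reply["in_stock"] else "rejected"
    return json.dumps({"item": item, "status": status})

print(order_service(json.dumps({"item": "widget"})))
# {"item": "widget", "status": "accepted"}
```

Because each service hides its data behind a message interface, either one could be redeployed, rewritten, or scaled on its own without the other noticing, which is the property the paragraph above describes.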
Mapping directly onto the problems outlined above, breaking your application down into bundles of microservices offers some key benefits:
- Simplify your application with well-defined boundaries for each piece of functionality
- Allow teams to work separately on independent parts of your application without the need for constant collaboration
- Microservices can be deployed, maintained, updated, and scaled independently of each other in a continuous fashion
It’s hard to overstate how large a paradigm shift this new approach represents. It turns many of the challenges of traditional monolith deployment on their head.
Microservices Examples and Business Use Cases
What is the use of microservices? A microservices-based architecture offers a lot of benefits in theory, but it’s difficult to make it work in practice. That’s why we’re still very much in the early development stages of this idea, and that applies even more strongly to larger companies.
But this isn’t entirely new. In fact, the concept of splitting applications into smaller interactive parts has actually been around as a programming paradigm for a while. One of the reasons why it’s taken this long for microservices to emerge as a legitimate alternative is simple: culture. Implementing this architecture isn’t just a technical decision: it’s about having the right teams in place, being comfortable using open source, and working in an organization that’s comfortable challenging the status quo in IT.
Companies implementing microservices have been very open about their process and why they chose it. Here are some useful examples from companies that might not surprise you:
- Service-Oriented Architecture: Scaling the Uber Engineering Codebase As We Grow (Uber)
- Netflix Conductor: A Microservices Orchestrator (Netflix)
- What Led Amazon to its Own Microservices Architecture (Amazon)
But in addition to the usual large-tech-company repeat offenders, some companies that are utilizing this architecture might surprise you:
- Partial Failures in a Microservices Jungle: Survival Tips from Comcast (Comcast)
- The eBay Architecture (eBay)
- Walmart Embraces Microservices to Get More Agile (Walmart)
IT organizations are still figuring out whether they’re willing to make this shift. In the meantime, those who do and find the right fit are reaping the benefits.
Challenges with Deploying Microservices
As with any design decision, there are drawbacks to a microservices architecture. The major issue is complexity: breaking up your codebase makes each piece easier to understand, but it creates complications in orchestration. A microservices architecture is a distributed system, and distributed systems come with their own problems.
Your team is going to need to handle some new situations. For example, any individual microservice can fail at any point, just like a traditional software deployment, and you need to write logic to deal with that. Another issue is database management: with a monolith there’s typically only one or a few databases to update, but with microservices there can be many. Managing data consistency across a distributed system can be a major challenge.
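One common shape for that failure-handling logic is a retry-with-fallback wrapper around calls to another service. The sketch below is an illustrative assumption, not a specific library or pattern named by the article; the retry policy and names are made up for the example.

```python
import time

def call_with_retry(service, payload, retries=3, delay=0.0, fallback=None):
    """Retry a call to a possibly-failing service, degrading to a fallback."""
    for attempt in range(retries):
        try:
            return service(payload)
        except ConnectionError:      # stand-in for a failed network call
            time.sleep(delay)        # back off before the next attempt
    return fallback                  # degrade gracefully instead of crashing

# A simulated service that fails twice before recovering.
calls = {"n": 0}
def flaky_service(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return {"ok": True, "payload": payload}

print(call_with_retry(flaky_service, "ping"))
# {'ok': True, 'payload': 'ping'}
```

Real systems usually go further (circuit breakers, timeouts, jittered backoff), but the core idea is the same: every cross-service call must assume the other side can be down.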
Finally, testing and deployment can become troublesome in a microservices-oriented architecture. If any services are dependent on others, you need to design a specific order for deployment and testing. Changes can impact multiple services in your application, and accounting for that is difficult.
Serverless, Microservices, and Containers
The shift towards microservices fits nicely with two other important trends in the deployment space: serverless and containers. Serverless is about abstracting away server-side logic and having a provider manage your infrastructure for you. Containers are all about bundling your code and dependencies into self-executing, independent packages.
Containers and microservices fit together because they share the same fundamental goal: packaging individual components as independent, responsive elements. Serverless empowers this architecture by focusing on functions as a service: now that your application pieces are packaged individually, deploying them as functions can make a lot of sense.
You get all the benefits of a microservices architecture, but it’s simple to orchestrate and integrate. Algorithmia is also the only serverless platform that offers GPUs, which are a key part of building a fast machine learning application.
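To make the function-as-a-service idea concrete, here is a minimal sketch in which a single handler function is the entire deployable unit and a simulated "platform" supplies the event and runs it. The handler signature loosely mimics common FaaS conventions; the names are assumptions for illustration, not any specific provider's API.

```python
def handler(event):
    """One stateless function, packaged and deployed on its own."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

def platform_invoke(fn, event):
    # Stand-in for the serverless provider: it owns the servers and
    # scaling; the function only ever sees the incoming event.
    return fn(event)

print(platform_invoke(handler, {"name": "microservice"}))
# {'statusCode': 200, 'body': 'Hello, microservice!'}
```

The appeal for microservices is that each such function maps naturally onto one small service boundary, with the provider handling deployment and scaling per function.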
Why Microservices Are Killer for Machine Learning
As more machine learning goes into production, it’s becoming clearer that a microservices architecture can be a good fit for this kind of application. There are two major reasons why this is the case:
- After training your models, inference is usually stateless—since no data or state needs to be maintained, independent services work
- Machine learning is a compute-intensive process that often requires specialized hardware (like GPUs), and you don’t want that to be a core part of your server requirements
Algorithmia deploys algorithms as scalable microservices to take advantage of these two features.
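As a toy illustration of the first point, the sketch below treats inference as a pure function of its inputs: because no state is kept between calls, identical replicas can serve the same request interchangeably, which is what makes scaling inference as independent services straightforward. The "model" here is a trivial linear scorer standing in for a real trained model; the weights and labels are assumptions for the example.

```python
# Frozen after training; every replica loads the same values.
WEIGHTS = [0.4, 0.6]
BIAS = -0.5

def predict(features):
    """Pure function of its input: no per-user state between calls."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return {"score": score, "label": "positive" if score > 0 else "negative"}

# Two "replicas" of the service give identical answers to the same request.
replica_a = predict([1.0, 1.0])
replica_b = predict([1.0, 1.0])
print(replica_a == replica_b)
# True
```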
Further Reading and Papers
Introduction to Microservices (Nginx) – “This blog post is the first in a seven‑part series about designing, building, and deploying microservices. You will learn about the approach and how it compares to the more traditional Monolithic Architecture pattern. This series will describe the various elements of a microservices architecture. You will learn about the benefits and drawbacks of the Microservices Architecture pattern, whether it makes sense for your project, and how to apply it.”
Microservices (Martin Fowler) – “The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.”
Architecting Microservices (Paper) – “This paper reports on a PhD research project addressing three different challenges concerning MSA: (i) the identification of the key properties of microservice architectures, (ii) the identification and investigation on a description language for designing and analyzing architectures, (iii) the identification of the factors that impact the process of migrating existing applications towards MSA. The initial contributions of this project are: (i) a systematic mapping study on architecting microservices performed in order to understand the state of the research and the possible gaps in the area, (ii) an approach for architecture recovery of microservice-based systems named MicroART, and (iii) the implementation of the MicroART first prototype.”
Microservices: Yesterday, Today, and Tomorrow (Paper) – “Microservices is an architectural style inspired by service-oriented computing that has recently started gaining popularity. Before presenting the current state-of-the-art in the field, this chapter reviews the history of software architecture, the reasons that led to the diffusion of objects and services first, and microservices later. Finally, open problems and future challenges are introduced. This survey primarily addresses newcomers to the discipline, while offering an academic viewpoint on the topic. In addition, we investigate some practical issues and point out some potential solutions.”
Microservices Tutorials and Walkthroughs
Microservice Architecture Tutorial (tutorialspoint) – “Microservice Architecture is a special design pattern of Service-oriented Architecture. It is an open source methodology. In this type of service architecture, all the processes will communicate with each other with the smallest granularity to implement a big system or service. This tutorial discusses the basic functionalities of Microservice Architecture along with relevant examples for easy understanding.”
Spring Boot Tutorial: REST Services And Microservices (Jaxenter) – “The times of Java EE application server and monolithic software architectures are nearly gone. Hardware is not getting faster anymore, but internet traffic is still increasing. Platforms have to support scaling out. Load must be distributed to several hosts. Microservice-based architectures can offer solutions for this requirement. Apart from the better scaling, microservices offer faster development cycles, dynamic scaling depending on load and improved failover behavior.”
Quick Intro to Node.JS Microservices: Seneca.JS (Codementor) – “So, you want to use NodeJS to create microservices architecture? That’s very simple and awesome! In my career, I’ve used many frameworks and libraries for creating microservices architecture, even created custom libraries (don’t do it!) — until I found SenecaJS.”
Build An API For Microservices In 5 Minutes (Javaworld) – “Enough talk, let’s roll up our sleeves and start building our microservices core competency. This brief, hands-on tutorial shows you how to create a new API with AnyPresence JustAPIs. Before you start, download the free trial version of JustAPIs and follow the Quick Start Guide to set it up.”
Python Microservices Development (Ziade) – “We often deploy our web applications into the cloud, and our code needs to interact with many third-party services. An efficient way to build applications to do this is through microservices architecture. But, in practice, it’s hard to get this right due to the complexity of all the pieces interacting with each other. This book will teach you how to overcome these issues and craft applications that are built as small standard units, using all the proven best practices and avoiding the usual traps.”
Building Microservices with .NET Core 2.0 (Aroraa) – “Moving forward, you will be introduced to real-life application scenarios; after assessing the current issues, we will begin the journey of transforming this application by splitting it into a suite of microservices using C# 7.0 with .NET Core 2.0. You will identify service boundaries, split the application into multiple microservices, and define service contracts. You will find out how to configure, deploy, and monitor microservices, and configure scaling to allow the application to quickly adapt to increased demand in the future.”
Building Microservices: Designing Fine-Grained Systems (O’Reilly) – “Distributed systems have become more fine-grained in the past 10 years, shifting from code-heavy monolithic applications to smaller, self-contained microservices. But developing these systems brings its own set of headaches. With lots of examples and practical advice, this book takes a holistic view of the topics that system architects and administrators must consider when building, managing, and evolving microservice architectures.”
Microservice Architecture: Aligning Principles, Practices, and Culture (O’Reilly) – “Microservices can have a positive impact on your enterprise—just ask Amazon and Netflix—but you can fall into many traps if you don’t approach them in the right way. This practical guide covers the entire microservices landscape, including the principles, technologies, and methodologies of this unique, modular style of system building. You’ll learn about the experiences of organizations around the globe that have successfully adopted microservices.”
New GM Tech Could Have Autonomous Cars Teaching People How to Drive
Here’s a use for autonomous car technology that hasn’t previously been considered to any great extent – teaching learners how to drive.
General Motors has filed an application to patent technology that could spell the end for every nervous learner’s worst nightmare: well-meaning, but intimidating, driving instructors.
Motoring website Motor1 has revealed how GM has submitted a design to the United States Patent and Trademark Office for an autonomous vehicle system intended to measure and train new drivers without any other human presence in the car.
While the idea might seem extreme, the rationale behind it is actually pretty sound. As the application points out, “Current techniques for training humans for driving include a human instructor. However, in certain situations, typical techniques using a human instructor may not always be optimal, for example, as this may introduce biases of the human instructor, and/or may be more time consuming, costly, and/or difficult to schedule, and/or may include risks and/or inefficiencies.”
GM’s thinking is that autonomous vehicles can perform the “instructor” role. For clarity, the application defines an AV as a “vehicle that is capable of sensing its environment and navigating with little or no user input… by using devices such as radar, lidar, image sensors and the like.”
While teaching humans how to drive seems to fly in the face of AVs’ raison d’etre, the application further points out it may be desirable for people to drive for “personal satisfaction” or in a scenario where AVs are “not permitted.”
How the tuition and assessment would work in practice is straightforward, with the driver of the car in control with the autonomous system working as a backup.
Sensors within the car would rate the student on the fundamentals of driving – for example, throttle, braking and steering inputs – using what the AV considers best practice as a benchmark. The application describes how the driver would be provided with a score and/or “instantaneous feedback” based on what the AV ascertains as recommended performance.
This method of tuition would allow the student to be taught in stages. Once individual operations were mastered, additional controls could be ceded to the pupil, giving them more responsibility as their ability improves.
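A hedged sketch of the scoring idea described above: compare the student's control inputs against the autonomous system's reference inputs and report a closeness score. The metric, channel names, and normalisation are assumptions for illustration, not details from GM's filing.

```python
def score_inputs(student, reference):
    """Average closeness (0-100) of throttle/brake/steering to the AV benchmark."""
    total = 0.0
    for channel in ("throttle", "brake", "steering"):
        # Inputs normalised to [0, 1]; closer to the reference scores higher.
        total += 1.0 - abs(student[channel] - reference[channel])
    return round(100 * total / 3, 1)

student = {"throttle": 0.5, "brake": 0.0, "steering": 0.2}
reference = {"throttle": 0.4, "brake": 0.0, "steering": 0.25}
print(score_inputs(student, reference))
# 95.0
```

In a staged curriculum like the one described, the system could compute such a score per channel and only cede a new control to the pupil once its score clears a threshold.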
And any thoughts that the lack of another human in the vehicle could allow students to manipulate the autonomous system can be immediately dispelled, as the application makes clear how the method “includes providing results associated with the trainee to one or more third parties.”
How quickly we see this system in practice is open to debate. The vehicle included in the filing “corresponds to a level four or level five automation system under the Society of Automotive Engineers’ standard taxonomy.” But for anyone who has endured a fractious relationship with their driving instructor, it probably can’t come soon enough.
The White House announced Friday that the Global Connect Initiative is working to connect 1.5 billion people to the Internet worldwide by 2020.
“It can change lives by connecting schools to the web, bringing telemedicine to rural health centers, lowering the barriers to political participation, and supplying up-to-date market information to businesses and entrepreneurs,” said Suhas Subramanyam, of the Office of Science and Technology Policy, in a blog post.
About 60 percent of the world’s population is without Internet and the number increases to 95 percent in the poorest countries, according to Subramanyam.
The 40 countries participating in the initiative decided to treat the Internet as critical infrastructure, similar to roads, bridges, and ports, and increase funding and resources to build Internet infrastructure.
The Overseas Private Investment Corporation, the Federal government’s development finance institution, announced that it has invested over $1 billion in Internet connectivity infrastructure projects to support development in 15 countries across the Americas, Asia, Europe, and the Middle East. Countries are consulting technical and business experts on how to make the most out of these investments by using cost-saving network designs, Internet infrastructure opportunities, and local skills development and training.
The State Department is working with Tunisia, India, and Argentina to write policies that will increase digital growth and create an open and accessible Internet.
In June, President Obama created the Global Connect International Connectivity Committee (GCICC), made up of 16 Federal agencies and led by the State Department to coordinate United States projects related to worldwide Internet access.
President Obama said that the Global Connect Initiative is “bringing wonders of technology to far corners of the globe, accelerating access to the Internet, [and] bridging the digital divide.”
“As we think about protecting this heritage and the importance of preserving language, we believe that new technology can help” – Brad Smith, Microsoft
Through the Microsoft Translator application, the Māori language will be available to more people around the world through the use of advanced machine learning translation technology, says Brad Smith, president of Microsoft.
“To focus only on shaping the future ignores the value of the past, as well as our responsibility to preserve and celebrate the te reo Māori heritage. Which is why we are proud to announce the inclusion of te reo Māori in our free Microsoft Translator app,” says Smith, who announced the news on his blog.
The software will be widely available on computers or smart devices, enabling people around the world to instantly translate text and documents into te reo Māori and vice versa, as well as the many other languages supported by the app such as Spanish and Chinese.
“Microsoft has been working with language experts on projects that include te reo Māori in our platforms and software for more than 14 years. We want to provide better access to Māori language and culture via the technology Kiwis use every day,” says Anne Taylor, education lead at Microsoft New Zealand.
Taylor says Prime Minister Ardern has called for one million new te reo Māori speakers by 2040. “We’re determined to support this goal, and including te reo Māori in Microsoft Translator is one more action we can take to help make the language accessible to as many people as possible.”
Microsoft Translator needs to ‘learn’ te reo Māori in order to provide accurate translations that grow and change with the language, recognising that language is a breathing, living thing.
The translation model employed for te reo Māori will use Microsoft’s AI technology, which will allow the accuracy of the translations to be continually updated and refined.
Te Taka Keegan, senior lecturer in computer science at Waikato University, and one of the experts involved in the Māori language tool, says the project not only supports the daily use of te reo Māori in schools and workplaces, but helps scholars, researchers and ordinary people access and study the language around the world.
Smith says the te reo Māori project is part of a broader programme of work to support indigenous languages and cultures worldwide.
“When a community loses a language, it loses its connection to the past – and part of its present. It loses a piece of its identity. As we think about protecting this heritage and the importance of preserving language, we believe that new technology can help,” he says.
How AI can improve medical diagnosis
While your flesh and blood doctor isn't about to be replaced by a robot, artificial intelligence can support them in their job
How do medical specialists diagnose conditions? They look at our symptoms, use the knowledge they have built up in practice, bring tools and techniques into play, and work out what they think is wrong with us. Then they get on with treating us. Sounds simple enough, but in reality it’s often very complex.
Artificial intelligence has been helping medics out with diagnosis for a long time, and it is proving a very useful tool across a huge range of medical disciplines. It is far from ubiquitous, but it is helping clinicians across a wide spectrum, with very positive results.
Speedy diagnosis and efficient use of resources
A good way to understand the benefits AI brings is in the context of real world examples.
Moorfields Eye Hospital NHS Foundation Trust is using AI to help diagnose eye diseases and so far the system has made correct referral decisions for more than 50 eye diseases with 94% accuracy. Clinical trials and regulatory approval are needed before the technology can be widely used, but the opportunity to help clinicians diagnose more quickly and prioritise people whose sight relies on urgent treatment is exciting for the trust.
Valerie Phillips, who works in MedTech at PA Consulting, gave a couple of other examples in use in the UK. “Kheiron Medical deploys AI software and deep learning tools to support radiologists for breast cancer screening … [and] HeartFlow creates a personalised 3D model of a patient’s heart from their coronary CT scan and can assess how a blockage impacts blood flow,” she tells IT Pro.
Phillips adds: “By bringing together multiple data sources, the doctor then does not have to search for the information they need which speeds up their diagnosis and decision making. Equally, automating manual tasks and enabling physicians to confirm, evaluate, quantify, track and report actions automatically makes them more efficient.”
The technology is already being welcomed by doctors. In a press statement, Dr Philip Strike, interventional cardiologist at Queen Alexandra Hospital, Portsmouth, said HeartFlow “has transformed our paradigm for investigating chest pain. It has dramatically reduced the number of patients requiring invasive investigation and has allowed strategic targeting of therapy for those patients who still require invasive angiography, which saves both time and expense”.
AI and general practice
AI is used in a more generalist diagnostic environment too, including by GPs. Doctorlink is an app-based system available on phone, tablet and through a web browser that uses a symptom assessment tool. Practices using the service can also accept appointment bookings through it.
Dr Ravi Tomar, a GP in London whose practice uses Doctorlink, tells us: “In the first year alone, this has resulted in one in five patients being re-routed to more appropriate forms of care – such as self-care or local pharmacy – and patient phone calls to the practice have reduced by a third, alleviating strains on the time and resource of clinicians and administrative staff.”
Tomar also says the system has improved the patient experience too. “Before we adopted Doctorlink’s health assessment platform in October 2018, we were running a standard 8am daily triage phone system for appointment bookings,” he explains. “Like so many other practices, this meant our books were full within a couple of hours and we had to turn patients away.”
Tomar appreciates, however, that people are wary of the idea of AI replacing medical professionals. Consequently, people registered at his GP surgery can opt in to the system if they want, but also still have access to more traditional methods. “Success isn’t about 100% uptake, it’s about the right uptake by the right patients,” he says. “I don’t see AI consultations replacing patient time with their own doctor.”
It’s still early days for AI in healthcare, however, and it’s currently used relatively little. Healthcare professionals are navigating both the clinical successes and the cultural and ethical aspects of using AI. Phillips sums up where the medical profession is and what has still to be done, telling us: “There needs to be careful review of the type of AI, the applications on offer and the outputs delivered, triage, diagnosis, clinical-decision making, second reading, the specific clinical area, the established clinical practice, risk assessment and the regulatory environment. Only then can AI become a standard medical tool.”
Why a Single Sign-On Actually Improves Security
Have you ever wondered how some platforms will only have you log in once for all of your various needs, even though they might be different applications, websites, or services? This is essentially what single sign-on is, and it’s quite common in the technology world today. What is single sign-on exactly, and what kind of security does it actually provide for organizations that use it?
What is Single Sign-On?
Imagine using a single set of credentials to sign into multiple different accounts, even ones that aren’t necessarily related to each other. This is basically what single sign-on is: a centralized authentication platform where you use one set of credentials to access multiple applications or platforms.
As explained by CSO, “In the most common arrangement, the identity provider and service provider establish a trust relationship by exchanging digital certificates and metadata, and communicate with one another via open standards such as Security Assertion Markup Language (SAML), OAuth, or OpenID.” You log in once, and that login can be used to sign you into other accounts associated with that login.
Think about it like this: rather than authenticating the user itself, the application asks another application to authenticate the user on its behalf, then allows the user in as if they had entered a username and password in the normal way.
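That delegation can be sketched in a few lines. In this deliberately simplified model, the identity provider signs an assertion with a secret the service provider trusts, and the service provider admits any user whose assertion verifies. Real deployments use SAML assertions or OAuth/OpenID tokens signed with certificates rather than a shared HMAC secret; every name and value below is purely illustrative.

```python
import base64, hashlib, hmac, json

# Hypothetical shared secret; real IdPs and SPs exchange certificates.
SHARED_SECRET = b"idp-and-sp-agree-on-this-out-of-band"

def idp_issue_token(username):
    """Identity provider: sign a claim saying this user authenticated."""
    claim = base64.urlsafe_b64encode(json.dumps({"sub": username}).encode()).decode()
    sig = hmac.new(SHARED_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def sp_verify_token(token):
    """Service provider: admit the user only if the IdP's signature is valid."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(base64.urlsafe_b64decode(claim))["sub"]
    return None  # signature check failed: reject the login
```

Note that the service provider never sees a password at all; it only checks that the identity provider vouched for the user.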
Why Is Single Sign-On Useful?
There are many reasons why single sign-on can be useful. Here are some of the following:
- Passwords are hard for employees: Employees who have to remember multiple complex passwords and usernames for different accounts often make mistakes or forget their passwords.
- Cloud sprawl is a very real thing: The more applications businesses implement, the more difficult it becomes to manage them all. SSO provides businesses with ways to authenticate users in a way that is beneficial for productivity and security.
- Easy IT management: IT administrators can more easily revoke privileges for accessing various services or applications, since there is only one pair of credentials associated with SSO.
Isn’t That a Security Risk?
It’s easy to see how single sign-on could create a security issue if it is implemented incorrectly. After all, what happens when that one credential gets stolen by a hacker? In reality, SSO does the exact opposite. It reduces the attack surface considerably, and with fewer opportunities for employees to create insecure passwords, the likelihood of attacks falls somewhat. In short, SSO is more likely to help than it is to hinder your security.
The biggest issue you are likely to encounter with single sign-on is adding new technologies or making adjustments to your IT infrastructure, as SSO implicitly ties together many different services.
The biggest benefit you can expect from SSO is by far the improvements to productivity. Since users will be logging in fewer times throughout the day, they can instead focus on getting work done, meaning more opportunities to improve your bottom line.
Techworks Consulting, Inc. can advise you on the appropriate way to secure your organization and potentially offer solutions for how to approach cloud sprawl. To learn more about what we can do for your organization, reach out to us at (631) 285-1527.
What is a cybersecurity risk assessment?
A cybersecurity risk assessment is an assessment of an organization's ability to protect its information and information systems from cyber threats.
The purpose of a cybersecurity risk assessment is to identify, assess, and prioritize risks to information and information systems. A cybersecurity risk assessment helps organizations identify and prioritize areas for improvement in their cybersecurity program. It also helps organizations communicate their risks to stakeholders and make informed decisions about how to allocate resources to reduce those risks.
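As a minimal illustration of that identify, assess, and prioritize loop, a risk register can be scored and ranked. The likelihood and impact scales, the multiplication-based scoring, and the example risks below are invented for demonstration and are not drawn from any particular framework.

```python
# Score each risk as likelihood x impact (both on a 1-5 scale) and rank
# the register so the worst risks surface first.
def prioritize(risks):
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

register = [
    {"name": "unpatched web server",     "likelihood": 4, "impact": 4},
    {"name": "lost unencrypted laptop",  "likelihood": 2, "impact": 5},
    {"name": "phishing of finance staff", "likelihood": 5, "impact": 4},
]

for risk in prioritize(register):
    print(risk["name"], risk["likelihood"] * risk["impact"])
```

The output of a real assessment would feed the treatment step: the highest-ranked risks get controls (or acceptance decisions) first.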
There are many cybersecurity risk assessment frameworks and methodologies available, but they all share a common goal.
The National Institute of Standards and Technology (NIST) Cybersecurity Framework is one of the most popular risk assessment frameworks. It provides a flexible and structured approach for organizations to assess their cybersecurity risks and prioritize actions to reduce those risks.
Another popular risk assessment framework is the ISO 27001:2013 standard. This standard provides a comprehensive approach to information security management, including requirements for risk assessment and risk treatment.
Organizations can also develop their own customized risk assessment frameworks and methodologies. Whatever approach an organization chooses, the goal should be to identify, assess, and prioritize risks to information and information systems.
Why carry out a cybersecurity risk assessment?
A cybersecurity risk assessment is important because it can help identify risks to your organization’s information, networks and systems. By identifying these risks, you can take steps to mitigate or reduce them. A risk assessment can also help your organization develop a plan to respond to and recover from a cyber attack.
Organizations should conduct cybersecurity risk assessments on a regular basis to keep their risk profiles up to date. Additionally, if there are changes to an organization's computer networks or systems, a new risk assessment should be conducted.
What does a cybersecurity risk assessment include?
A cybersecurity risk assessment evaluates the organization's vulnerabilities and threats to identify the risks it faces. It also includes recommendations for mitigating those risks.
A risk estimation and evaluation are usually performed, followed by the selection of controls to treat the identified risks.
It is important to continually monitor and review the risk environment to detect any changes in the context of the organization, and to maintain an overview of the complete risk management process.
ISO 27001 and cyber risks
The international standard ISO/IEC 27001:2013 (ISO 27001) provides the specifications of a best-practice ISMS (information security management system) – a risk-based approach to corporate information security risk management that addresses people, processes, and technology.
Clause 6.1.2 of the standard sets out the requirements of the information security risk assessment process.
- Establish and maintain certain information security risk criteria
- Ensure that repeated risk assessments “produce consistent, valid and comparable results”
- Identify “risks associated with the loss of confidentiality, integrity and availability for information within the scope of the information security management system”, and identify the owners of those risks
- Analyze and evaluate information security risks, according to the criteria established earlier
It is important that organizations “retain documented information about the information security risk assessment process” so that they can demonstrate that they comply with these requirements.
They will also need to follow a number of steps – and create relevant documentation – as part of the information security risk treatment process.
ISO 27005 provides guidelines for information security risk assessments and is designed to assist with the implementation of a risk-based ISMS.
Purchase the latest ISO/IEC 27005 Standard >>
How to implement best-practice cybersecurity with ISO 27001
Download our free green paper – “Risk Assessment and ISO 27001” – to receive risk assessment tips from the ISO 27001 experts.
Cybersecurity risk assessment services
Conducting a cybersecurity risk assessment is a complex process that requires considerable planning, specialist knowledge, and stakeholder buy-in to appropriately cover all people-, process-, and technology-based risks. Without expert guidance, this can only be worked out through trial and error.
IT Governance provides a range of risk assessment and cybersecurity products and services to suit all needs.
IT Governance’s fixed-price, three-phase Cyber Health Check combines consultancy and audit, remote vulnerability assessments, and an online staff survey to assess your cyber risk exposure and identify a practical route to minimize your risks. Our approach will identify your cyber risks, audit the effectiveness of your responses to those risks, analyze your real risk exposure, and then create a prioritized action plan for managing those risks in line with your business objectives.
Find out more
vsRisk is an online risk assessment software tool that has been proven to save time, effort, and expense when tackling complex risk assessments.
Fully aligned with ISO 27001, vsRisk streamlines the risk assessment process to deliver consistent and repeatable cybersecurity risk assessments every time.
Find out more
Why choose IT Governance for your cybersecurity risk assessment needs?
IT Governance specializes in IT governance, risk management, and compliance solutions, with a special focus on cyber resilience, data protection, the GDPR, the Payment Card Industry Data Security Standard (PCI DSS), ISO 27001, and cybersecurity.
IT Governance is also recognized under the following frameworks:
- UK government CCS-approved supplier of G-Cloud services
- CREST certified as ethical security testers
- Certified under Cyber Essentials Plus, the UK government-backed cybersecurity certification scheme
- Certified to ISO 27001:2013, the world’s most recognized cyber security standard
The Xbasic command statements determine the structure and flow of execution in a script. For example, the if .. then statement is used to perform actions conditionally. The following script will make the computer beep if the supplied logical expression, x < 5, evaluates to .T. (TRUE):
if (x < 5) then
    ui_beep()
end if
To perform actions repeatedly, you can use a for .. next statement. For example, the following script makes the computer beep ten times:
for i = 1 to 10
    ui_beep()
next i
Other statements that determine the structure and flow of a script are the SELECT, GOTO, and while statements and their variations.
A label marks a line in a script that serves as a point of reference for the GOTO and ON ERROR GOTO command statements. The GOTO label and ON ERROR GOTO label statements will resume the execution of the script just after the label.
A label is defined by adding the label delimiter (:) at the end of the label name. When referencing a label, use the label name without the label delimiter. A label name can consist of the same characters as a legal variable name and cannot begin with a number. The label must be the first thing to occur on the script line.
For example, the GOTO statement in the following script skips over the TRACE.WRITELN() method:
if (x > 100) then
    GOTO bye
end if
trace.writeln("The value is: " + ltrim( str(x) ) )
bye:
end
If the label is defined at the end of a script, it must be followed by at least one other statement (probably the end statement). This is important because the GOTO label statement resumes execution at the line after the label; the end statement gives it a line to go to.
A firewall is an invaluable asset in your business’s cybersecurity strategy. Whether hardware- or software-based, it will protect against malicious traffic, and cyber attacks often involve malicious traffic. A hacker may perform a brute-force attack to try to access an otherwise protected database, or use a botnet to conduct a distributed denial-of-service (DDoS) attack.
You can protect your business’s network from malicious traffic by using a firewall. Once deployed, the firewall will scan incoming and outgoing traffic while cross-referencing it against a set of rules. The firewall will reject traffic that fails any of the rules. But if you’re going to use a firewall, you should avoid firewall pinholes.
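The scan-and-cross-reference behavior described above can be sketched as a first-match rule table with a default-deny fallback. The rules and the fields matched here are hypothetical; real firewalls match on source and destination addresses, protocols, connection state, and much more.

```python
# Ordered rule table: the first matching rule wins, and traffic that no
# rule allows is rejected (default deny). A port of None is a wildcard.
RULES = [
    {"action": "allow", "port": 443,  "direction": "in"},  # HTTPS
    {"action": "allow", "port": 22,   "direction": "in"},  # SSH
    {"action": "deny",  "port": None, "direction": "in"},  # everything else
]

def filter_packet(port, direction):
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] in (None, port):
            return rule["action"]
    return "deny"  # no rule matched: default deny
```

With this table, inbound traffic to port 443 or 22 passes, while a probe against, say, port 8080 is rejected by the catch-all rule.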
What Is a Firewall Pinhole?
A firewall pinhole is a port on a network that is not covered by a firewall. All networks have ports; a port is a uniquely identifiable point of connection, and computers and other network-connected devices may each have one or more. Any port the firewall does not cover is a pinhole.
The Dangers of Firewall Pinholes
Firewall pinholes are vulnerabilities. Like other vulnerabilities, they can pave the way for cyber attacks. Each firewall pinhole is an open port that a hacker may use to access your business’s network.
Ports are oftentimes left open so that apps can access a service on the network. But leaving these ports open for an extended period will place your business at risk of a cyber attack. Assuming an open port isn’t covered by a firewall, it will become a vulnerability. These open, unprotected ports are firewall pinholes. Hackers can bypass the firewall by targeting a firewall pinhole.
Eliminating Firewall Pinholes
To protect your business’s network from cyber attacks, you should eliminate firewall pinholes. A simple solution is to configure your business’s network so that firewall pinholes close automatically after a short period.
The longer a firewall pinhole stays open, the greater the risk of a hacker exploiting it and using the open port to conduct a cyber attack. You can set firewall pinholes to close automatically, however. If a firewall pinhole has been open for two or three minutes, for instance, you may want to close it. Configuring your business’s network to automatically close firewall pinholes after a few minutes will minimize the risk of cyber attacks.
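The auto-close idea reduces to simple bookkeeping: record when each pinhole is opened and periodically sweep the table, closing anything older than a timeout. The 180-second timeout below is only an example value.

```python
PINHOLE_TIMEOUT = 180  # seconds; illustrative, tune to your environment

open_pinholes = {}  # port -> time the pinhole was opened

def open_pinhole(port, now):
    """Record that a port was opened at the given time."""
    open_pinholes[port] = now

def sweep(now):
    """Close (remove) every pinhole older than the timeout; return them."""
    stale = [p for p, opened in open_pinholes.items()
             if now - opened > PINHOLE_TIMEOUT]
    for port in stale:
        del open_pinholes[port]
    return stale
```

A real implementation would call the firewall's management API in place of the `del`, and run the sweep on a timer.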
Open ports are common on networks. You’ll have to keep some of the ports on your business’s network open. If a port is open and not covered by a firewall, though, it will become a firewall pinhole.
Out of the Lab: Fiber Dispels Dispersion
Jean-Louis Auguste and his colleagues at IRCOM, an R&D laboratory based at the University of Limoges, in collaboration with a group of scientists at the Indian Institute of Technology at New Delhi in India and the University of Nice in France, have designed and tested a dispersion compensating fiber (DCF) that works 20 times as well as commercially available DCF, they claim.
The development means that service providers need a lot less fiber to deal with dispersion problems, and that adds up to big savings. To understand why, it's necessary to come to grips with what dispersion is, why it threatens to become a big problem in optical nets, and how it can be cured.
Each pulse of light sent over fiber is formed from a small range of frequencies. As some frequencies travel slightly faster than others, the pulse broadens out over distance until it merges with its neighbors.
At bit rates of 2.5 Gbit/s and below, the effect is small. At 10 Gbit/s, dispersion can be overcome with modern fiber design -- though ripping out old fiber and installing new is not usually a sensible option. But in the next generation of systems supporting 40 Gbit/s, dispersion is something every operator will have to deal with. Even with the most up-to-date fibers, 40 Gbit/s systems will require pulse reshaping every 30 kilometers.
There are several methods for reversing dispersion. The most popular is the use of so-called dispersion compensating fiber (DCF). Typically, 5 to 10km of DCF must be added to a fiber span in order to recover the signal. The bad news is that this adds to attenuation (loss of optical power) in the link, which must be compensated for with expensive optical amplifiers.
At first glance, the design of IRCOM’s fiber doesn’t seem that unusual. It bears similarities to triple-clad DCF, which comprises a core (with a very high refractive index) and three cladding layers (with low, high, and intermediate refractive indices). The difference is that IRCOM’s fiber has four cladding layers.
At short wavelengths, the fiber behaves like an ordinary fiber, guiding light in the central core. But at a particular wavelength -- chosen by the researchers to be 1550 nanometers -- weird things start to happen. Light from the central core starts to leak into the ring of high index material in the cladding, which becomes a second annular core that guides light through the fiber. When that happens, the dispersion takes a nosedive.
IRCOM says its fiber has a dispersion of -1800 picoseconds per nanometer per kilometer, about 20 times greater than current DCF.
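A back-of-envelope calculation shows why that figure matters. Assuming standard single-mode fiber disperses at roughly +17 ps/nm/km (a typical value, not stated in the article), the length of DCF needed to cancel a span's accumulated dispersion scales inversely with the DCF's own dispersion:

```python
# Length of DCF whose total negative dispersion cancels the span's
# accumulated positive dispersion: L_dcf = D_link * L_link / |D_dcf|.
def dcf_length_km(span_km, d_link=17.0, d_dcf=-1800.0):
    return span_km * d_link / abs(d_dcf)

# An 80 km span accumulates 80 * 17 = 1360 ps/nm of dispersion:
print(round(dcf_length_km(80), 2), "km of IRCOM-style DCF")            # ~0.76 km
print(round(dcf_length_km(80, d_dcf=-90.0), 2), "km of typical DCF")   # ~15.11 km
```

So a 20-fold increase in negative dispersion shrinks the compensating fiber from kilometers to under a kilometer per span, which also limits the added attenuation.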
“It’s an impressive result,” says Lars Grüner-Neilsen, project manager for fiber R&D at Lucent Technologies Denmark, where Lucent manufactures its speciality fibers, including DCF. But there are other issues to consider, he warns. Can the fiber handle multiple wavelengths? Can it be spliced with low losses?
The results from IRCOM begin to answer these questions, but only a systems experiment will tell the whole story.
Because the dispersion behaviour is linked to a wavelength-specific phenomenon, the region of high negative dispersion is confined to a narrow band of wavelengths. Auguste says that IRCOM is working with fiber manufacturers to determine the applications to DWDM (dense wavelength-division multiplexing).
“Our results on losses are interesting,” says Auguste. Splice losses of 1 to 2 dB were achieved, which are comparable to splice losses between standard singlemode fiber and run-of-the-mill DCF, he says. And at most wavelengths, the propagation losses are also equivalent to standard fibers. But at the key wavelength at which the fiber shows high negative dispersion, the propagation losses increase dramatically. It’s an issue that Auguste hopes will go away because the fiber will be used in short lengths.
IRCOM reported the findings in the journal Electronics Letters. The paper was co-authored by scientists from the Indian Institute of Technology in Delhi, who proposed the original fiber design, and LMPC at the University of Nice, who manufactured fiber preforms.
This collaboration is under the aegis of the Indo-French Centre for the Promotion of Advanced Research (IFCPAR), a bilateral instrument of cooperation in science and technology between France and India established in 1987.
-- Pauline Rigby, Senior Editor, Light Reading http://www.lightreading.com
“Chromium makes em’ralds green / And gives rubies their deep red / Nitrogen makes diamonds yellow / Boron makes them blue instead.” It may not be Keats, but through his collection of poems and songs, MIT Senior Lecturer James D. Livingston, who teaches in the department of materials science and engineering at MIT, has found a way to make complex scientific concepts less intimidating to nonexperts.
In his case, the neophytes are mostly freshmen. Livingston, a former research physicist with General Electric, wanted to make the transition from high school to college less difficult for these students by infusing an element of fun into his physics and chemistry classes and “lower[ing] the scare barrier.”
Mnemonic devices became a tool for Livingston to help his students remember the course material and feel more comfortable with difficult subject matter. Although a song may not be the ticket to explaining why the ERP system has crashed, Livingston’s point—that it’s a good idea to think outside the box when facing a communication barrier—shouldn’t be lost on CIOs. Use humor, tell a story, write a poem, do whatever it takes, he says, to ease the tension and get them ready to listen to what you have to say.
Renewable energy is a hot topic right now, and it has been at the forefront of data center innovations for a while, too. Solar power is a major component of the renewable energy segment and, though it has vast potential, it hasn't been adopted as an integral power source in the industry. There have been, however, notable steps in this direction. But is a solar-powered data center practical?
Installing solar panels is where it all begins. Solar energy, as it stands today, isn't potent enough to fulfill the power consumption requirements of a data center. The investment required to put up huge photovoltaic arrays is substantial and, based on prevailing costs of material and installation, it takes many years to recover. The reason for the long break-even period is that, with the solar technology currently available, the actual power generated is comparatively small. An installation of a few hundred panels covering thousands of square feet, for example, has the potential to generate around a couple of hundred kilowatts of electricity on a good day. The investment for such a project, however, runs to several million dollars and, considering the power output, it can easily take a couple of decades to recover.
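The long break-even period is simple arithmetic. The sketch below uses assumed values throughout; the net installed cost, capacity factor, and electricity price are illustrative, not taken from any real project.

```python
# Payback period = installed cost / annual electricity savings, where
# annual output = peak capacity * capacity factor * hours per year.
def payback_years(install_cost, peak_kw, capacity_factor=0.25, price_per_kwh=0.12):
    annual_kwh = peak_kw * capacity_factor * 24 * 365
    return install_cost / (annual_kwh * price_per_kwh)

# Assumed: $1M net installed cost for a 200 kW array.
print(round(payback_years(1_000_000, 200), 1), "years")  # 19.0
```

With those assumptions the array pays for itself in roughly two decades, which is why the economics, rather than the engineering, remain the obstacle.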
Cost-effectiveness is, of course, a factor that is being worked on, as is the technology. The government encourages the use of solar energy by offering tax cuts to a data center implementing it. Though not a practical solution (yet!), the move is often regarded as one that sends a message. After all, any attempt at reducing your carbon footprint is welcome and installation of solar panels accomplishes this, even if it is on a small-scale. The good thing is that as adoption of solar energy increases, solar panels cost less.
At Lifeline Data Centers, we believe that renewable energy is the way to go. After all, we also have to worry about depleting resources. With the continued innovations in solar energy, we ensure we are on track to integrate these innovations into our data center structure whenever the time is right.
For more information on how we use energy-efficient ways to power our data center, download our “Why Data Center Power Compartmentalization is Important” white paper:
In the late 1970s and early 1980s, Bell telephone companies were making a mint off of offering the ability to call friends and family who lived outside your predefined region, charging up to $2 per minute (during peak hours) for long-distance calls. The problem for many people was that these regions kept shrinking. Some people decided to combat this costly system by reverse engineering the system of tones used to route long-distance calls, thus routing their own calls without the massive per-minute charges demanded by long-distance providers. These people were called phreakers, and they were, in effect, the first hackers; their tradecraft has since evolved into attacks like what you now know as phishing.
Cut to the modern day: most domestic long-distance telephone calls are free, relegating phreakers to the annals of history. Hackers today thrive in digital environments, using tools and strategies the average person knows nothing about to get access to data. Why would they want data?
What Motivates Hackers?
Of course, the motivation varies from hacker to hacker, but there are only a few things they can come away with. They can come out of a successful hack with leverage over a computing system, and they occasionally can steal money, but most of today's hackers are looking for data to mine. This is because the insatiable demand for (and abundance of) data can fetch a savvy hacker a pretty penny on the dark web.
No matter what their motivation is, to successfully hack a computing system, they need access. The network security tools that most businesses have in place, if properly updated, are typically enough to keep hackers out of your network. This reality has spiked the popularity of social engineering attacks such as phishing. If they can't get into your network and infrastructure through software vulnerabilities or direct network attacks, they need to gain access through deception.
What Exactly is Phishing?
Phishing is exactly what the name implies. You bait a hook (of sorts) by way of messages directly to end users. This can be through any communications method available. Email phishing is the most prevalent for businesses, but phishing attempts through the telephone, social media accounts, and even instant messaging services have grown in popularity.
The phishing message will either lead you to a fake page that will collect personal information, or in the form of an attachment that will download malware on a system. Once the malware is in, it will immediately find credentials and other noteworthy data, and in a couple mouse clicks, your company’s network and infrastructure are exposed.
Some really nasty strains of malware (called ransomware) will encrypt your system files and then provide you with a message, effectively holding your system's (or worse yet, your business's) data for ransom. Failure to pay in the time provided will erase all the data and cause irreparable harm to your business.
Training Your Employees
Kaspersky Lab said that they detected 482.5 million phishing redirects in total in 2018, effectively doubling the amount found in 2017. That's a troubling trend that doesn't seem to be altering course any time soon. As a result, training your employees in how phishing attacks succeed is imperative. Figuring out how to do that successfully, and how to keep employees up to date on current threats, can be difficult.
Some suggest that embedded training (that is, training done in the normal course of business) is completely ineffective at mitigating phishing attacks. While it is our position that any training is better than no training, we suggest that the best training isn't testing how employees would react, but proactive training: heightening their awareness of the threats that are out there. Phishing, in particular, is a threat many people are exposed to daily, so there are some very specific things they should understand to be better prepared if they do encounter a phishing attack. They include:
- What Phishing Is – Clearly define what phishing is and what forms of phishing they will likely come across.
- What Email Address Spoofing Is – The way we like to explain it is it’s like robocalls that look like they are coming from a local number, but when you answer it is a party on the other end just spoofing local numbers. It’s easy to spoof email addresses in the same way.
- Phishing Subject Lines are Typically Aggressive – Whether they are enticing or threatening, phishing email subject lines almost always stand out. Once opened they typically continue that tone, manipulating users into making mistakes.
- Phishing Isn’t Always Obvious – Today, there are spear phishing tactics that use publicly-available information to target individuals within your company, such as making the email seem like it’s from your boss.
- Phishing Uses Links and Attachments – Typically, just opening a phishing email won’t hurt you. It’s when you click on a link inside the phishing email/message or go to download an attachment from the email that you are in serious trouble. Teaching your staff to be wary of any attachment or link that they don’t know is important.
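The red flags in the list above can be combined into a toy scoring heuristic. The keywords, weights, and scoring below are invented for illustration; production mail filters use far richer signals (sender reputation, URL analysis, SPF/DKIM results, and so on).

```python
# Words that commonly appear in aggressive, manipulative subject lines.
URGENT_WORDS = {"urgent", "immediately", "suspended", "verify", "act now"}

def phishing_score(subject, display_name, from_address, has_link, has_attachment):
    """Return a suspicion score; higher means more likely phishing."""
    score = 0
    if any(w in subject.lower() for w in URGENT_WORDS):
        score += 2  # aggressive subject line
    if display_name.lower() not in from_address.lower():
        score += 2  # display name doesn't match the actual sender (spoofing)
    if has_link:
        score += 1  # phishing relies on links...
    if has_attachment:
        score += 1  # ...or attachments to deliver the payload
    return score
```

A legitimate internal note scores near zero, while a spoofed "verify your account" email with a link racks up several points at once.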
These are just the basics. Phishing can completely devastate your business, so if you are looking to put together a comprehensive training plan for your staff, reach out to the IT professionals at COMPANYNAME. We can help you come up with a plan to get your staff the knowledge they need to keep your business safe and running efficiently. To learn more call us today at PHONENUMBER.
Television is undoubtedly one of the most popular forms of mass media ever developed. While a television set can have static, television itself is far from static. The medium has changed considerably since its inception and is bound to change even more in the near future.
The Evolution of TV
Back in 1948, only one in ten Americans had even seen a television set. TV’s popularity exploded shortly thereafter and in 1960, 70 million U.S. viewers tuned in to watch Senator John Kennedy and Vice President Richard Nixon in the first-ever televised presidential debate. TV’s penetration continued. By the end of the 1980s, almost 53 million U.S. households subscribed to cable, while the number of cable networks increased from 28 in 1980 to 79 in 1989. Within the ensuing decade, the number of national cable video networks exploded to 171, and by the end of 1999, approximately 7 out of 10 television-owning households (more than 65 million) were cable subscribers.
This growth trend continued through 2010, but then reversed. A 2013 Nielsen study found that the number of American households with TVs had been dropping since 2011, as a growing number of people have been “cord-cutting” by making the switch from traditional cable companies to watching videos online via mobile phones or through streaming services such as Netflix, Amazon Prime, Hulu and others. Recent reports from TDG and Limelight Networks also indicate that consumers are, in increasing numbers, cutting the cord and moving to OTT and video streaming services. About 22 per cent of the 100 million households that subscribe to broadband do not have pay-TV service. And the number of consumers who have at least one OTT streamed video subscription has jumped by 15 per cent. Some consumers are even willing to pay for more than one streaming video service. Clearly, a growing number of people are taking a technological leap away from traditional television services and toward new offerings, creating an exciting time in the industry, and bringing massive changes and innovations in content delivery.
Online streaming services like Netflix and Hulu have dramatically altered the media consumption habits of Americans, particularly young adults. In fact, a recent study conducted by the Pew Research Center indicates that 61 per cent of people between the ages of 18–29 say the primary way they watch television shows is via streaming services on the Internet. Although the overall percentage of U.S. adults primarily watching TV via streaming services is considerably lower (at 28 per cent), these numbers clearly illustrate a monumental generational shift away from traditional television to algorithm-powered user-centric services offering curated, personalised viewing experiences.
Are Traditional Broadcasters on the Way Out?
For a long time, traditional broadcasters have viewed OTT as a threat and, accordingly, were slow to offer similar services themselves. Now, however, it seems the ice has broken, and the broadcasting and TV landscape is about to change. One major recent development was Disney’s announcement that it is severing its distribution deal with Netflix in favour of launching its own streaming service in 2019. And with official confirmation that Disney is buying some of 21st Century Fox’s core entertainment assets, including the Fox movie and television studio, Disney will have increasing ability to populate this streaming service with its own television productions, providing exclusive content to its own OTT subscribers.
The massive Disney-Fox deal illustrates that traditional players are not out of the game. Through a series of acquisitions and consolidations, participants in various segments of the value chain in television, filmed entertainment, and video are seeking to diversify and participate in other parts of the value chain (from content creation to distribution), trying to reach new audiences and revenue streams. A strategic shift to original content production and the wider adoption of direct-to-consumer offerings provide the opportunity for broadcasters to own the consumer relationship and access new markets without relying completely on third parties. These strategies will need to continue to evolve dynamically, as consumer behaviours evolve, which seems a certainty. Indeed, a recent Ericsson report predicts that by 2020, just 10 per cent of people will still watch TV only on a traditional screen.
The Brave New World
The broadcast industry is moving from an SDI workflow towards an IP-based workflow. A lot depends on this important technological shift, which is creating opportunities and risks (including the risk of being left behind). For instance, Comcast in the US announced its Blockchain Insights Platform with broadcast participants including Disney, Altice USA, Channel 4 UK and Cox Communications. The members will be able to match their individual data sets to target audiences across IPTV and OTT services. The objective of the Comcast Blockchain Insights Platform is to build better multi-screen ad planning and monetisation through information exchange of non-personal data for providing addressable advertising.
The IPTV workflow and other new technologies have the potential to change the way the users interact with television programs, creating new and improved viewing experiences.
Perhaps the easiest and most natural form of interaction is a conversation, and one of the latest innovations being used in broadcasting to increase engagement is chatbots. CNN is one of the networks that is leading the way in diving into the possibilities here, having rolled out a variety of chatbots over the past six months across messaging apps such as Facebook Messenger, Kik, and LINE, in addition to voice-activated devices like Amazon Echo. The company considers itself to be in the experimental stage at this point, and says it is constantly evolving its usage while exploring additional chatbot possibilities on both smart home and automobile platforms. Others, such as Hulu, HBO, Netflix, and Channel 4, have also utilised chatbots as an engagement tool. As these offerings improve, it is very likely that consumers will increasingly use, depend upon, expect, and eventually demand their availability.
As consumers embrace new technologies, many experts believe that virtual reality (“VR”) will become an essential aspect of television and video in the not-too-distant future. Some go so far as to propose that VR can realise its social and immersive potential in avatar animation technology that would allow show creators to bring members of the audience into TV shows. Technologies are being developed for personalising the avatar to resemble the participant’s appearance through photorealistic human modelling. While these particular technologies concentrate on live television shows (where an actual two-way-conversation is possible), they are emblematic of some of the perhaps unexpected ways that technology can and will continue to transform television.
Which emerging technologies enter our lives and change our day-to-day experiences remains to be seen. But regardless of the exact direction that television takes moving forward, new tech-enabled services and engagement tools will continuously shape television's future path, a path we all will traverse, whether as content creators, media executives, advertisers, technologists, or simply as consumers.
Sergey Bludov, Senior Vice President, Media and Entertainment, DataArt
Image Credit: Flickr / Sourav Biswas
In terms of transfer rates, for perspective, the USB 1.0 specification introduced in 1996 offered a maximum data transfer rate of 12 Megabits per second (Mbps). USB 2.0 maxes out at 480 Mbps. However, USB 3.0, 3.1, and 3.2 can be confusing to the user. Let’s break it down.
| USB Version | Issue Date | Marketing Term | Bits/Sec | Power Delivery |
|---|---|---|---|---|
| USB 1.0 | Jan 1996 | Low Speed | 1.5 Mbps | 5V/500mA |
| USB 1.1 | Aug 1998 | Full Speed | 1.5 Mbps / 12 Mbps | 5V/500mA |
| USB 2.0 | April 2000 | High Speed | 480 Mbps | 2.5W (max) |
| USB 3.2 Gen1x1 (was USB 3.0) | Nov 2008 | SuperSpeed USB 5Gbps | 5 Gbps | 4.5W (max) |
| USB 3.2 Gen2x1 (was USB 3.1) | July 2013 | SuperSpeed USB 10Gbps | 10 Gbps | 100W (max) |
| USB 3.2 Gen2x2 (was USB 3.2) | Sep 2017 | SuperSpeed USB 20Gbps | 20 Gbps | 100W (max), USB Type-C connector only |
| USB4 Gen 2x2 (was USB 4) | Sep 2019 | USB4 20Gbps | 20 Gbps | 100W (max), USB Type-C connector only |
| USB4 Gen 3x2 (was USB 4) | Sep 2019 | USB4 40Gbps | 40 Gbps | 100W (max), USB Type-C connector only |

Table (1) USB specification summary
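To put these line rates in perspective, here is a small illustrative Python sketch (not part of the original article) that computes the best-case time to move a 1 GB file at each specification's raw signaling rate. Real-world throughput is lower due to encoding and protocol overhead, so treat these figures as optimistic.

```python
# Best-case time to move a 1 GB file at each USB spec's raw signaling rate.
# Actual throughput is lower (8b/10b or 128b/132b encoding, protocol framing),
# so these numbers are for comparison only.

SPECS_MBPS = {
    "USB 1.1 (Full Speed)": 12,
    "USB 2.0 (High Speed)": 480,
    "USB 3.2 Gen1x1": 5_000,
    "USB 3.2 Gen2x1": 10_000,
    "USB 3.2 Gen2x2": 20_000,
    "USB4 Gen 3x2": 40_000,
}

def transfer_seconds(file_bytes: int, rate_mbps: float) -> float:
    """Seconds to move file_bytes at a raw line rate of rate_mbps."""
    return (file_bytes * 8) / (rate_mbps * 1_000_000)

one_gb = 1_000_000_000
for name, rate_mbps in SPECS_MBPS.items():
    print(f"{name:22s} {transfer_seconds(one_gb, rate_mbps):10.2f} s")
```

At these raw rates, the same 1 GB file takes roughly 16.7 seconds over USB 2.0 but only 1.6 seconds over USB 3.2 Gen1x1, which is why the generation behind the port matters far more than the connector shape.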
The USB 3.2 specification replaces the USB 3.x specification and introduces a new nomenclature. USB 3.2 defines the following connection speeds:

- USB 3.2 Gen 1x1 (“SuperSpeed USB 5Gbps”): 5 Gbps
- USB 3.2 Gen 1x2: 10 Gbps
- USB 3.2 Gen 2x1 (“SuperSpeed USB 10Gbps”): 10 Gbps
- USB 3.2 Gen 2x2 (“SuperSpeed USB 20Gbps”): 20 Gbps
The USB 3.2 protocol specification only defines the performance capabilities that may be implemented in a product; USB 3.2 is not USB Type-C, USB Standard-A, Micro-USB, or any other USB cable or connector and USB 3.2 is not USB Power Delivery or USB Battery Charging.
Figure (1) USB connectors
In the past, USB only transmitted data; however, the Type-C connector is designed to contain the data, power, and video transmission. Let’s take a look at what functions USB Type-C supports.
The latest power delivery specification is PD 3.0. The major difference between PD 2.0 and PD 3.0 is Programmable Power Supply (PPS) mode. Because many different devices might be plugged into a given PD charger, PD must remain universal; the PPS approach therefore demands a smarter sink side for fast charging, along with compatibility with QC 3.0/4.0/4.0+, MTK Pump Express (PE) 2.0/3.0, and Samsung Adaptive Fast Charging (AFC).
| Charging Technology | USB PD 3.0 | QC4+/4/3.0 | Samsung |
|---|---|---|---|
| Output Current | 5A max (via EMCA) | 2.6A/4.6A | 2A/5A |
| Protocol | USB PD | Quick Charge | Adaptive Fast Charging (USB PD 3.0 compatible) |

Table (2) Power Delivery Specifications
Another important feature of PD 3.0 technology is Fast Role Swap (FRS), which lets a device that is providing power quickly change its power role from source to sink, with the hub's dual-role port starting to sink power instead. USB accessories such as flash drives and monitors continue functioning during the FRS event, so FRS helps prevent the data loss that may occur when the hub upstream-facing port (UFP) power is unexpectedly removed from a device, a weakness inherent in PD 2.0.
Here are important PD features:
The USB-C connector is available at both ends or at just one end of the cable for the specifications below:
You can use a USB Type-C male to USB Type-A female adapter to connect an existing product; nevertheless, you will remain at the existing data speed, or even lower, due to the insertion loss that occurs between the adapter and cable. For example, if an old USB external hard drive uses a USB A/M to USB Micro-B/M cable, the data speed may be lower. Use a USB 3.2 Gen2x1 Type-C male to Micro-B cable instead of a Type-C adapter to ensure full-speed data transmission.
USB Type-C has multiple functions, yet your device may not support all of them.
The USB-IF, VESA, and Intel defined the logo usage guidelines to explain the USB-C technology inside.
If you see the trident is enclosed in a battery shape, it means the USB-C connector enables power delivery (PD).
Below is an example of a USB 3.2 Gen1x1 Type-C + PD port with charging function and a 5 Gbps maximum transmission speed.
The USB Type-C charging logo next to the type C connector means the port can be power charged.
Not all devices include a symbol on the product to help you identify the USB-C technology inside. A USB-C interface is common in MacBook and Chromebook computers. Usually, the USB-C interface supports charging and data transfer and provides video signal output, but we still recommend reading your product spec sheet to learn your device's USB-C features.
USB Type-C only describes the physical connector. USB Type-C is not the same thing as USB 3.2 or USB 3.1, so knowing the Type-C technology inside your source devices is important.

USB Type-C offers data transfer, charges devices, and transmits video, and all of these functions are performed by just one cable. For the cable to carry a maximum of 20 volts at 5 amps (100 W), it is important to use a quality USB Type-C cable.
USB4 will be available only for USB-C ports. USB4 Type-C will support Thunderbolt 3/4 and also includes a PCIe function. More USB4 devices will be launched in 2021, so you may see USB4 Type-C devices on your desk and in your backpack very soon.
USB-C Adapter Cables
| Part Number | Description |
|---|---|
| VA-USBC31-DVID-003 | USB-C to DVI-D Adapter Cable, 3 ft |
| VA-USBC31-DVID-006 | USB-C to DVI-D Adapter Cable, 6 ft |
| VA-USBC31-DVID-010 | USB-C to DVI-D Adapter Cable, 10 ft |
| VA-USBC31-DP12-003 | USB-C to DisplayPort (standard locking connector) Adapter Cable, 4K60, 3 ft |
| VA-USBC31-DP12-006 | USB-C to DisplayPort (standard locking connector) Adapter Cable, 4K60, 6 ft |
| VA-USBC31-DP12-010 | USB-C to DisplayPort (standard locking connector) Adapter Cable, 4K60, 10 ft |
| VA-USBC31-HDMI4K-003 | Type-C to HDMI (4K/60Hz) M Cable with HDR, 3 ft |
| VA-USBC31-HDMI4K-006 | Type-C to HDMI (4K/60Hz) M Cable with HDR, 6 ft |
| VA-USBC31-HDMI4K-010 | Type-C to HDMI (4K/60Hz) M Cable with HDR, 10 ft |
| VA-USBC31-HDMI4K-016 | USB-C to HDMI Adapter Cable, 4K60, 16 ft |
| VA-USBC31-VGA-003 | USB-C to VGA Adapter Cable, 3 ft |
| VA-USBC31-VGA-006 | USB-C to VGA Adapter Cable, 6 ft |
| VA-USBC31-VGA-009 | USB-C to VGA Adapter Cable, 9 ft |

USB-C Adapter Dongles

| Part Number | Description |
|---|---|
| VA-USBC31-RJ45C | Type-C to RJ45 + Type-C PD Charging, 100W |
| VA-USBC31-DP4KC | Type-C to DisplayPort adapter, PD 60W, DisplayPort 1.2 Alt Mode, 4K60 |
| VA-USBC31-VGAC | Type-C to VGA/PD, PD 60W, ABS housing, DisplayPort 1.2 Alt Mode, per-lane data rates up to 5.4 Gbps (HBR2), 1920x1200 @ 60Hz |
| VA-USBC31-DVIC | Type-C to DVI/PD, PD 60W, ABS housing, DisplayPort 1.2 Alt Mode, 1080p60 |
| VA-USBC31-HD4KC | Type-C to HDMI (4K/60Hz) + Type-C PD Charging, aluminium housing, PD 100W |
| VA-USBC31-RJ45 | Gigabit Adapter Dongle, USB 3.1 Type-C Male to RJ-45 |

USB 3.2 Gen2x1 (was USB 3.1) Cables

| Part Number | Description |
|---|---|
| USB3C10G-1M | Type-C Male to USB 3.1 Type-C Male, 10 Gbps, 1 m |
| USBC2MICRO-1M | USB 3.1 Cable, Type-C Male to USB 2.0 Micro, 1 m |
| USB3C-1M | USB 3.1 Cable, Type-C Male to USB 3.0 Type-A Male, 5 Gbps, 1 m |
| USB3C5G-1M | USB 3.1 Cable, Type-C Male to USB 3.0 Micro-B, 5 Gbps, 1 m |
| USB3CB-1M | USB 3.1 Cable, Type-C Male to USB 3.0 Type-B Male, 1 m |
| USBC2TYPEB-1M | USB 3.1 Cable, Type-C Male to USB 2.0 Type-B Male, 1 m |
| USBC2MINI-2M | USB 3.1 Cable, Type-C Male to USB 2.0 Mini-B, 2 m |
| USBC2TYPEB-2M | USB 3.1 Cable, Type-C Male to USB 2.0 Type-B Male, 2 m |

USB-C Docking Stations

| Part Number | Description |
|---|---|
| USBC2000 | USB-C Docking Station |
| USBC2000 4KDUAL | USB-C 4K 230W Docking Station |

USB-C Industrial Cables

| Part Number | Description |
|---|---|
| IC1103A | USB-C to DB9 Adapter, 5 ft (1.5 m); converts USB-C data to serial RS-232 data |
| IC1102A | USB-C to RJ-45 Serial Adapter, 6 ft (1.8 m); converts USB-C data to serial RS-232 data |
About the Author
George Liu has 12 years’ experience in the cabling, data, and video connectivity industry. As a Project and Product Manager at Black Box, he works directly with domestic and international OEM suppliers on new product launches. George is a certified PMP and CQE, and he is a master’s candidate in the industrial management program at National Taiwan University of Science and Technology.
Malware: why you should not forget it exists
Malware is an umbrella term used to describe all malicious programs that provide one or more benefits to cybercriminals while damaging the victims they affect. Malicious software can target a company's computer network or individual personal computers. It includes a wide array of types, such as Trojans, rootkits, ransomware, viruses, some potentially unwanted programs, spyware, etc.
Malware is usually installed on the system without the user’s knowledge or approval, and there could be many attack vectors that malicious actors employ to distribute it. For example, one of the most popular methods for malware distribution is malicious spam emails (otherwise known as malspam), although more advanced methods, such as software vulnerabilities or exploit kits, can be used as well.
Only an updated anti-malware program is capable of preventing its infiltration or mitigating the impact of an infected machine. Security experts urge people to consider installing a reputable application for protecting their computers and avoiding cyberattacks.
Malicious software is mostly used to initiate unauthorized activity on a computer and help its owner to generate revenue. It can be designed to steal personal information, like logins and banking data, or it can try to encrypt precious files on a computer and make its owner pay a ransom in exchange for the decryption key.
Nevertheless, some versions of malware (adware, browse hijackers, and similar) are used just for showing promotional content on peoples' computers and generating pay-per-click revenue. Almost every type of malicious software can block or corrupt legitimate security software. In addition, they can also update themselves, download additional malware or cause security flaws on the affected PC system.
Many computer users take malware lightly, believing that they can always outsmart criminals and stay on top. Unfortunately, that is not how it works: most malware is designed to stay invisible to computer users, so they won't even know that it is installed.
Additionally, some people intentionally engage in high-risk behavior and put their computer safety and personal security at risk. One of the best examples is software cracks or pirated programs – these are downloaded to avoid licensing process of software and receive it for free. This can often come at a cost, as there is almost no way to check whether a software crack is boobytrapped with malicious code, which would infect a computer with ransomware or other dangerous malware.
This is why you should not ignore the fact the malware is out there on the world wide web, and you should take precautions to prevent its infiltration and dire consequences of its presence on a PC.
Evolution of malware
The first example of malware showed up in 1986 when two brothers from Pakistan released a program known as Brain. It is considered the first malicious software sample, which was compatible with IBM. It spread through floppy disks and caused only annoying messages on the affected system.
The next serious threat showed up only in 1992. It was called Michelangelo and seemed to be much similar to viruses of our days. According to various reports, almost 20 thousand PC users reported about the data loss because of the Michelangelo virus.
At the beginning of the 2000s, security experts started noticing a serious growth in malware. Modern parasites, such as worms and trojans, started spreading around. Infected PCs were connected to botnets and turned into huge revenue machines.
Nowadays, malware authors are getting more and more serious and release more modern versions of computer threats. It has been reported that, since 2005, the amount of malware has increased from 1 million to 96 million different versions. Quite impressive, right?
By the end of 2020, ransomware has become the to-go malware that attacks regular computer users and prominent companies and organizations. With the help of malware, cybercriminals are also capable of breaching networks and servers of businesses, stealing data, and sometimes putting customers' private information at risk.
Crooks won't stop because malware became an extremely lucrative, although illegal, business. This is why home users, as well as corporations, should increase the level of security for all devices that are connected to the internet.
Malware infiltration techniques
Malware can be spread using various methods – here are the most common ones:
- Illegal and infected websites. Illegal websites have always been considered the main participant in the distribution of malware. The majority of such sites are filled with pornographic content, but you can also get infected after visiting a gaming, torrent, or even a legitimate news website that was compromised by attackers who injected a malicious script into it.
- Infected emails and attachments: Typically, this malware distribution scheme relies on botnets that are used to send misleading email messages to recipients. These fake emails are supposed to convince people to click the malicious link or download an infected executable file to the system. Nowadays, hackers have increased the number of fake email messages because people can hardly check their trustworthiness before downloading them to the system.
- Malvertising. Malware can also be spread thru malicious ads and links, and, in fact, they have been actively exploited nowadays. These links and ads can disguise themselves as updates for needed software, information about price reductions, and offers to take part in the survey. As soon as the victim clicks such link or ad, malware enters the system and causes unwanted activity.
Other malware. Different types of malware can be used for downloading additional threats to the affected PC system. If your computer is infected with ransomware or rogue anti-spyware, you may discover that another malware virus, such as adware or browser hijacker, was installed on your computer without your authorization as well.
Symptoms of the malware attack
One of the most common signs that your computer is infected with malware is fake security notifications or messages about your locked files. In this case, you may be infected with one of these malware types:

- Rogue (fake) anti-spyware
- Ransomware
The first group of threats seeks to scare users into believing that they are dealing with a reputable security utility that is trying to warn them about viruses. In reality, all these warnings are used just to scare users into purchasing fake anti-spyware.
The second group of malware shows a single warning, claiming that the user needs to pay a fine for illegal activity on the Internet. In addition, ransomware can also encrypt your important files and then make you pay the ransom.
If you have been suffering from redirects on a web browser or an excessive amount of pop-up ads, then you are dealing with:

- Adware
- Browser hijackers
These programs rely on a pay-per-click scheme, so their main aim is to hijack the web browser and display different kinds of pop-up ads. After clicking them, a user is redirected to sponsored websites. The majority of such programs are not malicious, but they have disrupted their victims with undesirable and sometimes even harmful content.
System slowdowns, stability issues, performance-related problems, and blocked security applications can also be noticed after the infiltration of malware. Unfortunately, this symptom can hardly help you to identify the type of malware you are dealing with, as it could be anything of the following:
Each of these malware threats is capable of using a considerable amount of computer resources. In addition, such programs can easily block legitimate security software and try to prevent their removal in this way. Beware that ignoring these symptoms can lead you to additional issues, like identity theft or losing your banking data and other information. Finally, keeping malware on the system can make your computer vulnerable to other threats in the future.
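As a side note on how basic detection works, here is a minimal, hypothetical sketch of signature-based scanning (the hash set is a stand-in for illustration only; real anti-malware engines layer heuristics, behavioural monitoring, and cloud reputation on top of simple signatures):

```python
# Minimal sketch of signature-based detection: compare SHA-256 hashes of
# files against a set of known-malware hashes. Purely illustrative.
import hashlib
from pathlib import Path

KNOWN_BAD = {
    # Stand-in "signature": this is the SHA-256 of empty input, used here
    # only so the sketch is runnable; real signature sets are curated feeds.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file incrementally so large files don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return files whose hash matches a known-malware signature."""
    return [p for p in directory.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD]
```

Because signatures only catch files that have been seen before, attackers routinely repack their malware to change its hash, which is exactly why the behavioural symptoms described above matter for detection.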
This program can also be added to the “fake PC optimization tools” category because it claims to be capable of improving a computer's performance, but it has nothing to do with that. Its activity is based on useless system scanners that report about invented registry entries, bad files, and other system components that are supposedly slowing PC's performance.
Once it convinces its victims that they have to remove this “harmful” data from their computers, the PC optimizer redirects them to its purchase page. Of course, you should never pay for its licensed version because you won't see any improvement after downloading it to your computer.
This is an especially dangerous example of malware, which has also been categorized as “Trojan Horse.” When inside the system, it can easily change the computer's settings, monitor your activity on a computer, and steal your important data.
It can also download other threats to the system without your authorization and knowledge. This virus has been actively spread with the help of fake Flash and FLV Player updates, so it is believed that thousands of computers have already been infected with COM Surrogate malware.
Although most adware rarely initiates serious issues on your computer, DNS Unblocker should be avoided. First of all, it can cause an excessive amount of pop-up ads and similar commercial content on each of your favorite websites.
Secondly, it may use these ads to redirect you to malicious websites. Finally, it can collect information about your browsing activity on the Internet and then share it with related (and unrelated!) parties.
Malware removal options
The most reliable way to remove malware is to use a reputable anti-malware tool. Only an automatic removal option can help you identify each malware version and eliminate every malicious component. In addition, you may be dealing with a particularly stubborn type of malware that blocks your security software. If our recommended tool failed to help you fix your computer, you should try these alternative steps:
- Repeat installation of anti-spyware. Then rename executable file and launch it;
- Reboot computer to Safe Mode and repeat installation of anti-malware;
- Install alternative anti-malware program;
- Fix virus damage with the help of a repair tool ReimageIntego;
- Contact 2spyware customer service via the “Ask Us” section.
Latest malware added to the database
Information updated: 2021-06-03
Why Data Classification Matters for Records Management Success
- Records management (RM) is the administration of digital or paper records. It includes the creation, maintenance, and destruction of records.
- RM aims to ensure that records are created and maintained to facilitate their retrieval and use while ensuring their authenticity, integrity, and reliability.
- Data classification is a core component of records management. It organizes data into categories to manage it more effectively.
With the proliferation of electronic records, it is essential to classify and manage them in accordance with their value and legal requirements. Discussions surrounding records management and data classification often lead to debates. Stakeholders tend to have very different opinions on what should be done with an organization’s data and how that data should be managed. However, some general principles can help to guide these discussions and lead to more productive outcomes.
Records Management: A New Approach to an Old Problem
As the world becomes increasingly digital, organizations find that their traditional methods of managing paper records are no longer effective. As a result, many are turning to records management solutions that can help them manage both digital and paper records. Records management is not new, but it has changed how it is approached.
In the past, records management was often seen as a compliance issue. Organizations were required to keep certain records for a certain period, and they needed to ensure that those records were properly stored and maintained. While compliance is still an important part of records management, the focus has shifted to include a wider range of benefits.
Today, records management is seen as a way to improve efficiency, save money, and protect an organization’s data. By properly managing their records, organizations can reduce the storage space they need, make it easier to find and retrieve information, and ensure that their data is properly protected.
There are several benefits to implementing a records management solution, including:
- Improved efficiency and productivity: A records management solution can help organizations more effectively manage their records, saving time and money.
- Reduced risk: A records management solution can help organizations to reduce the risk of losing important records.
- Compliance: A records management solution can help organizations to meet their legal and compliance obligations.
- Improved decision making: A records management solution can help organizations to make better decisions by providing easy access to records.
To have an effective records management program, it is important to first establish a clear understanding of the organization’s data and its location. This can be difficult, as data is often spread across different departments and systems. Once the data has been identified, it needs to be classified into different categories. This will help to determine how the data should be managed and what level of protection it requires.
Once the data has been classified, it is important to establish management rules and procedures. These rules should be designed to ensure that the data is accessible when needed and protected from unauthorized access. The procedures should also be reviewed regularly to ensure they are still effective.
It is also important to plan how data will be disposed of when it is no longer needed. This plan should ensure that the data is securely destroyed and that no unauthorized access to the data is possible.
How Do I Get Started With Records Management?
There are four basic steps involved in getting started with records management:
- Determine what type of system will work best for you. There are many different ways to organize your papers and documents, so take some time to explore your options and find what works best for you.
- Identify which papers and documents need to be kept. Not everything needs to be saved forever, so it’s important to know what can be safely discarded and what needs to be kept long-term.
- Store your papers and documents in a safe place. Once you’ve determined what needs to be kept, ensure it’s stored properly, so it doesn’t get lost or damaged.
- Maintain your system on an ongoing basis. Implementing a records management system is not a one-time task; it’s something you’ll need to do on an ongoing basis as new papers and documents come in.
What Is Data Classification?
Data classification is organizing data into categories that can be used to manage the data more effectively. One of the most important aspects of data classification is determining how data should be categorized. Data classification schemes typically use a hierarchical structure to organize data.
However, there are many different ways to approach data classification. The best approach will vary depending on the type of data being classified and the goals of the classification scheme. In general, however, data classification schemes should be designed to meet the following criteria:
- The categories should be clearly defined, so there is no ambiguity about what data belongs in each category.
- The categories should be mutually exclusive so that each piece of data can only be classified into one category.
Workplace data can be classified into four primary categories: public, internal use only, confidential, and restricted.
- Public data is information that can be accessed by anyone without restriction. This category includes information typically published by the organization, such as press releases, product descriptions, and marketing materials.
- Internal use only data is information that is not intended for public release. This category includes employee records, financial data, and trade secrets.
- Confidential data is information that must be kept secure and is only accessible to authorized individuals. This category includes supplier contracts, customer lists, and product development plans.
- Restricted data is information subject to special restrictions, such as legal limitations on its use or disclosure. This category includes Personally Identifiable Information (PII) and Health Insurance Portability and Accountability Act (HIPAA) data.
There is a reason why data classification is a critical component of effective records management. Without proper data classification, your records management efforts are likely to fail. Data classification provides a framework for understanding the value of data and how it should be protected. When data is properly classified, organizations can make informed decisions about how to store, manage, and dispose of data.
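As an illustrative sketch (the category names follow the list above, but the retention periods and access rules are hypothetical examples, not legal guidance), a classification scheme becomes actionable when each category maps to a concrete handling policy:

```python
# Sketch: drive storage, access, and disposal decisions from classification.
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal use only"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

@dataclass(frozen=True)
class HandlingPolicy:
    encryption_required: bool
    access: str            # who may read records in this category
    retention_years: int   # example period only; real values come from policy/law

POLICIES = {
    Classification.PUBLIC:       HandlingPolicy(False, "anyone", 1),
    Classification.INTERNAL:     HandlingPolicy(False, "employees", 3),
    Classification.CONFIDENTIAL: HandlingPolicy(True,  "authorized staff", 7),
    Classification.RESTRICTED:   HandlingPolicy(True,  "named individuals", 10),
}

def policy_for(record_class: Classification) -> HandlingPolicy:
    """Look up the handling policy for a classified record."""
    return POLICIES[record_class]
```

With a mapping like this in place, a records system can refuse to store restricted data unencrypted and can flag records that have outlived their retention period for secure disposal.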
Increasing Records Management Compliance in Your Organization
Records management compliance is critical for any organization. Maintaining accurate records helps to ensure the safety and security of your business operations and protect your customers, employees, and other stakeholders.
There are several steps you can take to increase records management compliance in your organization, including:
- Define your records management objectives and goals.
- Implement policies and procedures for records management.
- Educate employees on records management compliance.
- Conduct regular audits of your records management system.
- Implement technology solutions to automate records management.
Data governance is critical to the success of any organization. You must ensure that your organization complies with records management regulations and best practices. Implementing these practices can help safeguard your data and improve your bottom line.
When it comes to records management, there is no one size fits all solution. The best approach depends on the organization’s specific needs and the type of records being managed. Many different records management systems and software are available, so it is important to research to find the one that best suits your needs. Whatever system you choose, it is important to ensure that it is properly implemented and regularly reviewed to meet your organization’s needs.
Link Aggregation is a nebulous term used to describe various implementations and underlying technologies. In general, link aggregation looks to combine (aggregate) multiple network connections in parallel to increase throughput and provide redundancy. While there are many approaches, this article aims to highlight the differences in terminology.
Link Bonding (a.k.a. teaming, bundling, etc.)
This is generally implemented using two or more links between two logical devices: two servers, two switches, a server and a switch, or various other combinations. Using standards such as LACP, the links are combined into a single logical link, with traffic spread evenly across them. Since this is typically done at Layer 2, failure detection and isolation can happen quite quickly, limiting the impact of a link failure. This is also useful for increasing the available throughput between two devices without purchasing much more expensive hardware (2x1Gbps vs 1x10Gbps). See Figure 1 below.
Load balancing can also be used to describe link bonding. Generally speaking, though, load balancing is a term reserved for Layer 3+ operations. While application load balancers can be used to distribute load across an array of devices for a particular application or purpose, this article concentrates on Layer 3. In that sense, load balancing is commonly defined as a (mostly) even distribution of IP traffic across two or more links. This can be done by providing a device multiple equal-cost routes to the same destination over equal-sized links. See Figure 2 below.
Load sharing is loosely defined as spreading network traffic across 2 or more equal or unequal links/paths. Load sharing can exist between a 10Mbps WAN link and a 100Mbps WAN link. While load sharing often provides the slowest recovery time (dependent on implementation and failure), it is the easiest to implement, most flexible, and still provides levels of redundancy that link bonding and load balancing cannot. As an example, load sharing can allow the use of two different ISPs, with different link speeds, when NAT is implemented. If not using NAT (with other vendor products), this becomes more difficult and can result in outbound traffic traversing one link, while inbound traffic for the same conversation uses another. See Figure 3 below.
Implementation by Cisco Meraki
Cisco Meraki MS switches allow the use of the open standard LACP to provide Layer 2 link aggregation, in the form of link bonding as described above. The MS's LACP hashing algorithm uses traffic's source/destination IP, MAC, and port to determine which bonded link to utilize. This provides highly resilient and equal load distribution across 2 or more links, between two logical devices, with rapid failure detection.
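The hashing idea can be sketched in a few lines of Python. This is an illustrative model only, not Meraki's actual algorithm: a flow's addressing fields deterministically select one of the bonded links, so every packet of a given flow stays on the same physical link while different flows spread across the bundle.

```python
import hashlib

def select_bonded_link(src_ip, dst_ip, src_mac, dst_mac,
                       src_port, dst_port, num_links):
    """Map a flow's addressing fields onto one of num_links bonded links."""
    key = f"{src_ip}|{dst_ip}|{src_mac}|{dst_mac}|{src_port}|{dst_port}"
    digest = hashlib.sha256(key.encode()).digest()
    # Deterministic: every packet of the same flow picks the same link.
    return int.from_bytes(digest[:8], "big") % num_links

# Two packets of the same conversation always use the same bonded link:
flow = ("10.0.0.1", "10.0.0.2",
        "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 49152, 443)
assert select_bonded_link(*flow, num_links=2) == select_bonded_link(*flow, num_links=2)
```

Because the choice is a pure function of the flow's header fields, no per-flow state is needed to keep a conversation on one link.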
Please refer to the MS Series Administration Guide for details on how to implement.
Link Aggregation is supported on ports sharing similar characteristics such as link speed and media-type (SFP/Copper).
Cisco Meraki security appliances use a proprietary algorithm to provide load balancing across two Layer 3 links (if configured). This can be customized to use different ratios and specific rules for outbound traffic. As NAT is used, flows that are part of a particular conversation will remain on the link they are placed.
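A simplified model of this behavior can be sketched as follows. This is illustrative only, not the MX's proprietary algorithm: new flows are assigned to an uplink according to a configurable ratio, and once assigned, a flow is pinned to its uplink for the rest of the conversation.

```python
import random

class UplinkBalancer:
    """Toy flow-pinning balancer: new flows follow a weighted ratio,
    existing flows stay on the uplink they were first assigned to."""

    def __init__(self, weights):
        self.weights = weights      # e.g. {"wan1": 3, "wan2": 1}
        self.flow_table = {}        # flow id -> chosen uplink

    def uplink_for(self, flow_id):
        if flow_id not in self.flow_table:
            links = list(self.weights)
            chosen = random.choices(
                links, weights=[self.weights[l] for l in links])[0]
            self.flow_table[flow_id] = chosen
        return self.flow_table[flow_id]

balancer = UplinkBalancer({"wan1": 3, "wan2": 1})
flow = ("10.0.0.5", "93.184.216.34", 443)
first = balancer.uplink_for(flow)
# Later packets of the same conversation are pinned to the same uplink:
assert balancer.uplink_for(flow) == first
```

The flow table is what prevents the asymmetric-routing problem described above: a conversation never switches uplinks mid-stream.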
Please refer to MX Load Balancing and Uplink Preferences for details on how to implement.
Configuring Link Aggregation between MS and Cisco Switches
You may want to set up and configure a bonded link between your Meraki MS series switch and a Cisco switch. This is often referred to as link aggregation, link bonding or EtherChannel.
In order to configure 2 or more ports (up to 8) to be a port aggregate, simply navigate to Switch > Monitor > Switch ports and select the target ports, then choose "Aggregate". It is recommended that you do not have the target ports physically connected to anything during this step.
On your Cisco switch, you must enable LACP by setting the EtherChannel mode to active or passive depending on the behavior you desire. For further information, please see Cisco's documentation on Configuring EtherChannel (this document is for the Catalyst 3000 series).
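A minimal IOS-side sketch is shown below. The interface range and channel-group number are illustrative examples; the exact syntax varies by platform and IOS version, so consult the documentation for your switch.

```
! Bundle two physical ports into an LACP EtherChannel
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active    ! "active" initiates LACP; "passive" only responds
!
interface Port-channel1
 switchport mode trunk          ! match the port configuration on the Meraki aggregate
```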
The Role of Computer Vision in AR and VR
As AR and VR have expanded their reach into almost every industry, from travel and healthcare to gaming and eCommerce, new business opportunities are sprouting up. According to MarketWatch, the global augmented and virtual reality industry is estimated to grow at a CAGR of 54.91% from 2018 to reach USD 409.99 billion by 2025.
Augmented reality (AR) is a huge hit across industries, whereas virtual reality (VR), created by gamers for gamers, has so far been largely limited to gaming and entertainment. Even though the technologies serve different audiences, both are key trends to watch over the next few years.
In this article, InData Labs, a computer vision technology provider, will give you a broad overview of what computer vision means to AR and VR. Learn about the current use cases for AR/VR in various niches and how using technology can benefit your business.
AR, VR, And What CV Means to Them
VR immerses a person in a virtual world, simulating real presence through the senses. This simulation is achieved through a source of content and hardware such as headsets, treadmills, gloves, and so on. Computer vision aids virtual reality with robust vision capabilities like SLAM (simultaneous localization and mapping), SfM (structure from motion), user body tracking, and gaze tracking.
Using cameras and sensors, these functions help VR systems analyze the user’s environment and detect the headset’s location. So, computer vision and virtual reality work together to make products more sophisticated and user-responsive.
You can read one of our previous articles to delve into more details on how computer vision works.
Augmented reality has the potential to instill awe in us by converging the physical and digital worlds. In practice, computer vision-based AR overlays imagery or audio onto existing real-world scenery, and it all begins with computer vision. Computer vision (CV) for augmented reality enables computers to obtain, process, analyze, and understand digital videos and images. By looking at an object's appearance, location, and surroundings, it identifies what the object is. More simply, this is how Instagram recognizes your friends in photo tags, how you can log in to your bank account with your eyes, and how you can get yourself a flower crown on Snapchat.
Take Snapchat filters. When you look at photos, you see your face, while computer vision sees data. Machine learning, a method of data analysis incorporated into the AR app, simplifies object detection. CV, in turn, maps facial features from photos or videos: it reads the geometry of your face, captures facial landmarks, and also takes into account universal truths about the human face, such as asymmetric features and mimic facial muscles.
To unpack a predefined AR content (be it a deer face or hearts around the head), you should get an accurate face scan. In this case, computer vision enables AR image processing, optical tracking, and scene reconstruction, which is vital for any immersive app. Next, a computer vision-based AR system scans your photo with sensors to add the real-time visual effects to your face. It all brings up the mix of the physical world and AR data.
In fact, computer vision goes above and beyond working closely with AR and VR to provide the users with sophisticated interactive content.
How AR and VR Benefit Businesses in 2020
From theory to reality, AR and VR have become mainstream technologies paving the way towards mass adoption in various industries. Obviously, both technologies are at the forefront of innovation and aren’t going anywhere.
Computer Vision and Augmented Reality for E-Commerce
Recently, AR has made it possible for retailers to showcase their products in real-time. IKEA was one of the first to roll out an AR application that enabled buyers to visualize products within their homes. Today, more and more retailers are utilizing AI software to elevate the shopping experience and ease purchasing decisions for their clients.
The same goes for online clothing stores. The technology enables shoppers to virtually try on clothes and find their perfect fit. Nowadays, the number of online stores unveiling their fitting rooms by an app is growing. For now, computer vision-based augmented reality has been proven efficient in providing a better customer experience, improving brand perception and boosting sales.
AI Augmented Reality For Education
When it comes to education, AR and VR are total game-changers. They have the potential to improve the learning process and better motivate and engage students. More importantly, the technologies have proven successful in real-time training. When theory alone fails to improve recall, AI-powered virtual reality comes to the rescue: students have a range of VR tutorials and modeling sessions where they can gain hands-on experience and polish their techniques.
Machine Learning-Based VR for Gaming
Virtual reality is evolving at breakneck speed, and some of the latest VR breakthroughs would not have been possible without machine learning and computer vision.
According to Grand View Research, the global virtual reality in gaming market size is expected to reach USD 45.09 billion by 2025. Lately, virtual reality powered by computer vision has given a brand new twist to the video game industry. There’s a number of benefits VR has to offer for the gaming business:
- significant increase in sales
- enhanced user experience
- improved player retention
AI-Powered Augmented Reality for Travel and Tourism
The technology has been actively utilized by travel agencies and hotels to improve overall brand reputation and boost revenue. Within the hospitality and tourism industry, augmented reality fueled by computer vision acts as a powerful tool to bring more interaction into hotels and resorts and to encourage travelers into impulse bookings.
On top of that, some travel agencies develop AR apps that offer breathtaking immersive tours. The ultimate goal of those apps is to take the potential traveler on an interactive tour somewhere sunny and give them as much information as possible about the destination. For sophisticated travelers, there is also the opportunity to get a unique drone's-eye view of a sight or a resort. Thanks to AR drone image processing, tourism agencies are now reaping the benefits of drone technology to promote tourist destinations.
VR and AR for Healthcare
These days, AR is making a significant contribution to the healthcare industry. The innovation empowers healthcare professionals to provide better diagnosis and make surgery safer. Using AI coupled with computer vision and AR, surgeons can now place surgical incisions more precisely and prevent tissue damage.
Moreover, computer-vision based virtual reality is the next big thing for mental health and psychotherapy. It’s utilized to treat patients with post-traumatic stress disorders (PTSD), depression, anxiety, and other mental-related issues.
Thanks to AR and VR, we have an opportunity to blur the line between the digital world and physical reality. Both technologies are promising and here to stay. Since computer vision is the fuel behind most AR and VR applications, in the coming years we will see the three of them working ever more closely together.
Build Your Computer Vision Models with InData Labs
Have a project in mind but need some help implementing it? Drop us a line at [email protected], we’d love to discuss how we can work with you.
Python modules are libraries that include a set of predefined functions, variables, and other objects. In other words, they are files that include ready-to-use functions, variables, and more. There are well-known Python modules, and you can also create your own. With the module architecture, you do not need to write every function again and again: you can write a function once and then use the module in which it resides.
There are many widely used modules in Python; we can call these well-known modules. Examples include the math, os, sys, random, and datetime modules.
The extension of module files is “.py”. Both the well-known modules and newly created modules have this extension.
So, how can we use an existing well-known module? How can we create a new module in Python? And how can we use a newly created module? In this lesson, we will focus on these questions.
You can also Download Python Cheat Sheet!
To use the functions or variables of a well-known module or a newly created one, we should first import the module in which they reside. We use the "import" keyword followed by the name of the module to add it to our code.
For example, to use the functions in the math module, we simply import it by name.
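A minimal sketch of importing and using the math module:

```python
import math

# After the import, the module's contents are reached with dot notation:
print(math.sqrt(16))  # 4.0
print(math.pi)        # 3.141592653589793
```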
You can also check Python Math Functions
You can also watch the video of this python lesson!
If we need only specific parts of a module, we can name them in the import statement with the "from" keyword; the other parts of the module are then not imported.
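A minimal example:

```python
from math import sqrt

# sqrt is now available directly, without the math. prefix:
print(sqrt(25))  # 5.0
```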
Here, we import only the sqrt function from Python's math module.
To create a module, the only thing we need to do is define our functions and save the file with the ".py" extension. After that, our module is ready to use. To use it, we put the module in the same directory as the code file that will import it, or in the lib directory of the Python installation (for example, "C:\Users\asus\anaconda3\Lib" on my PC).
Let's show this with an example. First we will create a Python module that includes two functions; then we will import that module and use the functions in another code file.
We will save this file as "ourmodule.py" and then import it by that name in our other code file.
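The original code listings were lost in extraction, so here is a reconstruction of the idea. The two function names (greet and add) are illustrative assumptions, not the tutorial's originals; the module file is written at runtime so the example is self-contained.

```python
# Contents of ourmodule.py (the function names are illustrative):
module_source = '''def greet(name):
    return "Hello, " + name

def add(a, b):
    return a + b
'''

with open("ourmodule.py", "w") as f:
    f.write(module_source)

# Import the module by its file name (without the .py extension):
import ourmodule

print(ourmodule.greet("Python"))  # Hello, Python
print(ourmodule.add(2, 3))        # 5
```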
We can also define variables in a Python module; after importing the module into another file, we can use those variables directly. Modules are a very efficient mechanism for this kind of reuse as well.
In the below example, we will define a list in a module and then use it with import keyword.
We will save this file as characters.py, and when we need its contents, we will add the module with the import keyword.
When we import characters.py into another code file, the list can be used there just as if it had been defined locally.
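The listings for this example were also lost in extraction; here is a reconstruction. The list name and its contents are assumptions for illustration, and the module file is written at runtime so the example is self-contained.

```python
# Contents of characters.py (list name and values are illustrative):
with open("characters.py", "w") as f:
    f.write('characters = ["Anna", "Ben", "Carla"]\n')

import characters

# The list defined in the module can be used as if it were local:
for name in characters.characters:
    print(name)
```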
We can define aliases for modules. To do this, we use the "as" keyword after the import keyword and the module name.
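A minimal example:

```python
import math as m

# The alias m now refers to the math module:
print(m.sqrt(36))  # 6.0
```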
We can use Python's dir() function to list the names defined in a module. Let's use it with the math module: the result is a list of all the functions (and other names) defined there.
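For instance:

```python
import math

names = dir(math)
# dir() returns the module's defined names, e.g. 'sqrt', 'pi', 'floor', ...
print("sqrt" in names)  # True
print("pi" in names)    # True
```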
This concludes the modules part of our Python course.
The structure and requirements of data centers can vary considerably. A data center created for a cloud service provider like Amazon, for example, meets different facility, infrastructure, and security criteria than a wholly private data center, such as one built for a government facility dedicated to securing sensitive data.
Regardless of classification, an effective data center operation is achieved through a balanced investment in the building and the equipment it houses. Furthermore, because data centers frequently store an organization’s business-critical data and applications, both the facilities and the equipment must be protected against intrusions and cyberattacks.
The following are the main components of a data center:
- The facility is the usable space available for IT equipment. Because they provide 24-hour access to information, data centers are among the world's most energy-intensive facilities. Both design and environmental control are addressed to keep equipment within prescribed temperature and humidity ranges.
- The core components are the equipment and software for IT operations and for data and application storage: storage systems, servers, network infrastructure such as switches and routers, and information security elements such as firewalls.
- Support infrastructure is the equipment that helps securely maintain the highest possible level of availability. The Uptime Institute divides data centers into four tiers, with availability ranging from 99.671 percent to 99.995 percent.
The following are some examples of supporting infrastructure components:
- Battery banks, generators, and redundant power sources are examples of uninterruptible power supplies (UPS).
- Computer room air conditioners (CRAC), heating, ventilation, air conditioning (HVAC) systems, and exhaust systems are examples of environmental control.
- Biometrics and video surveillance systems are examples of physical security systems.
- Personnel available 24 hours a day, seven days a week, to monitor operations and maintain IT and infrastructure equipment.
Data centers have seen substantial changes in recent years. As enterprise IT demand continues to migrate toward on-demand services, data center infrastructure has transitioned from on-premises servers to virtualized infrastructure that supports workloads across pools of physical infrastructure and multi-cloud environments.
Visit our blog section to learn more about the components of data centers.
Technology's Role in Defining the Future of Food Industry
Today's technology is a driving force for innovation that challenges even the most established companies to modernize and rethink how they remain relevant. No industry is impervious to the revolutions of technological progress; some industries, however, realize the need for these transformations more slowly than others. The food packaging industry has long lagged behind, plagued by archaic practices and an increasingly frustrated consumer base. Finally, however, technology and innovative ideas are overcoming that frustration. Technology affects almost every aspect of daily life, even influencing people's food choices.
The shift of the population from rural to urban life will significantly increase demands for future food supplies and shorter, more efficient supply chains. Furthermore, wealth inequality continues to increase, creating economic disparities in access to healthy food between socio-economic and income groups. Healthy eating is a growing trend in developed nations across the world.
Many companies, such as IBM, have developed blockchain implementations to track food deliveries. The goal is to let anyone scan a product and see complete information about it, such as where it was grown and when it was shipped. This can prevent waste if contamination is found, since only the affected batches need to be discarded. UBS projects that food delivery will grow from $35 billion to $365 billion by 2030. Many start-ups also focus on offering shared kitchens for new and established restaurants.
A lot of food in the United States today goes to waste: even when consumers do not buy ripe fruit or throw out their leftovers, food ends up in the trash. To combat this waste, several companies have begun turning food waste into cosmetics. Elsewhere, it might seem surprising to imagine edible food coming out of a printer, but it is a legitimate operation: one company has already produced 3D-printed candies from pure sugar and has collaborated with Hershey's to create 3D-printed chocolates.
The food industry will keep evolving and impacting lives in unimaginable ways as these technologies grow. As technology advances, people's relationship with food also evolves; it can teach us how to develop without harming the planet and may one day help solve world hunger.
Check This Out: Food and Beverage Technology Review
- The rapid growth in the use of applications has significantly added to the digital clutter that exists on people’s digital devices: for example, users typically install 12 Android apps every month but delete only 10, in effect adding two apps to their device on a monthly basis.
- As a result, some apps are left unused and idle on digital devices: On computers, at least 30% of installed applications are never used.
- The danger is that apps remain active even when the user is not using them: on average, users have 66 apps on their Android device, and an experiment that installed a representative sample of 66 of the most popular Android apps found that 54 of them consumed 22MB of traffic per day without the user even interacting with them.
- Poor user maintenance of device content also generates a build-up of digital clutter: although in 55% of cases people regularly revise the contents of their device and delete unused docs and apps, in 32% of cases people only do this occasionally and in 13% of cases they try not to delete any docs and apps at all.
- Improper app hygiene extends to managing apps: the survey found that 65% of users update apps on their smartphones as soon as they are released, but 24% only do so when they are forced to. Moreover, only 40% intentionally adjust apps settings of each application on smartphone.
- This can be a problem because apps have access to user sensitive information: 96 in 100 Android apps start working without users launching them, and 83 in 100 have access to sensitive user data such as accounts, contacts, messages, calls, files stored etc.
- Some of the major problems that result from digital data overload are an increase in intrusive and unsolicited ads that often use vulnerabilities to penetrate the device: this was cited by survey respondents as a problem on smartphones (61%), tablets (47%) and for computers (55%). Other prominent threats highlighted were devices developing glitches (38% on smartphone) or a malware infection (28% on computers).
- This digital clutter and careless user behaviour is leaving devices — and the sensitive information they contain — vulnerable to security threats: our survey reveals that over half (56%) of users have lost data on their smartphone, over a third (39%) on tablets and 51% on computers.
The digital age has created a mountain of digital clutter, and the explosion in the use of applications means that an increasing amount of information is saved on smartphones, tablets and personal computer devices.
Whereas, once upon a time, users quickly reached the storage capacity of their phone and had to make space available for new data, today capacity is less of an issue so people take five photos instead of one, “just to be sure”. More space and power equals more data and apps.
Personal and sensitive information, such as address book contacts, text messages, videos and photos, now clutter our digital world, but failure to look after this information is putting it at risk. As much as in the physical world, users need to take the cleaning of their digital ‘home’ more seriously. Just like a clean, uncluttered room breathes fresh energy into your home and life, in the same way, an uncluttered computer or smartphone results in a more enjoyable and productive experience.
Digital clutter is a phenomenon that is the result of poor user device maintenance. Maybe because users no longer have to worry about storage limits, they’ve become lethargic about reviewing and protecting their devices and updating the apps. This means that, increasingly, our digital junk sits on our devices or in the cloud in perpetuity.
The problem is that the more we accumulate digitally, the more people open up their device to security threats that risk sensitive data, such as passport and credit card information, falling into the wrong hands. That’s why users should take time to update and delete unused apps for the essential care of their device.
The purpose of this study is to find out the extent to which users are drowning in digital clutter. We aim to reveal what problems this causes users and offer some useful advice on what can be done to improve the security and safety of sensitive information.
This study is based on insight gained from a unique combination of online research and technical analysis of security threats and app performance:
- Statistics from the Kaspersky Security Network, a cloud-based system that processes depersonalized cyberthreat-related statistics received from millions of Windows and Android devices owned by Kaspersky Lab users across the globe.
- A real-life experiment on Android devices that analyzed the performance of applications was conducted in January 2017 by Kaspersky Lab internal testers.
- An online survey conducted by research firm Toluna and Kaspersky Lab in January 2017 assessed the attitudes of 16,250 users aged over 16 years old from 17 countries. Data was weighted to be globally representative and consistent, split equally between men and women.
Not all the results from each study have been included in this report. To request further data please contact Kaspersky Lab at [email protected].
These days our devices are used as storage for all aspects of our digital lives. Users tend to save everything, even things that they know, deep down, they’ll never need or be able to find. They save files downloaded, apps, emails archived, photos taken and even those funny videos of cats!
This type of data is a major contributor to unnecessary clutter on our devices. But unlike clutter in the real world — where stacks of papers, books, clothing and other assorted junk can be physically seen — there are no obvious outward signs that could indicate a problem.
Our global online survey shows that a large majority of people store a wide range of information on their devices. Overall, the most common items stored on digital devices are general photos and videos for nine in ten (90%) of respondents. This is closely followed by personal emails (89%) and photos and videos of travel (89%), address book information (84%) and personal text or messages (79%).
Data stored on devices:
We found that in almost two thirds (62%) of cases users have their passwords, including auto-login for websites and apps, stored on their devices. More than half also store financial and payment information (62%), or scans of their passports, driver’s licenses, insurance certificates and other sensitive documents (57%).
We see differences from country to country. For example, storing scans of sensitive documents is particularly popular among users in the UAE (87%) compared to Japan (32%). Similarly, the storage of private and sensitive photos and videos is less common in Europe (48%) than in Russia (67%).
In the face of this ever-increasing mountain of data, we set about understanding how often users tend to wipe data and apps from their devices. Overall, the survey found that in only 55% of cases users said they regularly revise the contents of their device and delete everything they haven’t used in a long time, and in a third of cases (32%) users said they do this occasionally, for example, when they don’t have any more space available. Worryingly, in one-in-ten (13%) cases people said they never delete anything.
Attitudes to app data
A few years ago it would have been hard to predict the rapidly exploding app industry, but there’s no sign of any slowdown in its growth yet. Apps are available for all areas of digital life, from fitness trackers to productivity tools, and from travel planners to social media. The apps and the data they accumulate leads to digital clutter that can rob phones of performance, decrease available storage space and put users at risk of security threats.
Different pieces of research conducted by Kaspersky Lab show that the apps and programs used vary according to device. Generally, and perhaps unsurprisingly, we see that PCs are mostly used for work, tablets for entertainment and smartphones for communication.
Moreover, in our survey we see that the build-up of digital clutter is most acute on the device we carry around with us all day — the smartphone. Smartphones are the devices that typically have many apps that contain the most sensitive user data, such as contactless payment information or private photos and personal messaging.
Applications installed on user devices:
However, we can reveal that large groups of users fail to undertake the basic procedures for keeping this device clutter-free and therefore less vulnerable to security threats. Research based on data from the Kaspersky Security Network (KSN) shows that, on average, users have 66 apps installed on their Android device. However, in the survey users say they only have 15 which perhaps suggests that users aren’t aware of the number of apps, and therefore the volume of data, they carry around with them. We see that, on average, users typically install 12 apps on their Android devices every month but delete only 10, in effect adding two apps to their device on a monthly basis.
With users adding more apps and more data to their devices every month, attitudes to app cleansing are important in order to combat the problem of digital clutter. The survey showed that although three quarters of users (77%) had deleted a smartphone app within the past month, 12% don't remember when they last deleted one. The smartphone is the device that is cleaned most often: for example, 26% of users do not remember when they last deleted an app from their computers. People probably clean smartphones more often because there is less space available: 35% deleted an app from a smartphone because there was no more space, compared to just 13% on computers.
This point is supported by KSN research that shows computer applications are often redundant on the machine. We found that people never use at least a third (30%) of applications installed on their computers over a six-month period (excluding drivers, runtime software and other programs users do not work with directly).
Best practice for the general maintenance of applications installed on devices requires people to understand user agreements and adjust the settings for apps. However, we found that only a third (32%) of respondents read agreements carefully and are able to decline installation of the app on their smartphone if they are not satisfied. This is important because apps have access to a lot of sensitive information on devices.
Furthermore, the survey found that under half (40%) intentionally adjust the settings of each application on their smartphone. This is particularly popular in the US (48%), UAE (46%) and Asia-Pacific (44%) rather than in Israel (26%), Japan (33%) and Russia (36%).
App settings enable the user to manage how the app interacts with the device. For example, apps can get access to user sensitive information, track user locations and share user data with third party servers. Failure to manage these settings can result in unused apps gaining access to information on the device without the user being aware.
Apps and devices
The issue of app cleansing and maintenance is especially important for smartphones, because they contain the most sensitive data and are constantly with us. Improperly managed smartphone apps also represent a security threat because they often transmit data even when they're not being used.
Kaspersky Lab set up an experiment to test how the world’s top Android apps, defined by KSN statistics, behave in a variety of circumstances. We downloaded 66 apps (the average number of apps installed on one Android device) selected according to popularity. These top 66 apps in total took up about 5GB of storage. The devices were formatted, fitted with SIM cards and restarted, connected to the mobile Internet and with Wi-Fi set-up.
None of the third-party applications were launched by testers and we recorded figures for the data usage of each application. Of the top 66, only 12 applications didn't generate any traffic at all. Interestingly, the remaining 54 consumed an average of 22MB of traffic per day without the user ever interacting with them. The resulting impact on the device includes issues with performance and battery life.
This is backed up by the technical findings of the KSN research. Analysis shows that of 100 Android apps that users can manage through installing, deleting or updating, 96 start working without users actually launching them manually. Furthermore, 83 in 100 have access to user sensitive data, such as contacts, files and messages, and can even make calls and send SMSs. This is a tempting prospect for cybercriminals looking to exploit sensitive data.
These findings highlight the importance of managing and deleting unused apps, because they are often working in the background, even if the user is not aware of it.
It is important for users to update apps as soon as new versions are released because they might include security patches that prevent or reduce vulnerabilities in the app. We found that 65% of users update apps on their smartphone as soon as they are released, while a quarter (24%) only do this when they are forced to. The trend for updating apps as soon as they’re launched was found to be particularly strong in the UAE (78%) and Latin America (68%), when compared to Russia (55%).
In contrast, computer users are less likely to update apps. 48% of users update apps as soon as possible, 30% do this only when they are forced to, and 12% try not to update apps on their PCs at all. According to KSN stats, no more than half of users install updates for the most exploited software (such as pdf readers, browsers, etc.) on their computers during the week after these updates are released.
The most popular apps on the app stores issue updates that will often include relevant bug fixes as frequently as weekly, while other release cycles may happen every few months. In fact, Kaspersky Lab’s study into app usage revealed that, on average, the most popular 300 Android apps are updated every 45 days. However, we also show that 88 apps from this list are never updated, leaving them — and their users — at risk of exploitation by cybercriminals.
This is risky, because apps that are not updated are doors for malware to exploit vulnerabilities in the apps and OS as a means of penetrating the device. In 2016, four million exploits were detected, which is 16% more than in 2015. Overall, in the last year Kaspersky Lab’s solutions combated 758,044,650 attacks on Internet users around the world, and many such attacks were using vulnerabilities in software and OS.
Problems with data ‘obesity’
The phrase data obesity has been coined in recent years to describe the way users clutter digital devices with excess information. The analogy implies that you may be continually snacking on new information which may provide very little value in your life — and then creating a number of problems by storing all that nutrition-less information on your devices.
Users exhibit careless behaviour towards the hygiene of their devices despite the fact that we found fears over personal data loss are well founded across all digital devices. Our survey reveals that 56% of users have lost data on their smartphone, 39% on tablets and 51% on computers. In most of these cases, data was deleted because of a damaged device or a user accidentally deleted it, and the third most common reason was malware infection.
We noticed vast differences between users in different countries, however. For example, in the case of smartphones, we found 73% of those in the Asian-Pacific region have lost data in contrast to 44% in Europe.
A wide range of other problems associated with digital data overload are also highlighted in the study. For example, the main problem with devices cited by respondents was intrusive and unsolicited ads, for smartphones (61%), tablets (47%) and for computers (55%). These ads often use vulnerabilities to penetrate the device, and in 2016 the most popular and dangerous mobile Trojans were advertising Trojans that can obtain superuser rights on the device.
Users highlighted problems with battery life, lack of memory and unsolicited ads on smartphones more often than on tablets and computers. This is caused by poor device maintenance and happens when users fail to delete apps and update them, opening the device up to security threats. At the same time, computers faced glitches and malware more than smartphones and tablets.
This report demonstrates the scale of the problem with the data that permeates across all the devices that help us to manage our digital lives. Digital clutter is increasing and so are user problems associated with it.
User behaviour and attitudes to applications are the source of many of these issues. Many users fail to undertake the simple but essential care of their device that cleans and updates software and apps, adjusts settings and uninstalls unused apps. These actions are important to the hygiene of devices and the data that exists on it — from phone through to tablet and computer.
Just as it’s become traditional to clean out our closets, attics, and garages each spring, it’s good practice to regularly clear out and refresh digital spaces. It keeps them running smoothly and protects security, so spending a little time to get your digital house in order could prevent you from losing important or sensitive information in the future.
With time, we’ve unknowingly accumulated mountains of unwanted data that could leave us exposed to ever-increasing cybersecurity threats. The digital world is growing and so is our capacity to store this data. But the fact that we have the capability to store vast information doesn’t mean we should.
Frank Schwab, Professor in Media Psychology at the University Wuerzburg, says that because users don’t understand the risks associated with digital clutter, they don’t invest the time in good device and app maintenance. He said: “People tend to be irrational in evaluating risk in everyday life — in both directions. In some cases, we are much too careful, in others we hardly see the risks. Rational decisions require a very conscious mental act, and this also applies when dealing with digital devices. It is exhausting and requires effort, so we need to invest in it and in many cases we are simply too lazy to do so.”
Frank Schwab continued: “Cyberthreats, in addition, are such that most of us cannot understand how they work, they do not produce dramatic images and they are hardly ever the topic of everyday conversation. For our emotional functioning, which is based on simple rules and experiences, there is no reason to change behavior, as the feeling of threat is extremely low. This means that investing time in keeping our devices clean is psychologically not relevant to us, as we don’t feel any consequences from extensive data clutter.”
Users are advised to take action to clear the clutter from their phone, tablet and computer with the following steps:
- Complete an audit — ensure you know what information is stored where. This will help you to clean devices more easily and give you peace of mind that your data is secure.
- Clean your device — once you know where everything is stored, it is easy to delete unused and unwanted files and apps that may pose a risk to your device.
- Update software — regular updates should be undertaken as soon as new versions are released.
- Use dedicated software — software cleaners such as the one integrated into Kaspersky Lab’s flagship security solutions, scan all applications installed on your device and mark those posing potential risk or those that are rarely used. It will also inform the user if the application slows down the user’s device, provides incomplete/incorrect information about its functions, operates in the background, and shows banners and messages without permission (e.g., advertising). Kaspersky Lab’s flagship security solution for home users also contains an Application Manager feature that sends alerts if a program has been installed without their awareness or clear consent, for example, as additional software during the installation of another application.
For further information on Kaspersky Internet Security go to: https://www.kaspersky.com/home-security
Auto NAT Mode
Certain configurations of your Meraki Go access point(s) may yield unexpected results. To help ensure that you are always getting the best performance and configuration of your Meraki Go device, the Meraki Go app has certain automatic safe guards.
Note: Meraki Go does not recommend using a Meraki Go access point as an alternative to a network router. Our recommended configuration is to place your Meraki Go access point behind a network device that is already configured to perform NAT (i.e., your ISP's router/modem combo, an existing router you have installed, etc.).
What is Auto NAT Mode?
Auto NAT Mode is an automatic configuration of NAT mode (as opposed to bridge mode) for your Meraki Go network, without the ability to switch the network back into bridge mode.
Why is Bridge Mode not Available?
If your Meraki Go access point has a public IP address (an IP not listed in the ranges below), it is unlikely that the rest of the devices on your network will be able to receive an IP address from your ISP (most ISPs only provide 1 public IP address per modem). By automatically forcing your Meraki Go access point into NAT mode, we ensure that your Meraki Go access point can provide IP addresses for the rest of the devices on your network, allowing them to reach the internet.
When will Auto NAT Mode Occur?
Auto NAT mode will become enabled if your Meraki Go access point does NOT have an IP in the following ranges:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
You can view the IP address of your Meraki Go access point by going to the Hardware tab > select the device > scrolling down to LAN IP
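The three ranges above are exactly the RFC 1918 private address blocks, so you can predict whether a device will fall into Auto NAT mode with a short Python check. This is a standalone sketch, not part of any Meraki tooling, and the function name is ours:

```python
import ipaddress

# The RFC 1918 private ranges listed above; an access point whose
# LAN IP falls outside all of them triggers Auto NAT mode.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def auto_nat_expected(lan_ip: str) -> bool:
    """Return True if Auto NAT mode would be enabled for this LAN IP."""
    ip = ipaddress.ip_address(lan_ip)
    return not any(ip in net for net in PRIVATE_RANGES)

print(auto_nat_expected("192.168.1.24"))  # False: private IP, bridge mode stays available
print(auto_nat_expected("203.0.113.9"))   # True: public IP, Auto NAT is enforced
```

Note that 172.16.0.0/12 covers exactly 172.16.0.0 through 172.31.255.255, matching the second range above.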
Device to Device Communication in Auto NAT
What does it do?
Normally, a network configured in NAT mode will limit communication between wireless clients connected to the network, and only allow them to communicate with non-wireless clients on your network, or the internet.
Device to device communication gives you the ability to configure a Meraki Go network to allow wireless clients to communicate with each other (a good example of needing this may be a pair of wireless speakers, or a wireless printer).
Why don't I see the option to configure it?
The ability to configure Device to device communication will only occur if one or more of your Meraki Go access points are configured in Auto NAT mode. More information about Auto NAT mode, and what it is, can be found here.
When not in Auto NAT mode, you will not see this configuration option. This is because if you desire communication between wireless clients on your network in that instance, it is recommended to use Bridge mode.
Where can I configure it?
Device to device communication can be configured by selecting Networks > selecting the network you want to configure > Settings > Advanced settings > Device to device communication.
5G, also called the fifth generation of wireless cellular networks, will offer new opportunities to all walks of life, including health, supply chain, agriculture, etc. Similarly, emerging technologies such as IoT (Internet of Things), Artificial Intelligence (AI), and others will continue to evolve with 5G in the future.
Meanwhile, hackers and other cybercriminals keep an eye on vulnerabilities in developing technologies like IoT and Artificial Intelligence (AI) as a new window of opportunity for widespread cyber-attacks.
Therefore, better cybersecurity practices have become a need of the hour to secure our 5G future. Likewise, it is the right time to understand the cybersecurity 5G network provides and where it needs to improve.
What is 5G, and How does it Work?
Each of the previous four generations was responsible for improving connectivity levels. The role of 5G, however, is a broader one: it works alongside 4G to enhance mobile broadband access and will eventually replace it altogether.
5G is designed to transform our daily lives by offering impressive download speeds, low latency, and much-needed connectivity for billions of devices. It transmits tons of data and uses less power than 4G LTE over shorter distances.
5G Cybersecurity Concerns
The application of 5G technology in different industries can increase hacking and other cyber risks considerably. Here is the list that includes some of the significant concerns related to 5G:
Before the advent of 5G, it was easier to perform security checks because older networks had limited traffic points of contact. 5G's software-based systems have many more traffic routing points, which makes monitoring far more difficult. There is a strong likelihood that any unsecured area can compromise other parts of the network's security.
More Bandwidth will Influence Existing Security Monitoring
There is no denying that current networks are limited in space and speed, but they help professionals monitor security in real-time appropriately. As far as the 5G network goes, the expanded network feature may cause severe cybersecurity issues. Hence, security teams will need to adapt to new methods when it comes to combating cyber threats.
Lack of Security in IoT Devices
There are only a handful of providers that take cybersecurity seriously while developing IoT devices. As more and more devices are allowed to connect via 5G networks, the chances of network theft rise to a new level.
Smart TVs, refrigerators, and door locks are some of the premier IoT devices that can cause network breaches.
Lack of encryption is another problem related to IoT devices that can, unfortunately, expose device information to hackers. This way, they can discover how many devices are connected to the network, along with information such as the operating system and device type (mobile phone, laptop, modem, vehicle, etc.), allowing them to attack with greater accuracy.
Future of 5G and CyberSecurity
During the year 2020, the application of 5G will rise, since it has all the right attributes to change the world. Moreover, it will surely provide an entirely new mobile experience to its users. That said, tech developers will have to make sure that cybersecurity issues do not undermine the effectiveness of 5G technology worldwide.
5G Security Foundations should be Enhanced
Developers must devise sound software protection strategies that help them cater to the uncommon risks of 5G. In addition to this, they should interact with cybersecurity organizations to counter privacy risks like hacking, data theft, privacy invasion, and more.
Manufacturers should Improve their Security Efforts
Manufacturers of lower-end products like kids' smart baby monitors, smartwatches, and others should be given incentives if they focus on improving their consumer protection practices in terms of cybersecurity. However, the cost of creating and applying secure tech should also be reduced, since high costs discourage manufacturers from investing in cybersecurity.
Similarly, they should receive benefits that help them overcome their bottom-line losses significantly.
How should Users prepare themselves for 5G?
End users are the ones who will ultimately either suffer or benefit from the combination of 5G and IoT devices. Thus, they should follow the suggestions below to secure their privacy when using IoT devices on 5G networks:
Use a VPN
Millions of people have turned to remote working options due to the COVID-19 pandemic. For instance, 10% of Kiwi workers perform their professional tasks from their homes, and this trend will continue to grow as the government has supported its citizens’ work from home decisions.
But workers will have to rely on a reliable VPN in New Zealand if they want to shield their official data from the prying eyes of hackers and other cyber goons. If they do not take precautionary steps, they can significantly compromise their privacy when using their IoT devices. Consequently, they can become easy targets for hackers.
They should install antivirus software like Avast, McAfee, and Kaspersky to safeguard their devices from becoming infected.
5G, without a shadow of a doubt, will play a critical role in transforming numerous industries in the near future. However, all the stakeholders will have to consider the different cybersecurity issues related to 5G closely. Otherwise, end users will pay the price rather than reaping the rewards of combining this impressive technology with IoT devices.
Many options exist for setting up highly available data storage through a clustered file system, but figuring out what each option does will take a bit of research. Your choice of storage architecture as well as file system is critical, as most have severe limitations that require careful design workarounds.
In this article we will cover a few common physical storage configurations, as well as clustered and distributed file system options. Hopefully, this is a good starting point to begin looking into the technology that will work best for your high availability storage needs.
Underlying Clustered File Architectures
Some readers may wish to configure a cluster of servers that simply have concurrent access to the same file system, while others may want to replicate storage and provide both concurrent access and redundancy. There are two ways to go about providing multiple servers access to the same disks: Let them both see it, or do it through replication.
Shared-disk configurations are most common in the Fibre Channel SAN and iSCSI worlds. It is quite simple to configure storage systems so that multiple servers can see the same logical block device, or LUN, but without a clustered file system, chaos will ensue if both try to use it at the same time. This problem is dealt with by using clustered file systems, which we will cover in a moment.
Generally speaking, shared disk setups have a single point of failure: the storage system. This is not always true, however, as "shared disk" is a confusing term with today's technology. SANs, NAS appliances and commodity hardware running Linux can all replicate the underlying disks in real time to another storage node, which provides a simulated shared disk environment. Since the underlying block devices are replicated, the nodes have access to the same data and both run a clustered file system, but this replication breaks the traditional shared disk definition.
“Shared nothing,” in contrast, was the original answer to shared disk single points of failure. Nodes with distinct storage would notify a master server with changes as each block was written. Nowadays, shared nothing architectures still exist in file systems like Hadoop, which purposely creates multiple copies of data across many nodes for both performance and redundancy. Also, clusters that employ replication between storage devices or nodes with their own storage are also said to be shared nothing.
You cannot access the same block device via multiple servers, as we noted. You always hear about file system locking, so it’s strange that normal file systems cannot handle this, right?
At the file system level, the file system itself is locking files to protect the data against mistakes. But at the operating system level, the file system drivers have full access to the underlying block device, upon which they are free to roam. Most file systems assume that they are given a block device, and it’s theirs and theirs alone.
To get around this, clustered file systems implement a mechanism for concurrency control. Some clustered file systems will store metadata within a partition of the shared device, and some choose to utilize a centralized metadata server. Both allow all nodes in the cluster to have a consistent view of the state of the file system, to allow safe concurrent access. The model with the central metadata server, however, is sub-optimal if your goal is high availability and eliminating single points of failure.
One other note: The clustered file system model requires swift action when a node does something wrong. If a node writes bad data or stops communicating its metadata changes for some reason, other nodes need to be able to “fence” off the offender. Fencing is accomplished in many ways, most often using lights-out management interfaces. Healthy nodes will Shoot The Other Node In The Head (STONITH), or yank its power, at the first sign of inconsistency to preserve the data.
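The fencing idea can be made concrete with a toy "fencing token" sketch in Python. This is an illustration only (real clusters fence at the power or storage-fabric level, as described above), and every name in it is invented:

```python
import itertools

class LockManager:
    """Toy centralized lock manager illustrating fencing with
    monotonically increasing tokens."""
    def __init__(self):
        self._tokens = itertools.count(1)

    def acquire(self, node):
        # Grant the lock and a fresh token; a real manager would also
        # track leases and expire holders that stop responding.
        return next(self._tokens)

class SharedStorage:
    """Storage that rejects writes carrying a stale fencing token."""
    def __init__(self):
        self.highest_token = 0
        self.blocks = {}

    def write(self, token, key, value):
        if token < self.highest_token:
            raise PermissionError(f"stale token {token}: writer is fenced")
        self.highest_token = token
        self.blocks[key] = value

mgr, disk = LockManager(), SharedStorage()
t_a = mgr.acquire("node-a")             # node-a holds the lock (token 1)
t_b = mgr.acquire("node-b")             # node-a presumed dead; node-b takes over (token 2)

disk.write(t_b, "block-7", "new data")  # accepted: newest token
try:
    disk.write(t_a, "block-7", "zombie write")  # node-a wakes up and retries
except PermissionError as err:
    print(err)                          # stale token 1: writer is fenced
```

The point is the same as STONITH: once the cluster has moved on, a misbehaving node must be prevented from touching shared data, whether by cutting its power or by refusing its stale credentials.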
Best Clustered File Systems
- GFS: Global File System. GFS, available in Linux, is the most widely used clustered file system. Developed by Red Hat (NYSE: RHT), GFS allows concurrent access by all participating cluster nodes. Metadata is generally stored on a partition of the shared (or replicated) storage.
- OCFS: Oracle (NASDAQ: ORCL) Clustered File System. OCFS is conceptually very much like GFS, and OCFS2 is now available in Linux.
- VMFS: VMware's (NYSE: VMW) Virtual Machine File System. VMFS is the clustered file system that ESX Server uses to allow multiple servers access to the same shared storage. This makes virtual machine migration (to different servers) seamless, as the same storage is accessible at the source and destination. Journals are distributed, and there is no single point of failure between the ESX servers.
- Lustre: Sun's (NASDAQ: JAVA) clustered, distributed file system. Lustre is designed to work with very large clusters containing thousands of nodes. Lustre is available for Linux, but its applications outside the high performance computing circle are limited.
- Hadoop: a distributed file system, like Google (NASDAQ: GOOG) uses. This is not a clustered file system, but rather a distributed one. We include Hadoop because of its rising popularity, and the wide array of storage architecture design decisions that can take advantage of Hadoop. By default, you will have three copies of your data on three different nodes. Changes are replicated to each, so in a sense it can be treated as a clustered file system. Hadoop does, however, have a single point of failure: the name node, which keeps track of all file system level data.
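Hadoop's three-copy default can be illustrated with a toy placement function. This is not HDFS's real rack-aware placement policy; the node names and hashing scheme below are invented purely to show the idea of pinning each block to a fixed set of distinct nodes:

```python
import hashlib

NODES = ["node-1", "node-2", "node-3", "node-4", "node-5"]
REPLICAS = 3   # the default of three copies described above

def place_replicas(block_id, nodes=NODES, k=REPLICAS):
    """Deterministically choose k distinct nodes to hold a block."""
    digest = hashlib.sha256(block_id.encode()).hexdigest()
    start = int(digest, 16) % len(nodes)
    # Consecutive slots modulo the node count are always distinct
    # as long as k does not exceed the number of nodes.
    return [nodes[(start + i) % len(nodes)] for i in range(k)]

placement = place_replicas("blk_0001")
print(placement)   # three distinct nodes, stable across runs
```

Because placement is deterministic, any node can recompute where a block's replicas live; a real system additionally spreads copies across racks and rebalances when nodes fail.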
Choosing a Clustered File System
Having too many choices is never a bad thing. Your implementation goals will dictate which clustered or distributed file system and storage architecture you choose. All of the mentioned file systems work very well, assuming they are used as intended.
Article courtesy of Enterprise Networking Planet
Follow Enterprise Storage Forum on Twitter
The name Bombe generally refers to a device that British cryptologists used to decipher encrypted German military communications during World War II.
The Bombe was used to reveal some of the settings of the Germans' Enigma machine, which was used for the encryption. With some basic understanding of the Enigma device's workings, the British and Allied forces were able to substitute, omit, and reverse engineer the methods their German counterparts used to encipher sensitive information of a strategic nature. By some estimates, the success of this British military and academic decryption effort using the Bombe, its successive iterations, and syndicated Allied devices shortened WWII by as much as two years.
Many people contributed to the thought and production behind the Bombe. The device is most often attributed to the groundbreaking, foundational work of the Polish cryptologist Marian Rejewski of the Biuro Szyfrów (Cipher Bureau), who devised and constructed the first successful machine of its kind, known as the "bomba". The design then benefited materially from the work of the British multidisciplinary scientist Alan Turing at the UK Government Code and Cypher School (GC&CS), Bletchley Park, and from improvements by the British mathematician Gordon Welchman, who worked there as well.
The Bombe and the Turing-led British project were popularized in the 2014 film The Imitation Game, starring Benedict Cumberbatch.
“In WWII cryptographers, mathematicians, and other scientists responded to the Allies’ call to break Germany’s use of Enigma to encrypt sensitive military communications. From their project came the creation of the Bombe decryption device, negating the Germans’ longtime advantage of surprise and secrecy.”
A History of Symbolism
The heart shape we know today has been in use for at least 800 years. Cursory research on usage of the heart symbol reveals four hearts on a bible held by Jesus in the Empress Zoe mosaic in the Hagia Sophia. The heart symbol persists through the Sacred Heart devotion within the Roman Catholic faith, in which the heart was a symbol of Jesus's love and peace. The symbol appears frequently in Renaissance, Far East, and eventually Western painting, sculpture, and pottery. More recently, the heart symbol represents the vitality of a hero clad in green bearing a wooden sword.
The shape of the heart symbol has changed only slightly since the early 13th century, but the meaning has remained intact. The heart is a symbol of love, most often romantic love, but love in a broader sense as well.
Modern Love

800 years of a direct correlation between the heart symbol and the concept of love; that's a hell of a legacy to carry into the 21st century. In fact, I argue that the symbolism behind the heart and the image itself cannot be separated. The heart doesn't symbolize love; the heart is love.
Now, if you'll forgive me, I'd like to hate on Facebook for a paragraph or two. Because Facebook is the harbinger for the end of meaningful interpersonal relationships and the death of free and courageous expressions of humanity. I don't believe this is hyperbole, either.
Facebook devalues the meaning of "friend" and "like" to the point where these terms would not be recognizable to 20th century humans. Friend now means someone who has a page on Facebook that you find agreeable for any reason, no matter how trivial. Friend no longer implies a personal, emotional connection between two people. In the same manner, like has been bastardized from its previous meaning of "to express personal interest in a person, place, or thing." Contemporary descriptive definitions of like skew towards "to express passing, fleeting, and temporal favor in a person, place, or thing, usually as a means to signify personal preference." Friend isn't friend, and like isn't like. Facebook is an awful, awful place.
With friend and like forever ruined, it's only fitting that Facebook (through Instagram), and now Twitter, have clandestinely agreed to morph the meaning of love by saturating our social media feeds with the heart symbol. Flick through IG, and throw hearts in the direction of #destroyedplates, #nofilter, and #tbt photos. And now, Twitter has equated the heart symbol with "like" in our timelines. So many hearts, so little emoted love.
The majority of my actions on Twitter were favorites. I'd scroll through while on a call, or while walking to my car, or while doing any number of mindless activities, and I'd throw a star to tweets that I found amusing, or relevant, or important, or indicative of the online persona I wanted to project to the world. In some ways, my collective favorites were representative of my interests, perhaps my entire being.
But make no mistake: I do not love any of the content I see on Twitter. I don't love funny tweets from @manwhohasitall. I don't love thought-provoking articles from @nytimes. I don't love the latest posts from the technology vendors whose products have enabled me to build an entire career. I don't love any of these things.
I love my family. I love my wife, my boys, my baby girl. I love old friends with whom I've traveled the world and lived to tell the tale. I love the thought of growing old in the mountains. I love myself. I reserve the use of the word love for the things that I, you know, love. And because love is wrapped up in the symbolism of the heart icon, I can't just spray hearts all over the Twitterverse.
My activity on Twitter will surely be reduced with this change. But I'm not such a curmudgeon that I expect to throw a fit and have Twitter reverse its decision. The heart is likely here to stay, the star is gone forever. I'll just quietly lose interest, as I did with Facebook all those years ago.
Truth be told, I'll be better for it.
This chapter describes the key concepts supported by an Object-Oriented Programming System (OOPS), and how they are implemented by Micro Focus Object COBOL. Micro Focus Object COBOL is based on the proposed ANSI standard for OO COBOL.
OO programming introduces the following new ideas and concepts:
You need to understand how these have been implemented in Object COBOL to use it as an effective OO language.
The following sections describe each OO concept in turn, and how it is implemented in Object COBOL. Reading this chapter is not enough to teach you good OO programming; for more information about learning OO consult the section Learning More About Object-Orientation in the chapter Introduction to Object COBOL.
An object is an encapsulation of data and the procedures to operate on that data. The data is known as the object's attributes, and the procedures are known as its methods. Every object in an object-oriented application has a unique object identifier, allocated to it at creation and fixed for its lifetime.
Many of the objects in an OO application represent objects in the real world. For example, a banking system would include objects to represent customers, accounts and ledgers. The attributes of an account would include the balance, and its methods would include Debit, Credit, GetBalance. Figure 2-1 shows two ways of representing such an object.
Figure 2-1: An object
The user of an object can only find out about or change its attributes by making requests to the object. These requests are known as messages, and each message invokes a method supported by the object. The object interface is a description of all the messages to which the object responds.
To find out the balance of an account, you would send an account object the message GetBalance. The actual representation of the data is known only to the account object. As long as the object interface remains the same, a programmer can change the internals of how the object represents and operates on data, without affecting the rest of the system. It is up to the GetBalance method to ensure that it always provides data in the expected format.
A class is a template for creating objects; it embodies all the information you need to create objects of a particular type. An account class creates account objects and a ledger class creates ledger objects. An account object is said to be an instance of the account class.
In Object COBOL, a class is an Object COBOL program, which consists of a set of nested programs. At run-time, when the class is loaded, the run-time system creates a class object, which represents the class. The class object enables you to create instances of the class.
In the Object COBOL class, the outer program contains all the attributes and methods specific to the class object. The object program contains the attributes and methods specific to each instance object. Figure 2-2 shows a class diagrammatically.
Figure 2-2: A class program
Why does the class object have separate methods and data to the objects it creates? A class object is an object itself; it provides a set of services and may hold attributes. Although a class is an object, it does not have the same behavior as the instances it creates.
A class is like a printer's plate, printing identical forms. A plate enables you to print a form, but is not a form itself. The Object COBOL class program contains the code and data for both the class and the object. The Object COBOL run-time enables you to create many instances of the class (see Figure 2-3).
Figure 2-3: Creating instances of a class
Each instance is like an independent program in your run-unit, although only the object data exists separately in memory for each object. The code is shared between the instances (see Figure 2-4).
Figure 2-4: Code is shared by all instances
Methods are the code which implement the behavior of an object. In Object COBOL, each method is a separate program nested within the class or object program. An object method can access its own data, the instance data and data declared in the class program data division (except for class data). A class method can access its own data and the class data.
Note: The implementation of class data and instance data is different between Micro Focus Object COBOL and the proposed ANSI standard for OO COBOL. ANSI uses the working-storage section for object data, whereas Micro Focus instances can access class working-storage and use it for instance initialization data.
Inheritance enables you to reuse code by identifying common characteristics between classes. For example, in our banking example there are likely to be several types of account; for instance, savings accounts, high rate accounts and checking accounts. There are some features that all these accounts have in common; a balance attribute, methods Credit and GetBalance. But there are also differences; a savings account pays interest, while a checking account may allow overdrafts.
An Account class implements all the methods and attributes common to all its subclasses. The subclasses implement the attributes and methods which they uniquely require. In this example, the Account class is an abstract class; it only provides behavior for subclasses and you never create actual instances of the Account class.
A deposit account implements its own version of Debit which does not allow overdrafts. Figure 2-5 shows a possible inheritance hierarchy for Bank accounts.
Figure 2-5: Bank account inheritance
Inheritance also provides easy updates to the system later on, by adding new subclasses as required. For example, the bank might introduce two new types of savings account; one with instant access and a high interest account where you had to give notice of a withdrawal. These could be subclassed from the Savings Account class, with additions to provide the new behavior required. Because they still respond to the message interface used by all account objects, the changes do not ripple through the rest of the system.
Object COBOL is supplied with a Class Library, in which all classes are ultimately descended from class Base. Base provides methods which are required by all classes, which include methods for creating and destroying objects. The classes you write are also likely to be subclasses of Base. You can create your own root class and subclass from that, but you may need to duplicate some of the services provided by Base.
In Object COBOL, a subclass has access to all the class methods of its superclasses. An instance of a subclass has access to all the object methods of its superclasses. Data can also be inherited, depending on the clauses in the class headers.
Polymorphism is an important part of any object-oriented programming system. Polymorphism means that the same message sent to different objects can invoke different methods.
For example, consider a graphics drawing system which has objects representing squares and objects representing circles. The methods for drawing squares and circles are different, but the display controller can send the message Draw to any graphical object, without caring what type it is. The receiver of the message will execute its own Draw method, producing the correct result (see Figure 2-6).
Figure 2-6: Polymorphism
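The same idea is easy to demonstrate outside COBOL; here is a minimal Python analogue of the Draw example (the class and method names are illustrative, not Object COBOL syntax):

```python
# Two unrelated classes respond to the same "message" (method call),
# and each executes its own code: this is polymorphism via dynamic binding.
class Square:
    def draw(self):
        return "drawing four equal sides"

class Circle:
    def draw(self):
        return "drawing a curve at a constant radius"

# The sender does not care which concrete type it holds; the receiver
# of the message determines which method runs.
shapes = [Square(), Circle()]
results = [shape.draw() for shape in shapes]
print(results)
```

As in the graphics system above, the caller sends Draw to every object in the list without knowing or caring about each object's type.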
As already explained, a message is the way you request an object to perform a service. A message always consists of the following:
Messages may also optionally contain input and output parameters. Where there is an output parameter, the sender will be expecting a reply to its message.
The object reference enables the run-time system to find the object for which the message is intended. The target of a message is known as the receiver. The Object COBOL run-time system uses dynamic binding to determine the receiver of a message. This means that the receiver of a message is determined at run-time rather than at compile-time.
The method selector is the text of the message; it tells the receiver which method it should invoke. Object COBOL uses a new verb, invoke, to send messages. Figure 2-7 shows the components of a message both diagrammatically and in Object COBOL syntax.
Figure 2-7: A message
Copyright © 1999 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
Distributed denial-of-service (DDoS) is a cyber-attack that causes mass disruption of services. From 1996 (when the first reports about DDoS attacks emerged) to 2010, threat actors used DDoS mainly to promote themselves or political agendas and to encourage social change; in recent years, the financial motive has been more prevalent, and more DDoS actors have made extortion a major part of their strategy. In addition, prior to 2020, DDoS actors usually sent empty threats and did not follow up with attacks; since the second half of 2020, however, actors have made good on their threats and have followed up with attacks more frequently.
Although threat actors have monetised DDoS threats and attacks in the past, we believe that popularisation of cryptocurrency, willingness of some organisations to meet extortion demands (as was seen in the ransomware attack on Colonial Pipeline) and affordability of DDoS as a service (DDoSaaS) have encouraged threat actors to pursue these kinds of activities.
DDoS extortion campaigns typically follow one of two kinds of attack chains:
When planning for DDoS mitigation, organisations should consider not only their business obligation to keep services running but also the amount of service disruption they and their customers can tolerate. The Australian Cyber Security Centre provides some basic guidance on steps organisations can take to reduce the likelihood and potential impact of a DDoS attack:
By: Ali Sleiman, Technical Director MEA at Infoblox
Massive data breaches and damaging cyberattack reports have become common in today’s cybersecurity landscape. New threats emerge constantly as cyberattackers adapt their attacks to bypass traditional security measures and avoid detection. Unfortunately, no target is too small as cyber threats increase in volume. So, why should you consider a multilayer cybersecurity process? Let’s find out.
What is multilayered security?
Multilayered security refers to security systems that protect the most vulnerable areas of your network where cyberattacks or breaches may occur using multiple components. The purpose of a layered security approach is to ensure all the individual components of your cybersecurity strategy are backed up to counter any gaps or flaws.
The layers work together to build a strong foundation for your cybersecurity plan and bolster your defenses. It’s important to ensure that your security approach shields each layer so that private data remains safe.
How does a layered security approach work?
There are different components of a layered security approach used in managing security vulnerabilities. Multilayer security secures information effectively and prevents it from being breached by malicious individuals and hackers. Each shielding component incorporated in the layering process has specific functions that deal with various types of threats to create a safe network.
The layers prevent criminals from accessing the protected network. Note that in an organization, a multilayered security approach focuses on the breaches that result from information security threats. The approach provides the organization with the right tools to defend the organization’s network by applying multiple security solutions.
It also helps businesses evaluate the potential impact of threats and implement policies and actions to fight the threats to curtail the impact.
Protect your network with these essential security layers
Multilayered security can help you protect your business-sensitive data, employees, and network. Here are crucial security layers you should put in place.
- Managed detection and response (MDR)
Managed Detection and Response is an advanced security solution that combines a 24/7 security operations center and next-generation monitoring software to identify and isolate suspicious activity on your network in real-time and detain confirmed threats immediately to prevent spread.
MDR functions as the alarm system that alerts you when a breach occurs and the security camera that catches the criminals sneaking onto your network if your preventive measures fail.
- Dark web monitoring software
The dark web is home to illegal activities like the sale of personally identifying and sensitive data stolen during information breaches. Credentials of employees are a best-seller on the dark web and are used by cyberattackers to access a company’s private information, install malware, send email spam, and more.
Dark web monitoring software scans the dark web for passwords and email addresses associated with your organization's domain to help you discover and prevent these vulnerabilities before they are exploited by criminals.
- Business continuity and disaster recovery (BCDR)
BCDR solutions can mitigate the damage and downtime associated with cyberattacks, allowing you to restore your operations and information from a backup. However, be sure to:
- Isolate your backups to prevent them from being accessed and encrypted if your network is breached.
- Document, test, and regularly update your business continuity plan.
- Phishing simulations and security awareness training
Can your workers identify phishing emails if they slipped into their inbox? Training on cybersecurity best practices is crucial when building your security layers to help your employees spot suspicious emails and other cyber scams that may threaten your network’s security.
You can reinforce the training through periodic phishing simulations to test your workers on their vigilance in spotting phishing emails to strengthen your defenses.
- Multi-factor authentication (MFA)
Enabling multi-factor authentication can help you minimize cyberattack risks. MFA requires different forms of verification to access corporate networks, accounts, or applications. For example, you may be required to enter a one-time code sent via push notification or text message after entering your password.
These additional authentication requirements prevent criminals from exploiting compromised or weak end-user passwords, making it difficult to access your network.
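For illustration, the one-time codes behind most authenticator apps can be generated with nothing but the standard library. This sketch follows RFC 6238 (TOTP with HMAC-SHA1); the Base32 secret shown is a made-up example:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, interval=30, now=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset derived from the digest.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a fresh 6-digit code every 30 seconds
```

Because the code depends on both a shared secret and the current time, a stolen password alone is not enough to log in.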
- Email filtering
Cyberattackers know it only takes one click on a malicious link for end-users to grant access unknowingly to their entire company’s network. Filtering emails at the gateway is crucial as it minimizes this risk and helps protect your employees and company from email-related cyber threats such as viruses, phishing attacks, malware, ransomware, and business email compromise.
- Endpoint protection
Each device connected to your networks such as security cameras, computers, printers, copiers, and smart devices is a potential entry point for cybercriminals. All of these entry points, referred to as endpoints should be included in your company’s cybersecurity plan.
- Firewall
A firewall is the first line of defense in your organization's network security. It monitors outgoing and incoming traffic based on certain rules. The SonicWALL Security Appliance firewalls, for example, act as a barrier between an untrusted and a trusted network, only allowing into your network traffic that the security policy has defined as safe.
Employing a multilayered security approach is essential as new threats emerge daily. Implementing these cybersecurity layers will mitigate your risks and help you build cyber resilience, putting your company in the best position to prepare for, and recover from cyberattacks.
In psychology, Capgras delusion is the (unfounded) thought that some close person has been replaced by an identical impostor. Under the spell of this delusion, people feel that something “isn’t right” about the person they know. Everything looks as it is supposed to but it just doesn’t feel seemly.
Whenever I receive what turns out to be a "good" phishing email, I get much of the same experience. It looks legitimate but often something feels "off". I have seen so many attempts over the years that the small irregularities or suspicious arrivals trigger a warning signal. It's an anomaly in a familiar place.

However, we can't expect everyone to have the same experience in IT. That's why cybersecurity exists. We implement systems to protect users from malicious actors who will try anything to gain access or information from their victims. Manually noticing phishing or other malicious attempts takes experience. Often, emails, websites, and apps are crafted in such a way that they look almost identical. Yet, they are a little different.

There is a field that deals well with detecting such small differences – machine learning.
A bright future for CyberSec
I don’t even mean “the future” in the exact sense of the word. Machine learning (ML) has been showing off its muscles in cybersecurity for quite some time now. Back in 2018, Microsoft had stopped a potential outbreak of Emotet through clever use of ML in both local and cloud systems.
Philosophically, cybersecurity is a perfect candidate for machine learning as models are predictive. These predictions are derived from massive amounts of data (a common criticism of current ML). After the models are trained, they make predictions on data points that are very similar but not identical to the training data.
Most malicious attacks depend on a similar approach. They have to fool a human user in order to execute some actions. Clearly, they must look as similar to something legitimate as possible. Otherwise, it will be ignored even by those less tech-savvy.
Additionally, many new renditions of malware are somewhat simple mutations of the same code. Since we’ve been dealing with malicious code for several decades now, there’s enough of it out there to create good training sets for machine learning. We also have plenty of innocuous code for anything else we might need.
A common threat: Domain Generation Algorithms
Domain Generation Algorithms (DGAs) have been a long-standing threat for cybersecurity. Nearly everyone who has been in this field has had some experience with DGAs. There are numerous benefits for the attacker, making it a popular vector of attack.

One of the primary benefits of DGA attacks is that the perpetrator can flood DNS with thousands of randomly generated domains. Of those thousands, only one would be the real C&C center, resulting in significant issues for any expert trying to find the source. Additionally, since DGAs are, mostly, seed-based, the attacker can know which domains to register ahead of time.

A common and very popular way to generate seeds is to use dates. Obviously, it's very easy to predict which domains to register ahead of time.

However, DGAs produce URLs that look nothing like a regular website (with some exceptions, as randomness can lead to something that may seem like a pattern). For example, Sisron's DGA produced these samples:
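For illustration only — this is not Sisron's or any real family's algorithm — a date-seeded DGA can be sketched in a few lines:

```python
import hashlib
from datetime import date, timedelta

def dga(seed_date, count=5, tld=".com"):
    """Illustrative date-seeded DGA: hash the date plus a counter and keep
    only letters, producing random-looking labels like real DGA output."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed_date.isoformat()}:{i}".encode()).hexdigest()
        label = "".join(ch for ch in digest if ch.isalpha())[:12]
        domains.append(label + tld)
    return domains

today = date(2022, 9, 27)
# Both sides derive identical domains from the same seed: the malware
# resolves them, and the operator registers one of them in advance.
print(dga(today))
print(dga(today + timedelta(days=1)))  # tomorrow's batch is already known
```

Because the seed is just the date, a defender who recovers the algorithm can pre-compute the same domains and block or sinkhole them before they are ever registered.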
There are two important drawbacks. It makes it easier to find that something is amiss, even for someone outside of cybersec. Secondly, after just several domains, it's clear (to humans) that they are being generated.

However, here's the fun part – automating fool-proof (or even something close to that) DGA detection is unbelievably difficult. Rule-based approaches are prone to failure, false positives, or are too slow.
DGAs were (and in large part still are) a real pain for any cybersecurity professional. Luckily, machine learning has already allowed us to make great strides in improving detection methods – Akamai have developed their, allegedly, very sophisticated and successful model. For smaller players in the market, there are plenty of libraries and frameworks for the same purpose.
Machine learning for other avenues
Cybersecurity is in a fortunate position – there are millions of data points and more are produced every day. Unlike other fields, domain-specific data that can be easily labelled will continue being created for the foreseeable future.
Yet, if DGAs can be "solved" (I use this word with some caution) through machine learning, other attack methods almost definitely can be as well. A great application for machine learning is phishing. Outside of being the most popular vector of attack, it's also the one that prominently uses impersonation and fabrication.

Every (good) phishing website (and email) looks a lot like it's supposed to. However, there will always be some discrepancy – an unusual link here, a grammatical error there; there's always something waiting to be found.
After some extensive logistic regression model training, such a tool should be able to output a phishing-probability and assign a specific website to a class. While acquiring data for these models might be a little challenging, there are some public sets available (e.g. PhishTank, used by authors of the study) to ease the process.
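As a toy illustration of that approach, the sketch below trains a tiny logistic regression by hand (no external libraries) on made-up URLs with a few crude features; real models, such as those trained on PhishTank data, use far richer feature sets:

```python
import math

def url_features(url):
    # Hand-picked toy signals; real detectors use many more features.
    return [len(url) / 50.0, url.count("-"), float("@" in url)]

def train(samples, labels, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * (len(samples[0]) + 1)            # bias + one weight per feature
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid -> probability
            err = p - y
            w[0] -= lr * err
            for i, xi in enumerate(x):
                w[i + 1] -= lr * err * xi
    return w

def phishing_probability(w, url):
    x = url_features(url)
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Tiny made-up training set: label 1 = phishing, 0 = legitimate.
urls = [
    ("https://example.com/login", 0),
    ("https://news.example.org/article", 0),
    ("http://secure-example-login.account-verify.example/x@id", 1),
    ("http://example-security-update.example-check.example/confirm", 1),
]
w = train([url_features(u) for u, _ in urls], [y for _, y in urls])
print(round(phishing_probability(w, "http://example-login-verify.example/@a"), 2))
```

The output is exactly the phishing probability described above; a deployment would then threshold it to assign the URL to a class.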
The applications I have mentioned are just a quick skim over the surface. Machine learning models can be applied to probably every sphere in cybersecurity. Malware detection, OSINT, email protection and others can be tackled effectively through proper use of ML.
Thus, cybersecurity is in a unique position. The broad nature of cyberattacks generates large amounts of data that is the foundation for ML-based solutions. It will not solve everything, especially highly tailored attacks, but it will raise the bar that attackers need to overcome immensely. Therefore, cybersecurity should be thought of as the avant-garde of machine learning applications.
We’ll be discussing more about the ML-based solutions in a free annual web scraping conference Oxycon – the agenda involves both technical and business topics around data collection. Free registration is available here.
Author: Juras Juršėnas, COO at Oxylabs.io.
Can digital help us to reduce waste and gain greater efficient use of the materials that we have on hand and enable progress toward a circular economy? It has been said that a key barrier to increased reuse, refurbishment, and recycling of goods and materials is a lack of information. In other words, if you don’t have a full understanding of the provenance and condition of an item, you won’t be able to accurately estimate the degree to which that item or its components might be reusable, or the effort and cost to refurbish or recycle it compared to the value of the refurbished or recycled item. Digital brings us new capabilities to sense, analyze and trace the composition, quality and provenance of an item, driving higher confidence and increased propensity toward reuse, resulting in cost savings and efficiency. This has design implications as well, as increased propensity toward reuse will increase the value of products designed for reuse. This will make “design for reuse” as a competitive differentiator. In other words, digital will be a critical enabler of improved Environmental, Social and Governance (ESG) performance, and companies should begin to incorporate circular economy objectives into their digital strategies now.
Rivest Cipher 4, or RC4, is a stream cipher created in 1987. A stream cipher is a type of cipher that operates on data a byte at a time to encrypt that data. RC4 is one of the most commonly used stream ciphers, having been used in the Secure Socket Layer (SSL)/Transport Layer Security (TLS) protocols, the IEEE 802.11 wireless LAN standard, and the Wi-Fi security protocol WEP (Wired Equivalent Privacy). RC4 owes its popularity, relative to other stream ciphers, to its ease of use and performance speed. Now, significant flaws mean RC4 is not used nearly as often as before.
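To make the byte-at-a-time idea concrete, here is a textbook RC4 sketch in Python. It is for study only; as the rest of this article explains, RC4 is broken:

```python
def rc4(key, data):
    """Textbook RC4 -- for study only; never use it to protect real data."""
    # Key-scheduling algorithm (KSA): permute the numbers 0..255 using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): one keystream byte per data byte.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"Key", b"Plaintext")
print(ciphertext.hex())
# Encryption and decryption are the same operation (XOR with the keystream):
assert rc4(b"Key", ciphertext) == b"Plaintext"
```

Because each plaintext byte is simply XORed with one keystream byte, applying the function twice with the same key recovers the original data.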
How secure is RC4?
RC4 was initially used in many applications, like SSL/TLS and WEP, until severe vulnerabilities were found in RC4 in 2003 and 2013. As RC4 was used in WEP, attackers had a chance to practice cracking it as often as they wished. With this practice, a flaw was found in RC4 where the encryption key used by RC4 could be cracked in less than a minute. RC4 keys can come in sizes of 64 or 128-bits, and the 128-bit key is able to be obtained in seconds. At the time, WEP was the only security protocol used for Wi-Fi, so the next phase, Wi-Fi Protected Access (WPA), had to be rushed for use.
Another vulnerability was discovered in RC4 in 2013 while it was being used as a workaround for a cipher block chaining issue that was discovered in 2011. Cipher block chaining is an operational mode used by block ciphers, which RC4 did not use. A group of security researchers found a way around RC4, with only a slight increase in processing power necessary in the previous RC4 attack. Due to these vulnerabilities, and other smaller ones found later, RC4 is no longer a cipher that is recommended to be used.
Variants of the RC4 cipher
There are 4 variants to the regular RC4 cipher:
- Spritz – Spritz is used to create cryptographic hash functions and a deterministic random bit generator.
- RC4A – This is a variant that was proposed to be faster and stronger than the average RC4 cipher. RC4A was found to have not truly random numbers used in its cipher.
- VMPC – Variably Modified Permutation Composition (VMPC) is a version of RC4 that was found to have not truly random numbers used in its cipher, like RC4A.
- RC4A+ – RC4A+ is an advanced version of RC4A that is longer and more complex than RC4 and RC4A, but is stronger as a result of its complexity as well.
Advantages and Disadvantages
RC4 boasts a number of advantages compared to other stream ciphers:
- RC4 is extremely simple to use, thus making the implementation simple as well.
- RC4 is fast, due to its simplicity, which makes it a better performing cipher.
- RC4 also works with large streams of data swiftly and easily.
Though it has advantages, RC4 has many disadvantages as well:
- The vulnerabilities found in RC4 means RC4 is extremely insecure, so very few applications use it now.
- RC4 cannot be used on smaller streams of data, so its usage is more niche than other stream ciphers.
- RC4 also does not provide authentication, so a Man in the Middle attack could occur, and the RC4 cipher user would be none the wiser.
|
<urn:uuid:eebf90c2-5bfe-43a3-9af6-af38bcce4173>
|
CC-MAIN-2022-40
|
https://www.encryptionconsulting.com/education-center/what-is-rc4/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00255.warc.gz
|
en
| 0.969522 | 744 | 3.296875 | 3 |
Having your own website is hard enough. In addition to adding content, trying to grow your audience, maintaining it, now you have to be cautious of malware possibly being spread through your beloved website?
According to Palo Alto Networks' recent report, The Modern Malware Review, “90 percent of Unknown Malware [is] Delivered Via Web-Browsing.”
This confirms that most web-based infections fly under the radar for several hours/days before being detected by major antivirus products.
In our previous blog posts, we've discussed how web exploits affect end users’ machines and serve malicious payloads.
Let's take a look behind the curtain on websites and web servers that house and serve malware and how to better protect your own website.
There are millions of websites and the process of getting your own is fairly trivial. Without a doubt the ease of deploying a website and relative low cost to own one is responsible for many of the security issues we face. While you may think of a website as a personal blog, e-commerce site or forum, the bad guys see it as a resource for many purposes:
- Hosting illegal/copyrighted files for free.
- Hosting malware, phishing and fake pharmaceutical pages.
- Sending spam (malware can run on a website just like it does on your desktop computer).
- Gaming Google’s SEO ranking algorithm by stuffing pages with backlinks.
- Performing Denial of Service attacks to knock other websites offline.
Finding the flaw(s)

The dominant web server software is Apache, which runs on Linux. While there is a widely accepted belief that Linux is more secure than Windows, web servers are constantly hacked into by attackers ranging from script kiddies to professional pentesters.
Let’s review some of the most common reasons why websites get hacked:
Stolen user credentials
You access your website by logging into a Control Panel or login page from your favorite blogging software such as WordPress. Occasionally, you may also use an FTP program to upload files. If malware is present on your computer, and it happens to be a keylogger or some other type of password stealer, everything you type, as well your configurations files, can be harvested and sent back to the bad guys. Similarly, logging into your website from a free Wi-Fi hotspot or insecure access point exposes your password.
Many site owners leave the default “admin” username and choose a password that is easy to guess. Attackers exploit this with a technique known as a brute force attack, which consists of trying out hundreds of thousands of passwords until one matches. If you use a typical dictionary word or a cute pet name, you might as well give the bad guys the keys to your house.
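The mechanics are simple enough to sketch in a few lines of Python; the password, hash, and wordlist below are all made up for illustration:

```python
import hashlib

def crack(target_hash, wordlist):
    """Hash each candidate until one matches the stolen password hash."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# A stolen hash of a "cute pet name" password falls immediately; real
# attacks run through wordlists with millions of entries.
stolen = hashlib.sha256(b"fluffy2010").hexdigest()
wordlist = ["123456", "password", "admin", "letmein", "fluffy2010"]
print(crack(stolen, wordlist))
```

A long random passphrase simply never appears in any wordlist, which is why it survives this kind of attack.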
Websites run multiple software programs in order to render pages, store customer data, etc. If those are poorly configured or outdated, a multitude of bugs can be exploited by a remote attacker to gain access to the system.
A very important aspect of Linux security is file permissions. However, it is a double-edged sword because, while if set properly, file permissions can make a site very secure, the opposite is true as well. Many people do not understand permissions well or simply disable them altogether in order to install a plugin that complained about restrictions. You can read more about file permissions in this blog post.
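On a Unix-like system you can inspect and tighten permissions programmatically; this Python sketch contrasts a wide-open mode (777) with a restrictive one (644):

```python
import os
import stat
import tempfile

# Mode 777 lets every local account (including a compromised one) rewrite
# the file; 644 restricts writes to the owner while others may only read.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o777)                 # the "just make the plugin work" setting
wide_open = stat.S_IMODE(os.stat(path).st_mode)

os.chmod(path, 0o644)                 # owner read/write, everyone else read-only
locked_down = stat.S_IMODE(os.stat(path).st_mode)

print(oct(wide_open), oct(locked_down))
os.remove(path)
```

Disabling restrictions to silence a complaining plugin is exactly the first case above, which is why it is so dangerous.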
A popular attack method that has allowed countless script kiddies to deface websites and spread malware is called Remote File Inclusion (RFI). It consists of tricking the web server into thinking it should open a file as if it actually resides locally by passing specially crafted parameters into the URL. The remote file could be anything, but hackers will use scripts, also known as “shells’” (more on that later), to gain access and perform administrative operations directly on the website.
Owning the box
The ultimate goal of an attacker is to completely own the target system, something known as being root. Unless you possess the administrative credentials, your local user account has very restricted permissions which do not allow you to alter core parts of the website to do as you please. Once again, web servers have enough vulnerabilities to exploit to elevate a user’s privileges to root.
For the most part, attackers will reuse publicly available exploits, perhaps tweak them a little bit to add their signature or a message to their friends. Even exploits that are several years old still work, which shows one of the big issues with website security: lack of security maintenance. There is also some confusion between the hosting provider and its customers regarding the patching process. Some providers will not perform upgrades for you, especially if you install your own Content Management System (WordPress, Joomla!, Drupal). There are many reasons for that:
- The cost: When you only pay a few dollars a month for web site hosting, your provider is not going to waste its time and money troubleshooting your site.
- The responsibility: If performing an update on your site breaks the database or other critical part, this is a pretty big responsibility to assume. Your hosting company is not a web development studio.
Prevention goes a long way
- Only administer your website from a device you trust is free of malware. If you aren’t sure, why not run our Malwarebytes Anti-Malware program?
- Do not administer your site from a free Wi-Fi hotspot (e.g., your local Starbucks). If you must, use a free or inexpensive VPN program to encrypt your connection.
- Keep your website up to date just like you would (or should) keep your computer patched up. If you use WordPress, the main dashboard will tell you when updates are available.
- If spending time to secure your site is not your cup of tea, you might want to pay a little more and do “managed hosting,” a turn-key solution where everything is taken care of for you.
- Use strong passwords and change them on a regular basis.
- Back up your site at least once a month.
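The backup advice above can be as simple as a datestamped archive of your web root. The sketch below assumes your site files live in a local directory; the path and sample file are made up for the demo.

```shell
# Minimal monthly-backup sketch: archive the web root with a datestamp.
# WEBROOT is an assumption for this demo.
WEBROOT=./site
mkdir -p "$WEBROOT"
echo '<?php echo "hello"; ?>' > "$WEBROOT/index.php"
# create a compressed archive named after the current year-month
tar -czf "site-backup-$(date +%Y-%m).tar.gz" "$WEBROOT"
# list the archive's contents to confirm the backup worked
tar -tzf site-backup-*.tar.gz
```

Storing a copy of that archive off the server matters too: a backup that lives only on a hacked box can be deleted along with everything else.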
Web malware is quite different from what we see on the Windows platform. For one, there are many more scripts (as opposed to compiled binaries), which can be written in Perl, PHP, Python or simply in bash, the popular Unix shell language. Such scripts are also known as “backdoors” or “shells,” since once uploaded onto a website (using, for example, the Remote File Inclusion we discussed earlier), they allow unfettered remote access.
A popular shell known as C99 lets attackers browse the entire website’s content directly from their browser:
Figure 1: C99 Shell: A Hacker’s Favorite
In addition, this shell lets you delete and add files, dump the database and even change file permissions.
In almost all cases involving a site hack, you will find a backdoor of some sort. It may not have a full graphical interface, but as long as it allows hackers remote access, that is more than enough to keep the site under their control. By nature, shells are very small in size and will try to hide in certain directories or, if that is not possible, will sit in folders that have laxer permissions, such as /images, simply because that was the easiest place to inject them.
How to recognize a backdoor
Accessing your files
If you are trying to hunt for malicious files, you will need to access your website internals. You can do so by FTP, SFTP or SSH. FTP is the old-school way of uploading files using a client like FileZilla or CuteFTP. I recommend using SFTP instead, which supports encryption (as opposed to sending out your login credentials in the clear with FTP). By far the best way to access your web server is using the command line terminal through SSH. Keep in mind that it requires a certain understanding of Linux commands and can seem a little overwhelming. Finally, you can of course browse your files using your web hosting company’s control panel (cPanel and Plesk, to name two).
Figure 2: The Plesk Control Panel
File name patterns/location
Although not a very reliable approach, searching for malicious shells by name can yield some good results. Many hackers will not bother renaming the backdoor they uploaded. So if you see a file called c99.php or r57.php (two very popular backdoors), you are pretty much guaranteed it is bad. Another trick the bad guys use is to rename those files with another extension such as “.txt,” so keep an eye open for those as well (i.e., c99.php.txt, r57.php.txt).
Looking at folders where plugins or images normally reside can be quite revealing if you search for files that have no business being in there.
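A name-based sweep like the one described can be scripted with find. The directory layout below is made up for the demo; in practice you would point the search at your real web root.

```shell
# Sketch: sweep a web root for well-known backdoor names (c99, r57) and
# for PHP files disguised with a .txt extension. The demo-site tree is
# a made-up fixture so the command has something to find.
WEBROOT=demo-site
mkdir -p "$WEBROOT/images"
touch "$WEBROOT/index.php" "$WEBROOT/images/c99.php" "$WEBROOT/images/r57.php.txt"
# case-insensitive match on the suspicious name patterns
find "$WEBROOT" -type f \( -iname 'c99.php*' -o -iname 'r57.php*' -o -iname '*.php.txt' \)
```

A legitimate file such as index.php is not reported; only the files matching the backdoor naming patterns show up.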
Figure 3: A Backdoor Hiding Among Images
File modification date
If your site was hacked recently but was fine say, a month ago, then you have something to work with: timestamps. Look for any file added or modified recently and treat it as suspicious.
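The timestamp check translates directly into a find command. The fixture below fakes one fresh file and one old file so the filter has something to separate; the directory name is an assumption for the demo.

```shell
# Sketch: list files modified within the last 30 days.
# recent-check is a made-up fixture directory.
mkdir -p recent-check
touch recent-check/fresh.php
# backdate one file so the filter has something to exclude
touch -t 202001010101 recent-check/old.php
# -mtime -30 selects files whose data changed less than 30 days ago
find recent-check -type f -mtime -30
```

On a real site you would compare the reported files against your own recent edits; anything you did not change yourself deserves a closer look.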
As we talked about earlier, file permissions and ownership are crucial to keeping a website secure. At the same time, many uploaded backdoors show up with unusual attributes, or attributes that are once again “out of place” compared with the files around them. So if you see a file with “777” permissions, it should instantly raise a red flag. Please refer to this article to learn more about file permissions and ownership.
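Hunting for that “777” red flag is another one-liner with find. The files below are a made-up fixture so the command has a hit and a non-hit to distinguish.

```shell
# Sketch: flag world-writable (mode 777) files, which rarely belong on a
# well-configured site. perm-check is a made-up fixture directory.
mkdir -p perm-check
touch perm-check/normal.php perm-check/shell.php
chmod 644 perm-check/normal.php   # typical permissions for a web file
chmod 777 perm-check/shell.php    # suspiciously permissive
# -perm 0777 matches files whose mode is exactly 777
find perm-check -type f -perm 0777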
A much more powerful method to identify backdoors is to search for patterns and strings within the files themselves. That is where knowledge of Linux commands such as grep comes into play. This, of course, relies on having a list of malicious strings or patterns that is kept up to date. In many ways, you could compare that to antivirus signatures and a malware database.
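A string search of that kind can be done with grep across the whole tree. The pattern list below is illustrative only, not a real signature database, and the two sample files are made up for the demo.

```shell
# Sketch: grep a web root for strings common in PHP shells.
# grep-demo and its files are a made-up fixture.
mkdir -p grep-demo
printf '<?php eval(base64_decode("aGk=")); ?>\n' > grep-demo/bad.php
printf '<?php echo "hello"; ?>\n' > grep-demo/good.php
# -r recurse, -l print matching file names, -E extended regex
grep -rlE 'eval\(base64_decode|FilesMan|r57shell|c99shell' grep-demo
```

Just as with antivirus signatures, the value of this approach depends entirely on keeping the pattern list current, and any hit still needs a human to confirm it.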
Figure 4: Yet Another Backdoor Called FilesMan
Log analysis
When all else fails, log analysis can be your best friend. Think of log files as the black box investigators recover to find out more about an accident. Logs contain traces of all events that happened on your website, sorted out by timestamp. There are two types of logs often mentioned: Apache’s access and error logs. Every time someone visits a page on your site, a record is created in your Apache’s access logs. The error logs show entries for commands that resulted in an error, often indicating malicious activity, such as trying to brute force a login page or performing a hack. As you may imagine, log files can get really large, which makes searching them a real pain. There are tools such as OSSEC that make this process a little easier.
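A brute-force attempt against a login page usually shows up in the access log as many POSTs from one address. The sketch below fakes a tiny combined-format log and counts login POSTs per client IP; the log entries and IPs are made up for the demo.

```shell
# Sketch: count login-page POSTs per client IP in a combined-format
# Apache access log. The log below is a made-up sample.
cat > access.log <<'EOF'
198.51.100.7 - - [01/Jun/2013:10:00:01 +0000] "POST /wp-login.php HTTP/1.1" 200 512
198.51.100.7 - - [01/Jun/2013:10:00:02 +0000] "POST /wp-login.php HTTP/1.1" 200 512
198.51.100.7 - - [01/Jun/2013:10:00:03 +0000] "POST /wp-login.php HTTP/1.1" 200 512
203.0.113.9 - - [01/Jun/2013:10:00:09 +0000] "GET /index.php HTTP/1.1" 200 1024
EOF
# field 6 is the method (with a leading quote), field 7 the request path
awk '$6 == "\"POST" && $7 == "/wp-login.php" {print $1}' access.log | sort | uniq -c | sort -rn
```

A single IP hammering the login page floats straight to the top of the count, which is the kind of signal tools like OSSEC automate at scale.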
If you are using your web host’s control panel, the logs may be located as illustrated below.
Figure 5: Control Panel Showing Apache Logs
Full website compromise
Injecting a backdoor on your site is just the first step in a long chain of events to follow. Once attackers control your server, they will want to carry out some sort of action. Note that most (if not all) site compromises are automated, meaning there is no human sitting at a terminal and hacking your particular site. Automated scripts are constantly probing the wire to hack anything that is vulnerable.
As mentioned before, your website can be leveraged to do all sorts of things. Let’s take a look at some of the most common motives and how it is done.
Spam (AKA pharma hack)
It seems spam is here for the long haul. Few people know that pharmaceutical spam also affects websites. While it may not be visible to site owners, pharma spam is definitely found by search engines. All of a sudden, your website is pushing fake drugs and other dubious products. Not only is this going to use up a lot of bandwidth, it will put the website on Google’s blacklist, with all the resulting consequences for search rankings.
Figure 6: A Legitimate Website Advertising Various Drugs
The pharma hack is very difficult to eradicate because it is often buried deep within the website’s files and even its database. For more information on the subject, I recommend this excellent free resource.
Using your website to distribute malware is probably the most popular motive behind a site compromise. A legitimate website already has traffic, perhaps even a good SEO ranking, and costs nothing for the hacker. Most notably, who is going to suffer the consequences if it gets blacklisted? You, and the hacker can just find another victim.
While some websites are used to host malware, the majority are simply part of a redirection chain, making it harder for the authorities to find the culprits. As such, the site’s main purpose is to redirect your legitimate visitors onto malicious sites. One way this is done is by hacking a core Apache configuration file called .htaccess. This file serves multiple purposes and can, in fact, be configured to keep the bad guys at bay. At the same time, it is the file of choice for hackers to plant their malicious redirection code. Most people who own a website will probably never have heard of .htaccess, that is, until the day they get hacked.
Figure 7: Malicious Redirect Code Within The .htaccess File
The picture above shows a typical redirection, also known as a conditional redirect. The condition is that the visitor to your site must come from one of the sites (mostly search engines) listed above. If the condition matches, the rule sends the visitor directly to the bad guys’ site. This little trick works really well to keep the redirect under the radar from you, the website’s owner, who will most likely always enter the URL to your site directly, without visiting a search engine to access it.
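A conditional redirect of the kind described often looks like the sample below. The file contents and the destination host are made up for the demo; the grep at the end is a quick way to audit an .htaccess for rules that send visitors off-site.

```shell
# Sketch: a made-up conditional redirect of the kind an attacker might
# plant, followed by a quick grep audit for off-site RewriteRule targets.
cat > htaccess.sample <<'EOF'
RewriteEngine On
RewriteCond %{HTTP_REFERER} (google|bing|yahoo)\. [NC]
RewriteRule .* http://malicious.example/ [R=302,L]
EOF
# flag any rewrite rule whose target is an absolute external URL
grep -nE 'RewriteRule .*https?://' htaccess.sample
```

Legitimate sites do sometimes redirect to absolute URLs, so a hit here is a prompt for review, not proof of compromise, but an external target you did not configure yourself is a strong sign of the hack described above.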
One other method to redirect traffic is to hack your site’s CMS (Content Management System). If you use WordPress or Joomla!, you know they require PHP, a universal server-side scripting language. The core PHP files can easily be injected with malicious content in the same fashion a Windows file can be infected with a virus and still work normally. In many cases the malicious code is inserted either at the top or the bottom of a page, but this is by no means a standard.
If you browse to your site and view its source code, you might see something like this:
Figure 8: Injected Code Used To Push Malware Found
If you access the infected file directly from your server (via FTP/Control Panel/SSH), you might see this:
Figure 9: A Malicious Obfuscated Script
The reason you see something different is that PHP, being a server-side scripting language, renders the page to the client (whoever browses your site from the outside) without exposing the underlying code.
Regardless, the bad guys love to use encryption or encoding, and in particular base64 encoding. The goal is to obfuscate the code enough to make it harder to know what it really does, especially if you were to do a string or pattern search. Base64 also has legitimate uses, so one cannot completely label it as bad without generating false positives. There’s a cool service offered by Sucuri that lets you deobfuscate base64 and other encodings; check it out here.
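Because base64 is an encoding rather than encryption, it is trivial to reverse, which makes decoding suspected strings a quick first triage step. The snippet below uses the coreutils base64 tool with a harmless made-up payload.

```shell
# Encoding is what the attacker does to hide the payload...
printf 'echo "owned";' | base64
# ...and decoding it is how you find out what it actually says
echo 'ZWNobyAib3duZWQiOw==' | base64 -d
```

Here the decoded string is just a harmless echo, but in a real incident the same one-liner frequently reveals an eval'd shell or a redirect URL.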
Rogue modules and rootkits
The bad guys want to establish a long and undetected presence on your website. Hacking the .htaccess file or your PHP files is too obvious and can be cleaned up too easily. For this reason, we are seeing more and more advanced malware binaries and even rootkits that are much harder to identify and remove.
Hacking an entire web server might require more work but also pays off in a big way. After all, why would bad guys waste time hacking one website at a time when they can control hundreds or even thousands by compromising the single server box where they are all hosted?
In recent news, we have heard of such compromises affecting Apache with, for example, the Linux Cdorked backdoor. What we are seeing are more sophisticated website compromises with on-the-fly malicious payloads delivered only once to the same victim, with advanced mechanisms built in to defeat crawlers and honeypots in order to evade detection.
Just last week we heard German web hosting company Hetzner was hacked and customer data leaked. The report indicates the attackers had used a never-before-seen rootkit with a backdoor component. This shows how quickly the bad guys are climbing up the chain and one can only imagine the potential of hacking at the source.
So what does Linux malware look like? If you ever get your hands on a sample, you can first analyze it statically. In the example below, we will take the now famous Cdorked backdoor, responsible for those stealthy Apache infections. When the file first surfaced it was practically undetected, only slowly getting more generic signatures from major antivirus vendors once it was made public. Truth be told, harvesting such malware is difficult and requires root-level access as well as cooperation from the hosting providers.
The file command in Linux will show you the binary format (ELF) and that it was compiled for a 64-bit processor.
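The ELF identification that file reports can also be verified by hand: on Linux, every ELF binary begins with the four magic bytes 0x7f 'E' 'L' 'F'. The snippet below uses /bin/ls as a stand-in for a suspect binary, which assumes you are on a Linux system.

```shell
# Dump the first four bytes of a binary; for any ELF file these are
# 0x7f (octal 177) followed by the letters E, L, F.
# /bin/ls is a stand-in for the suspect file here.
head -c 4 /bin/ls | od -An -c
```

Seeing the magic bytes confirms you are dealing with a compiled ELF binary rather than a script, which tells you the objdump- and strings-based analysis described next is the right toolset.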
Figure 10: The File Command Reveals Information On The Binary
If you are interested in checking out its sections, another command (objdump -h filename) will list them all.
Figure 11: Looking At The Binary’s Sections
Of all these sections, the following are the most important:
- .text: It contains executable code.
- .rodata: It contains string constants.
- .data: It contains initialized data.
- .bss: It contains uninitialized static variables.
Figure 12: A Binary String Dump
While it is no guarantee, you can sometimes get an idea of what’s in the code. But there is nothing like disassembling the file to really get to its core. This is a more tedious and lengthy process which we will not get into right now. However, know that all of this can be achieved with either native Linux commands or some additional open-source programs.
Parting thoughts
As you can see, web server malware has evolved through the years, just like its traditional client-side counterpart. When major hosting providers who supposedly invest a lot of money and technology get hacked, one has to wonder if we can ever really be protected against cyber-criminals. The truth is that for the most part, if you run your own website and perform (or have a company perform) on-going maintenance and prevention, you are most likely safe. In the security field you can never say never, which is why backups are so important when something does happen.
The majority of websites that do get hacked all show failures in one or several areas such as weak passwords, a poorly configured system never updated since it was implemented and a hosting provider that is cheap in all aspects. To some extent, you could compare your website to your car. If you put the cheapest gas in it, never take it to the mechanic for maintenance and always leave it unlocked, you are likely to run into some issues.
StopBadware: StopBadware.org, with its community forum BadwareBusters.org, is dedicated to fighting web malware and educating people on the issue. It also operates a large URL clearinghouse for infected websites. Malwarebytes is one of StopBadware’s Sponsoring Partners.
Google Webmasters Tools: While a lot of people are angry when Google blacklists their sites, there is usually one or more good reasons why it happened. This is a must-go-to resource if your site is infected and you are on the road to recovery.
Sucuri SiteCheck: This free scanner will inspect your site for malware as well as check if it is already on any blacklist.
RedLeg: There are some good Samaritans out there, people dedicated to helping out others in forums. This is the case with this individual who also provides a ton of useful tips on his site.
Every vulnerability discovered by Chinese researchers has to be immediately reported to the government. No wonder it’s causing a chilling effect among the local hacker community.
A comprehensive research paper by the Atlantic Council deep dives into China’s law requiring researchers to report vulnerabilities to the government right after notifying vendors. Given that government agencies harvest flaws for offensive use, this practice raises concerns about its impact on international cybersecurity research.
According to the paper, Chinese corporate research teams and individual researchers have dominated marquee hacking competitions and corporate bounty programs for at least a decade. In 2018, China banned its researchers from participating in such events abroad. Soon after, the Regulations on the Management of Network Product Security Vulnerabilities (RMSV) followed, requiring Chinese network product providers to “notify the country’s Ministry of Industry and Information Technology (MIIT) about vulnerabilities found in “network products” within a few days of reporting them to the appropriate vendor.”
The Chinese government is serious about RMSV, and a story about a bug in a logging library, Log4j, is a stellar example. Publicly disclosed at the end of 2021, Log4j caused havoc, with researchers calling it a Fukushima moment for cybersecurity.
“In late November 2021, a researcher at Chinese technology giant Alibaba discovered a severe vulnerability in Log4j and disclosed it privately to the Apache Software Foundation (ASF) team maintaining the library. A month later, Alibaba found itself on the receiving end of government sanctions. China’s Ministry of Industry and Information Technology suspended subsidiary Alibaba Cloud from a cyber threat and information-sharing partnership for six months, apparently for failing to report the Log4j vulnerability, also known as Log4Shell, directly and promptly to the MIIT,” the report reads.
While little is known about the precise punishment mechanism, the Atlantic Council said that the titanic cybersecurity entity was punished for “following what were by all accounts best practices, or at least something close to them.”
“The law has the potential to either funnel vulnerability information to the MIIT well ahead of industry-standard timelines or to create “a chilling effect on future coordinated disclosure” in one of the world’s largest information technology (IT) hubs,” the report said.
Researchers hunt for vulnerabilities in products, open-source libraries, and embedded software for various reasons: prestige, profit, ethical principles, and entertainment, among others. The independent researcher community, the Atlantic Council argues, is essential to managing the level of risk posed by software to users.
Data from technology giants (Apple, Microsoft, VMware, RedHat, and F5) discussed in the paper suggests that the RMSV has not yet significantly impacted the supply of vulnerability disclosures. Microsoft is a possible exception: in 2020, vulnerabilities reported by Chinese researchers plummeted from 59 to 11 per month, where they have hovered ever since.
“However, that is not to suggest that the research community in China is immune to its legal context. First, the potential for a delayed effect outside of this study’s timeframe remains, especially when acknowledging the considerable vagueness in CVE reporting and dating practices,” the Atlantic Council said.
In a recent study, business leaders who outsourced their IT were asked to list the benefits of moving to managed IT.
While the days of downloading new antivirus software from floppy disks or CDs are long gone, having up-to-date protection against cyber threats is still a very present reality. Even if the way we protect our computer systems looks different than it did in decades past, the need for protecting valuable data is more important than ever.
Because the antivirus software industry has changed and evolved, there are many lingering misconceptions about what these programs and tools look like in a modern context. Personal users and businesses need to maintain a strong awareness of antivirus protections in order to avoid damaging (and often expensive) data loss.
In this post, we’ll clear up some of the confusion around modern antivirus software. We’ll also explore the disadvantages of free tools and software and discuss how managed IT services can help secure your data.
Do You Still Need Antivirus Software?
Even in today’s technology climate, antivirus software isn’t completely obsolete. It does, however, look different based on the type of device you’re using and how updated your device’s operating system is within a current framework.
For example, iOS users do not require outside or additional antivirus software, because Apple’s operating systems (and successive software updates) include built-in antivirus protections. Microsoft Windows users, on the other hand, are still subject to many cyberattacks and virus threats.
In the end, the decision is up to each individual user or business. Antivirus software can provide an extra layer of protection by quarantining and removing viruses, ransomware, malware, and other threats. Although the installation process might look different than it used to, the importance of having basic protection remains the same.
Disadvantages of Free Antivirus Programs
Modern antivirus programs are available in several formats. While some of these are internal (as in the case with iOS), others are available for purchase or free download.
Although “free” can be a good selling point, there are a few red flags to watch for when it comes to relying only on the protection of free antivirus software or applications.
Potential for slower system response
Free antivirus programs are typically built more broadly in order to serve a more general audience. This means that they are not crafted with your particular machine, network, or system in mind. As a result, free antivirus tools can be cumbersome and slow.
Over time, free programs might reduce the speed of your systems and have a negative impact on your team’s ability to work efficiently.
Need to update and maintain frequently
Many free tools also require regular updates. Even though these updates often mean better virus protection, it’s also quite time-consuming and tedious to constantly wonder whether you need the next software patch. A constant need to improve also casts doubt on the program’s security abilities in the first place.
Security loopholes and vulnerabilities
When you opt for a free tool or platform, it’s difficult to have complete assurance. There may be multiple security loopholes and critical vulnerabilities that hackers have already exposed. When you rely on a free antivirus program, you may be subjecting your entire framework to these unforeseen consequences.
Poor coding and framework
Free programs usually don’t have the development teams behind them that fully scalable, paid programs do. A lack of resources can mean that free programs are poorly coded and not designed properly. In addition to a buggy user experience, poor coding can make it more challenging to actually protect your systems from attack and exposure to viruses.
Free Antivirus vs. Paid Antivirus
One of the most significant differences between free and paid software is that free versions are usually reactive—meaning that they respond to security threats after those issues have taken place. Paid options are usually more robust, which means that they are built to be proactive and stop attacks before they occur.
Because paid antivirus tools are more advanced, they can also respond uniquely to attacks, viruses, and other damaging threats in order to re-route or reduce potential damage.
How MicroTech IT Can Help
MicroTech IT of Boise offers comprehensive cybersecurity services to protect your business-critical systems. Our packages include security offerings that help you “lock down” your virtual doors and protect valuable data. We accomplish this through sophisticated data security and protection practices so that you can reclaim peace of mind and get back to what matters most.
It’s never a good idea to leave your security and virus protections to chance. As threats evolve, you need comprehensive tools to ensure that you never lose critical data or the ability to conduct business as normal.
Reach out to MicroTech today to learn more about our expert cybersecurity, backup, and virus protection services. We’ll guide you on a path to better data protection—guaranteed.
|
<urn:uuid:7bbe33f3-8050-43b9-aab7-6363f74e6e4d>
|
CC-MAIN-2022-40
|
https://www.microtechboise.com/blog/why-free-antivirus-software-can-do-more-harm-than-good
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00455.warc.gz
|
en
| 0.937893 | 1,027 | 2.578125 | 3 |
Heather Oliver is a Technical Writer for Constellix and DNS Made Easy, subsidiaries of Tiggee LLC. She’s fascinated by technology and loves adding a little spark to complex topics. Want to connect? Find her on LinkedIn.
There are many cyber threats that target online businesses and individuals, such as identity theft, phishing, ransomware, and spoofing, to name a few. But one of the greatest threats to organizations is DDoS attacks. These types of attacks are skyrocketing—up 278% in the first quarter of 2020 compared to previous years and up 31% more in just the first quarter of 2021. While directed at companies, the damage is felt by everyone who uses the sites affected. DDoS mitigation and prevention are crucial in today’s digital climate.
DDoS stands for distributed denial-of-service. This type of cyber attack generates huge spikes in web traffic using a botnet, which is designed to overwhelm a server or network. DDoS attacks are popular because they can quickly and effectively leave websites and systems without redundancy measures in place completely inoperable.
Tip: Want to learn more about this type of attack? See our “What is a DDoS attack” resource.
Considering the ever-growing reliance the world has on the internet for work, school, and play, including DDoS protection in your DNS strategy is a must. A single minute of downtime can cost businesses as much as $5,600.
Factor in the average length of a DDoS attack, which is up to four hours, and costs could be as high as $1.3 million—and that figure doesn’t even include the cost of staff hours and loss of employee productivity due to sites and services being inaccessible.
To make matters worse, it’s predicted that DDoS attacks will start lasting longer, as long as 10 days. Such an attack, if not mitigated quickly, can easily cripple an organization and damage its reputation permanently.
Did you know?: The cost of downtime can be much higher for some corporations. For example, in the last quarter of 2020, Apple and Amazon reported record-breaking revenues that averaged $950,000 per minute. Just an hour of downtime for a company generating this amount of income would cost more than $57,000,000.
Identifying normal traffic versus malicious activity during a DDoS attack is sometimes difficult. However, with the right DNS monitoring tools, you can spot anomalies or unusual traffic behavior and take steps to protect your domain accordingly. Redundancy, with DNS services such as Failover and Secondary DNS, will also ensure your site remains live during an attack.
Failover is a type of DNS load balancing that acts as a safety net for your domain. This service allows you to configure multiple IP addresses or hosts for a domain and is based on the health of your servers.
The way Failover works varies from one provider to the next, but is a simple and cost-effective solution for keeping domains up and running. At Constellix, health checks are performed through our Sonar Monitoring Suite, which detects anomalies and recognizes issues with your servers. And unlike many of our competitors, we also verify the status of your backup servers before routing your traffic to another IP as an extra precaution.
Failover offers excellent protection from poor-performing servers or outages. The downside is that if your DNS or CDN provider experiences an outage, your domain will still have downtime regardless of how many backup servers you have in your failover configuration.
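The failover idea above can be illustrated with a tiny client-side sketch: probe a health endpoint on the primary and fall back to a secondary when it does not answer. This is only an illustration of the concept, not how Constellix implements it; the addresses are placeholders from the TEST-NET ranges, the /health path is an assumption, and the script assumes curl is available.

```shell
# Failover sketch: probe the primary's health endpoint; route to the
# backup when the check fails. Addresses and path are placeholders.
PRIMARY=203.0.113.10
BACKUP=203.0.113.20
if curl -fsS --max-time 2 "http://$PRIMARY/health" >/dev/null 2>&1; then
  echo "route traffic to $PRIMARY"
else
  echo "route traffic to $BACKUP"
fi
```

A real DNS failover service runs checks like this continuously from many vantage points and, as noted above, verifies the backup's health before cutting traffic over to it.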
As with Failover, Secondary DNS is an additional safety measure for your domain. But Secondary DNS is more than just a “backup.” With this configuration, you’ll have two authoritative nameservers for your domain. This option will ensure that your domain remains online even if your primary provider has an outage. Even with a 10+-year company history of zero downtime, we still recommend having two DNS providers.
DNS Monitoring Tools should also be an integral part of your DDoS prevention and mitigation strategy. With solutions like Constellix’s Real-Time Traffic Anomaly Detection (the only one of its kind in the industry), you can see anomalies and unusual traffic patterns as they happen. This allows you to make proactive decisions and prevent DDoS attacks from rendering your site inoperable, as there is typically a noticeable difference in traffic prior to a full shutdown.
When choosing a DNS provider or if you’re attempting to strengthen your current DDoS mitigation strategy, be sure to ask about any DNS analytics and reporting features. Such tools are invaluable in preventing DDoS attacks before massive damage can occur as well as pinpointing misconfiguration errors.
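At its simplest, traffic-anomaly detection means comparing the newest measurement against recent history. The toy sketch below flags a requests-per-minute sample that far exceeds the mean of the earlier ones; the numbers and the 3x threshold are made up, and real products such as Constellix's anomaly detection use far more sophisticated models.

```shell
# Toy anomaly check: flag the newest sample if it exceeds three times
# the mean of the earlier samples. The numbers are made up.
printf '%s\n' 120 130 115 125 900 > rpm.txt
awk 'NR>1 {sum+=prev; n++} {prev=$1}
     END {mean=sum/n; if (prev > 3*mean) printf "spike: %d (mean %.1f)\n", prev, mean}' rpm.txt
```

Even this crude rule catches the kind of sudden traffic surge that typically precedes a full DDoS-driven outage, which is why early anomaly alerts buy defenders time to react.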
One of the most cost-effective and efficient ways of preventing DDoS attacks is by having redundancy at every point of failure. And fortunately, this can be done on the DNS level when you implement the right services into your strategy. Failover, Secondary DNS, and DNS monitoring tools can all help mitigate and prevent an attack—but utilizing all three of these methods together is the ultimate solution.
Machine learning and related types of Artificial Intelligence(AI) are used to work to help solve various problems but it still seems surprising that it could be used to help stop the alarming decline in bee populations across the globe.
The Varroa mite
Wikipedia describes the Varroa mite as follows: “The Varroa mite can only reproduce in a honey bee colony. It attaches to the body of the bee and weakens the bee by sucking fat bodies. In this process, RNA viruses such as the deformed wing virus (DWV) spread to bees. A significant mite infestation will lead to the death of a honey bee colony, usually in the late autumn through early spring. The Varroa mite is the parasite with the most pronounced economic impact on the beekeeping industry. Varroa is considered to be one of multiple stress factors contributing to the higher levels of bee losses around the world.”
A recent article claims that the mite rarely kills a bee outright but weakens it by sucking its blood, leaving it susceptible to disease and causing young to be born weak and deformed. In time this can lead to colony collapse. One problem is that you may not even see the mites, as they are only a millimeter or so across, so an infestation may go undiscovered for some time. As shown in the appended video, beekeepers put a flat surface beneath the hive and pull it out to inspect it for the tiny bodies of the mites. It is painstaking and time-consuming work.
How machine learning can help
Machine learning models are good at sorting through data that is “noisy,” such as the flat surface with the Varroa mites on it but covered in all sorts of other debris. The machine can be taught to identify the shape of the mites and count them.
Apizoom
Students at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have created an image recognition tool named ApiZoom. Trained on images of mites, it can recognize any visible mite bodies in a photo within seconds. All a beekeeper has to do is take a photo with a smartphone and upload it to the EPFL system. The project was begun back in 2017, and the model has been trained with tens of thousands of images that have made it progressively better at its job. The success rate of detection is now about 90 percent, roughly the same as humans achieve. The project now intends to distribute the app as widely as it can. Alain Bugnon of the EPFL project said: "We envisage two phases: a web solution, then a smartphone solution. These two solutions make it possible to estimate the rate of infestation of a hive or, if the application is used on a large scale, of a region. By collecting automatic and comprehensive data, it is not impossible to make new findings about a region or atypical practices of a beekeeper, and also possible mutations of the Varroa mites."
This kind of systematic data collection would be a major help for coordinating the response to infestations at a national level, and ApiZoom could no doubt be used globally, not just in Switzerland. ApiZoom is being spun off as a separate company by Bugnon. There are many ways of dealing with an infestation, as described in the Wikipedia article on the mite. Some bee types are resistant to the mite, and perhaps more of those bees will be used to produce honey. In any case, the ApiZoom app will no doubt help to control the mite. […]
No matter the size of your business or customer base, if you sell or store sensitive information online, you need to assess your company's cybersecurity risks. It's been shown that the average cost of recovering from a serious cyberattack exceeds $1 million.
Fortunately, more and more business owners are becoming aware of cyber threats and have various strategies in place to keep attackers at bay. One report found that IT spending reached almost $4 billion in 2019, a 3.4% increase over the previous year.
But, with new threats emerging almost every fortnight, what do you focus on? In other words, how do you assess and manage your company’s vulnerabilities?
What Is Vulnerability Management?
Imagine you own a house. You take all the necessary steps to keep it safe from thieves and the elements. You lock the doors and windows when you leave. And, you patch the roof when you find a hole in the shingles.
The same goes for your company: you stay on top of cyber threats by constantly identifying, categorizing, and resolving vulnerabilities, taking a proactive approach.
Vulnerability management is an ongoing process that allows you to identify and address any vulnerabilities that could increase your risks of a cyber attack. These vulnerabilities can appear virtually anywhere, from your operating system (OS) to end-user applications or enterprise applications, so you need to be thorough about how, where, and what you are looking for.
What Are the Steps for Vulnerability Management?
To make sure that no potential threat goes unnoticed, an effective vulnerability management program must follow these steps:
Discover: Always have an accurate overview of the assets that need to be protected. Review and update your inventory after important transactions, such as a merger.
Information Management: Have a team whose job is to translate IT jargon for your employees, stakeholders, and so on. They should inform every actor in the organization about threats, what to do in case of a malicious attack, and so on.
Risk Assessment: This is vital to ensure that you identify any possible threat at any level in your organization. Keep in mind that vulnerabilities can come not only from within your organization but from outside too, such as from other business partners.
Vulnerability Assessment: With vulnerability assessment, you will have clear recommendations and steps that you need to take to avoid threats and strengthen your security.
What Is Vulnerability Assessment?
While vulnerability management is an ongoing process, vulnerability assessment is a one-time process usually carried out by a team of security experts. Their goal is to identify any vulnerabilities that cybercriminals could use to attack your organization and offer recommendations on how to address and fix those weak points.
After the team has identified and remedied the vulnerabilities, they will also run a penetration test. Its purpose is to see if there are any weak points that the team might have missed and that could compromise your organization.
What Are the Different Types of Vulnerability Assessment?
A vulnerability assessment project includes a variety of tools, scans, and processes to ensure no stone is left unturned. Some of them include:
Network-based Scans: These scans identify possible vulnerabilities in your network, both wired and wireless.
Wireless Network Scans: These scans will focus on identifying possible points of attack in your wireless network.
Host-based Scans: These scans are used to identify possible vulnerabilities in servers, hardware, and so on.
Data-based Scans: These scans will look for points of attack in your database to prevent malicious attacks.
Application-based Scans: These scans will look at your websites and apps to detect any software vulnerabilities.
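As an illustration of what the network-based scans above do at their core, here is a minimal TCP connect scan; real scanners layer service fingerprinting and a vulnerability database on top of this, and you should only run it against hosts you are authorized to scan. The example host address is a hypothetical documentation address, not a real target:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the crude core of a network-based vulnerability scan: an open
    port reveals a listening service that may need patching or hardening.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an error.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical usage: scan_ports("192.0.2.10", [22, 80, 443, 3306])
```

A host-based or application-based scan works on the same discover-then-assess principle, just with a different probe (installed packages, HTTP requests, and so on).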
Reach Out to Cyber Sainik for Help
Unless you run a lemonade stand and only take cash, you will need some form of protection against cybercriminals. And it doesn't matter if you are just a small business thinking that a good antivirus will protect you; it won't, and it's not enough.
If you want to protect your data and want your customers to rest assured that they are safe doing business with you, then you need to be one step ahead of cybercriminals. That means always being aware of your vulnerabilities and being ready to remediate them and close any weak points.
A transformation is a network of logical tasks called steps. Transformations are essentially data flows. In the example below, the database developer has created a transformation that reads a flat file, filters it, sorts it, and loads it to a relational database table. Suppose the database developer detects an error condition: instead of sending the data to a Dummy step (which does nothing), the data is logged back to a table. The transformation is, in essence, a directed graph of a logical set of data transformation configurations. Transformation file names have a .ktr extension.
The two main components associated with transformations are steps and hops:
Steps are the building blocks of a transformation, for example a text file input or a table output. There are over 140 steps available in Pentaho Data Integration and they are grouped according to function; for example, input, output, scripting, and so on. Each step in a transformation is designed to perform a specific task, such as reading data from a flat file, filtering rows, and logging to a database as shown in the example above. Steps can be configured to perform the tasks you require.
Hops are data pathways that connect steps together and allow schema metadata to pass from one step to another. In the image above, it seems like there is a sequential execution occurring; however, that is not true. Hops determine the flow of data through the steps not necessarily the sequence in which they run. When you run a transformation, each step starts up in its own thread and pushes and passes data.
You can connect steps together, edit steps, and open the step contextual menu by clicking the step; click the down arrow to open the contextual menu. For information about connecting steps with hops, see More About Hops.
A step can have many connections — some join two steps together, some serve only as an input or output for a step. The data stream flows from step to step through the transformation. Hops are represented in Spoon as arrows. Hops allow data to be passed from step to step, and also determine the direction and flow of data through the steps. If a step sends outputs to more than one step, the data can either be copied to each step or distributed among them.
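The step-and-hop model can be sketched in a few lines: each step transforms a stream of rows, and each hop is simply the handoff of that stream to the next step. The step and field names below are illustrative, not Pentaho's actual step classes:

```python
def text_file_input(lines):
    """Parse flat-file lines like "alice,42" into row dictionaries."""
    for line in lines:
        name, value = line.split(",")
        yield {"name": name, "value": int(value)}

def filter_rows(rows, predicate):
    """Pass through only the rows matching the predicate."""
    return (row for row in rows if predicate(row))

def sort_rows(rows, key):
    """Sorting must buffer the whole stream before emitting rows."""
    return iter(sorted(rows, key=key))

def table_output(rows):
    """Stand-in for a relational table insert: collect the rows."""
    return list(rows)

raw = ["carol,7", "alice,42", "bob,19"]
result = table_output(
    sort_rows(
        filter_rows(text_file_input(raw), lambda r: r["value"] > 10),
        key=lambda r: r["name"],
    )
)
print(result)  # [{'name': 'alice', 'value': 42}, {'name': 'bob', 'value': 19}]
```

Note how the generators mirror the doc's point about execution: rows are pushed through the chain as they become available rather than each step running to completion in sequence (except for the sort, which must buffer).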
EJBCA Enterprise 7.9.0, now generally available, introduces vehicle-to-everything (V2X) PKI capabilities to provide trustworthy and reliable communications between connected vehicles and road infrastructure.
By 2025, it is projected that there will be over 400 million connected cars in operation, up from some 237 million in 2021. Many vehicles already feature new capabilities that allow us to get real-time updates, drive more safely, and even connect with the world around us — whether we’re on our daily commute or a cross-country road trip.
The innovation is endless and it’s all driven by one thing — connectivity. From components within the car to infrastructure on the road, the number of connections in the transportation ecosystem is growing rapidly, and it’s just the beginning.
What is V2X?
Vehicle-to-everything (V2X) is an all-encompassing term for communications between vehicles (vehicle-to-vehicle) and the infrastructure that surrounds them (vehicle-to-infrastructure). Several applications of V2X communication have already emerged, many more are on the horizon, and the benefits are clear.
Communication between vehicles and road infrastructure, such as traffic lights, parking spaces, roadside units, and other systems can help to improve efficiency and safety by allowing vehicles to make real-time decisions based on changing conditions.
Just imagine: You take the train from your home to the city, then walk straight into a ride-share vehicle that is waiting at the station to drive you to your office. Or, imagine there’s been a minor car accident in the intersection, and the traffic lights automatically adjust to prevent further damage, while emergency vehicles are notified immediately.
The possibilities are endless, but the coordination required to make V2X a practical reality is still a work in progress. The future of V2X is dependent on collaboration between automakers, infrastructure developers, and government agencies to establish and comply with industry-wide standards.
Enter C-ITS and PKI
Cooperative Intelligent Transport Systems (C-ITS) is an ecosystem that facilitates communication between vehicles and infrastructure.
Since these communications involve exchanging sensitive data between vehicles and other entities, such as road infrastructure and road safety applications, it is critical to protect and secure them. To enable secure V2X communications for C-ITS, public key infrastructure (PKI) is used to issue and manage trusted security credentials for vehicles and infrastructure components, commonly known as ITS Stations (ITS-S).
Technical security standards related to V2X communications are available and under development. The US standard IEEE 1609.2 covers security aspects, including secure message formats and security management. In Europe, ETSI publishes EU standards for C-ITS that extend the IEEE standard.
V2X PKI with EJBCA Enterprise
EJBCA is a highly scalable and flexible PKI platform, designed for modern connected ecosystems like V2X. Built on open standards and a Common Criteria-certified open-source platform, the certificate authority (CA) is transparent and reliable, with already widespread use across the automotive and transportation sectors today.
With EJBCA 7.9.0, we’ve furthered our commitment to securing the connected vehicle by introducing functionality that allows EJBCA to act as an Enrollment Authority (EA) in a C-ITS PKI, registering ITS entities and issuing enrollment credentials.
While there are several components that make up the C-ITS Security Management System, including the Root Certificate Authority (RCA) and Authorization Authority (AA), the development of EA capabilities marks our first effort toward supporting the C-ITS PKI lifecycle with EJBCA.
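The resulting trust hierarchy (the Root CA certifies the Enrollment Authority; the EA then issues enrollment credentials to ITS stations) can be sketched with a toy, keyed-hash stand-in for the ECC signatures that IEEE 1609.2 actually mandates. All names, roles, and secrets below are invented for illustration and bear no relation to EJBCA's real certificate formats:

```python
import hashlib
import json

def toy_sign(secret, payload):
    """Stand-in for a real digital signature: a keyed hash over the
    certificate body. Illustrative only -- not cryptographically sound
    as a public-key signature scheme."""
    blob = secret + json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def issue(issuer_name, issuer_secret, subject, role):
    """Issue a toy certificate binding a subject to a role."""
    body = {"subject": subject, "role": role, "issuer": issuer_name}
    return {"body": body, "sig": toy_sign(issuer_secret, body)}

def verify(cert, issuer_secret):
    """Check that the certificate body matches its signature."""
    return cert["sig"] == toy_sign(issuer_secret, cert["body"])

# Hypothetical hierarchy: Root CA -> Enrollment Authority -> ITS station.
root_secret, ea_secret = b"root-key", b"ea-key"
ea_cert = issue("RootCA", root_secret, "EnrollmentAuthority-1", "EA")
enrollment_credential = issue("EnrollmentAuthority-1", ea_secret,
                              "vehicle-obu-4711", "ITS-S")

# A relying party validates each link of the chain back to the trusted root.
chain_ok = (verify(enrollment_credential, ea_secret)
            and verify(ea_cert, root_secret))
print(chain_ok)  # True
```

The point of the sketch is the chain-of-trust structure: a vehicle's credential is only as trustworthy as every issuer above it, which is why the Root CA and Authorization Authority components matter alongside the EA.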
To learn more about V2X PKI capabilities with EJBCA, click here.
Those of you familiar with VSAN will be aware that, when it comes to virtual machine deployments, historically objects on the VSAN datastore were deployed either as a RAID-0 (stripe), a RAID-1 (mirror), or a combination of both. From a capacity perspective, this was quite an overhead. For instance, if I wanted my VM to tolerate 1 failure, I needed two copies of the data. If I wanted my VM to tolerate 2 failures, I needed three copies of the data, and if I wanted my VM to tolerate the maximum number of failures, which is 3, then I had to have 4 copies of the data stored on the VSAN datastore. In VSAN 6.2, some new configurations, namely RAID-5 and RAID-6, are introduced to help reduce this overhead when configuring virtual machines to tolerate failures on VSAN. This feature is also termed "erasure coding". However, the use of the term "erasure coding" and its relationship with RAID-5/6 has caused confusion in some quarters. If you want a primer on erasure coding, and how it ties into how RAID-5/6 configurations are implemented on VSAN, have a read of this excellent article by our BU CTO, Christos Karamanolis.
Introduction to RAID-5/RAID-6 on VSAN
Note that there is a requirement on the number of hosts needed to implement RAID-5 or RAID-6 configurations on VSAN. For RAID-5, a minimum of 4 hosts are required; for RAID-6, a minimum of 6 hosts are required. The objects are then deployed across the storage on each of the hosts, along with a parity calculation. The configuration uses distributed parity, so there is no dedicated parity disk. When a failure occurs in the cluster, and it impacts the objects that were deployed using RAID-5 or RAID-6, the data is still available and can be calculated using the remaining data and parity if necessary.
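For the single-parity (RAID-5) case, parity is a plain XOR across the data components, which makes the reconstruction described above easy to demonstrate. This sketch ignores striping, on-disk distribution, and RAID-6's double parity, and shrinks the components to a few bytes for readability:

```python
def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks -- the parity used by RAID-5."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A RAID-5 stripe with three data components and one parity component,
# each of which would live on a different host in a 4-host cluster.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# The host holding d1 fails: its data is recomputed from the survivors.
recovered = xor_blocks(d0, d2, parity)
print(recovered == d1)  # True
```

RAID-6 tolerates a second failure by adding an independent second parity (computed differently, not a second XOR), which is why it needs two extra components rather than one.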
RAID-5 and RAID-6 are fully supported with the new deduplication and compression mechanisms which were also introduced with VSAN 6.2.
Also note that if you include Number of disk objects to stripe as a policy setting for the RAID-5/6 objects, each of the individual components that make up the RAID-5 or RAID-6 objects may also be striped across multiple disks.
As mentioned, these new configurations are only available with VSAN 6.2. They are also only available for all-flash VSAN. You cannot use RAID-5 and RAID-6 configurations on hybrid VSAN.
VM Storage Policies for RAID-5 and RAID-6
A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations. This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.
The RAID-5 or RAID-6 configuration is determined by the number of failures to tolerate setting. If this is set to 1, the configuration is RAID-5. If this is set to 2, then the configuration is a RAID-6. Of course, you will need to have the correct number of hosts in the cluster too. Note that if you want to tolerate 3 failures, you will need to continue using RAID-1.
Overview of RAID-5
- Number of Failures to Tolerate = 1
- Failure Tolerance Method = Capacity
- Uses x1.33 rather than x2 capacity when compared to RAID-1
- Requires a minimum of 4 hosts in the VSAN cluster
Overview of RAID-6
- Number of Failures to Tolerate = 2
- Failure Tolerance Method = Capacity
- Uses x1.5 rather than x3 capacity when compared to RAID-1
- Requires a minimum of 6 hosts in the VSAN cluster
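The capacity arithmetic in the bullets above can be captured in a small helper. This is an illustration of the policy rules discussed here, not a VMware API; the factor is raw capacity consumed per unit of usable data:

```python
def vsan_capacity_factor(ftt, method):
    """Raw capacity consumed per unit of usable data for a vSAN object.

    RAID-1 mirroring needs ftt+1 full copies; RAID-5 (ftt=1) stores
    3 data + 1 parity components; RAID-6 (ftt=2) stores 4 data + 2 parity.
    """
    if method == "performance":            # RAID-1 mirroring
        if ftt not in (1, 2, 3):
            raise ValueError("RAID-1 supports FTT 1-3")
        return ftt + 1
    if method == "capacity":               # RAID-5/6 erasure coding
        if ftt == 1:
            return 4 / 3                   # RAID-5: x1.33
        if ftt == 2:
            return 6 / 4                   # RAID-6: x1.5
        raise ValueError("erasure coding supports only FTT 1 or 2")
    raise ValueError("unknown failure tolerance method")

for ftt in (1, 2):
    saved = 1 - (vsan_capacity_factor(ftt, "capacity")
                 / vsan_capacity_factor(ftt, "performance"))
    print(f"FTT={ftt}: erasure coding saves {saved:.0%} raw capacity")
# FTT=1: erasure coding saves 33% raw capacity
# FTT=2: erasure coding saves 50% raw capacity
```

The 33% and 50% savings are exactly the trade-off weighed against the write amplification discussed below.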
As highlighted in Christos' excellent erasure coding article above, RAID-5 and RAID-6 configurations will not perform as well as RAID-1 configurations. This is due to I/O amplification. During normal operations there is no amplification of reads, but there is I/O amplification for writes (especially partial writes), since both the current data and parity have to be read, current and new data must be merged, new parity must be calculated, and then the new data and new parity must be written back. That results in 2 reads and 2 writes for a single write operation. For RAID-6, the write amplification is 3 reads and 3 writes due to the double parity.
So while there are significant space savings to be realized with this new technique, customers need to ask themselves whether maximum performance is paramount. If their workloads do not require maximum performance, significant space savings (and thus $$$) can be realized.
Design Decisions – Data Locality Revisited
This is something I mentioned in the overview of VSAN 6.2 features when I discussed R5/R6 as implemented on VSAN. The VSAN team made a design choice whereby core vSphere features such as DRS/vMotion and HA do not impact the performance of a virtual machine running on VSAN. In other words, a conscious decision was made not to do "data locality" in VSAN (apart from stretched clusters, where it makes perfect sense). A VM can reside on any host and any storage in the cluster and continue to perform optimally. This non-reliance on data locality lends itself to R5/R6, where the components of the VMDK are spread across multiple disks and hosts. Simply put, with R5/R6, the compute does not reside on the same node as the storage.
With this design we can continue to run core vSphere features such as DRS/vMotion and HA without impacting the performance of a VM that uses R5/R6 for its objects, no matter which host it runs on in the cluster.
Tolerating 1 or 2 failures, not 0 or 3
RAID-5/6 configurations can only be used when the number of failures to tolerate is set to 1 or 2 in the policy. If you attempt to tolerate 0 or 3 failures and try to deploy a VM with this policy, you will be notified that it is unsupported. A sample warning is shown below:
Note that RAID-5/RAID-6 do not need witness components. With RAID-5, there will be 3 data components and a parity component; with RAID-6, there will be 4 data components and 2 parity components.
This is a nice new feature for customers who may not need to achieve the maximum possible performance from VSAN and are more concerned with capacity costs, especially in all-flash VSAN. This feature, coupled with dedupe and compression should realize significant cost savings for all-flash VSAN customers.
One final note: RAID-5/RAID-6 is not supported in VSAN stretched clusters. This is because stretched clusters support only 3 fault domains (site 1, site 2, and witness), while RAID-5 objects require 4 fault domains (and RAID-6 requires 6). Objects must still be deployed with a RAID-1 configuration in VSAN 6.2 stretched clusters. This new space-saving feature is only supported in standard, all-flash VSAN deployments.
Account Takeover (ATO)
A form of identity theft in which the criminal obtains access to a victim's bank, credit card accounts or business systems — through a data breach, malware or phishing — and uses them to make unauthorized transactions.
What is Account Takeover?
Account Takeover (ATO) occurs when a cybercriminal hijacks your account or steals your information using your username and password. Account takeovers are perilous threats, as they cause financial institutions to lose revenue. With ATO, fraudsters can take over existing accounts such as bank cards, credit cards, social media, and even eCommerce websites. Successful account takeovers begin with attackers gathering data from data breaches or obtaining it on the cybercrime underground. Thieves can then use the personal information they obtain to make fraudulent purchases. An account takeover can result in fraudulent transactions on consumers' accounts.
As more businesses migrate their infrastructure to cloud computing, the takeover of employee accounts will become more of a threat. SaaS applications like Microsoft Office 365, Zoom, and Salesforce are accessible from the internet, so security personnel must look at where and how users are authenticated. Relying on identity management alone leaves a grave opening for online scammers.
How fraud happens as a result of account takeovers
A common way attackers gain access to corporate networks is through spear-phishing emails. Spear-phishing involves sending targeted emails from someone who appears to be legitimate but actually has malicious intent. These emails often contain links or attachments that lead to websites where users are tricked into giving away personal information. Once this information is obtained, it can then be used by the attacker to gain further access to the victim's systems.
Cybercriminals also use social engineering techniques to trick people into handing over their login credentials. Social engineering refers to the practice of manipulating individuals into doing something they would not normally do. For instance, if you receive a phone call from someone claiming to be from your bank asking about a suspicious transaction on your account, you may hand over your login details without thinking twice.
Another method of gaining access to a network is through the use of brute force attacks. A brute force attack uses automated software to try thousands of different passwords until one works. This type of attack is very effective against weak passwords which have been reused too many times.
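A few lines of code show why brute force works against weak passwords: the search space for short, simple passwords is tiny. This toy cracker recovers a short lowercase password from its hash almost instantly (real attacks use GPU-accelerated tools and leaked password dictionaries, and real systems slow them down with salted, deliberately slow hashes and lockout policies):

```python
import hashlib
import itertools
import string

def crack(target_hash, alphabet=string.ascii_lowercase, max_len=4):
    """Exhaustively guess short passwords -- the essence of brute force.

    The search space grows as alphabet_size ** length, which is exactly
    why short, simple, reused passwords fall quickly.
    """
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

stolen = hashlib.sha256(b"abc").hexdigest()   # a pitifully weak password
print(crack(stolen))  # abc -- found in at most 26 + 26**2 + 26**3 tries
```

Lengthening the password or widening the alphabet makes the same search astronomically more expensive, which is the whole argument for strong, unique passwords.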
Account Takeover targets
Account takeover is prevalent across all industries, however, some industries like retail, financial services, video streaming, social media, and entertainment, higher education, and healthcare are often identified as the top money targets. While not a top target in terms of cash jackpots, small businesses often face risks for ATO that are out of proportion with their size. Small businesses often have smaller IT budgets and staff, so they are often less focused on security. Entrepreneurs may take solace in the belief that they’re too tiny to target. That is a false narrative.
Why is Account Takeover hard to protect against?
There are many reasons why account takeover is challenging to prevent. One reason is that there are no clear guidelines as to who is allowed to request changes to someone else's online banking credentials. Another issue is that some banks do not require multi-factor authentication for all logins. It is recommended that users change their usernames and passwords regularly. Also, keep track of any suspicious activity on your accounts. You can check your transaction history and alerts via your online banking portal. Finally, use strong passwords and don't share sensitive information.
Does my bank protect me from ATOs?
Banks offer several layers of protection to safeguard customers' identities:
They employ sophisticated technology to detect malicious activities such as login attempts and failed transactions.
They implement two-factor authentication with either text messaging or app-based services like Google Authenticator.
Banks monitor suspicious activity over time to spot patterns indicative of potential attacks.
They provide 24×7 monitoring and response teams trained to handle any situation involving stolen credentials.
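The app-based second factor mentioned above is typically a time-based one-time password (TOTP, RFC 6238), which fits in a few lines of standard-library Python. This is a minimal sketch of the algorithm, not any bank's production implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password -- the scheme behind
    authenticator apps such as Google Authenticator.

    `key` is the raw shared secret; provisioning QR codes usually
    carry it base32-encoded.
    """
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's SHA-1 test secret; at t=59s the reference 8-digit code is
# 94287082, whose last six digits are the 6-digit code printed here.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code depends on a secret the attacker does not have and changes every 30 seconds, a stolen password alone is no longer enough to take over the account.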
Account takeovers are prevalent across all industries and are carried out by hackers who steal the credentials of legitimate users and then use those stolen accounts for financial gain or malicious purposes. End users can protect themselves through common-sense practices that limit the risk of becoming an ATO victim, including using strong passwords, remaining vigilant about their account activity for any suspicious or unusual behavior, and taking advantage of multi-factor authentication whenever it is available. However, larger organizations typically require layers of protection against ATOs, including sophisticated technology like Intel 471's Credential Intelligence that delivers coverage across the entirety of the underground marketplace. This technology empowers organizations to proactively monitor and mitigate the risk associated with compromised credentials as they hit the marketplace over time, making it easier for dedicated cybersecurity teams to mitigate risk and handle situations involving stolen credentials.
Real-time social media like Twitter could be used to track HIV incidence and drug-related behaviors with the aim of detecting and potentially preventing outbreaks, a new UCLA-led study shows.
The study, published in the peer-reviewed journal Preventive Medicine, suggests it may be possible to predict sexual risk and drug use behaviors by monitoring tweets, mapping where those messages come from and linking them with data on the geographical distribution of HIV cases. The use of various drugs had been associated in previous studies with HIV sexual risk behaviors and transmission of infectious disease.
“Ultimately, these methods suggest that we can use ‘big data’ from social media for remote monitoring and surveillance of HIV risk behaviors and potential outbreaks,” said Sean Young, assistant professor of family medicine at the David Geffen School of Medicine at UCLA and co-director of the Center for Digital Behavior at UCLA.
Founded by Young, the new interdisciplinary center brings together academic researchers and private sector companies to study how social media and mobile technologies can be used to predict and change behavior. (See the center’s Twitter account.)
Other studies have examined how Twitter can be used to predict outbreaks of infections like influenza, said Young, who is also a member of the UCLA Center for Behavioral and Addiction Medicine; UCLA’s Center for HIV Identification, Prevention and Treatment Services; and the UCLA AIDS Institute. “But this is the first to suggest that Twitter can be used to predict people’s health-related behaviors and as a method for monitoring HIV risk behaviors and drug use,” he said.
Enterprises are using multiple applications powered by the convergence of blockchain and artificial intelligence, to increase efficiency and effectiveness of RPA
It is common knowledge that robotics is powered by artificial intelligence, delivering excellence and efficiency in well-known areas: cryptocurrencies, chatbots, and voice-assisted technologies.
The field of robotics is immensely challenging, and to grow in this segment, companies need to offer reliable and affordable solutions to their clients and customers.
The exciting news is that RPA is also one of the most promising areas for the convergence of blockchain and AI, a combination that is now showing never-before-seen efficiencies in the field of robotics.
Robotics has gained massive popularity across industries over the years using artificial intelligence, making all processes more effective and error-free. Now, blockchain will keep the data decentralized and free from any central or concentrated control. By combining the decentralized power of blockchain with the agility of artificial intelligence, the field of robotics can be elevated and advanced in several ways.
The features offered by artificial intelligence will multiply the efficiency of robots through automation, while the data immutability offered by blockchain will make processes tamper-proof. By applying these technologies to robotics together, the operating mechanism can be pre-set to achieve the desired objectives and business goals.
Swarm Robotics: The one to benefit the most?
The significance of artificial intelligence and blockchain is the most prominent in the case of Swarm Robotics. This is mainly because both these innovations can be applied collectively to control a group of robots. AI controls every Swarm Robot as it operates according to the pre-set principles and requirements. The collective response and behavior of the Robots can be significantly enhanced with the application of artificial intelligence and blockchain.
This convergence has enormous benefits for scalability, with an enhanced scope of operations. Global enterprises have already started to see applications of blockchain and artificial intelligence as swarm robotics gains popularity, specifically in areas related to entertainment, healthcare, and farming. Although several stakeholders have explicitly expressed concerns about security and safety, there is hardly any negative view about the potential of these applications to benefit the industry. Blockchain is a credible technology for alleviating stakeholders' concerns about the privacy and secrecy of data: using secure cryptographic signatures and other advanced technologies available in the blockchain space, security and safety concerns regarding robots can be handled.
Artificial intelligence will power the robots and continue to be the strength of this integration, while blockchain technology plays a supporting role by ensuring data security and safety. When this convergence is applied to robotics in an integrated manner, robotics will transform and benefit the industry in a remarkably positive way.
Researchers at Technion Center for Security Science and Technology (CSST), Hebrew University and University of Texas at Austin have published a paper (Power to peep-all: Inference Attacks by Malicious Batteries on Mobile Devices) explaining how “poisoned” batteries in smartphones can be leveraged to “infer characters typed on a touchscreen; to accurately recover browsing history in an open-world setup; and to reliably detect incoming calls, and the photo shots including their lighting conditions.” Going further, the researchers also describe how the Battery Status API can be used to remotely capture the sensitive information.
The “attack” starts by replacing the battery in the target smartphone with a compromised battery. Perhaps by poisoning the supply chain, gaining secretive access to the device, or selling the batteries through aftermarket resellers. The specific method is left as a thought exercise, but for the risk analysis, we assume that the battery has been replaced and is thus exploitable.
Smartphone users will tell you that the battery is the most frustrating component of their devices. To improve this experience, smartphone batteries include technology to report on current charge rates, discharge rates, charging method, etc. With this information, the device can provide feedback to the user and change operating behavior to maximize battery life.
This requires a communications channel between the battery and the smartphone, and this is the channel the researchers leveraged to exfiltrate data. The information is not restricted to the operating system alone; it is also exposed through the Battery Status API defined by the W3C, meaning it can be captured by a malicious website if accessed through a vulnerable browser (Chrome). So the attack starts with a compromised battery, leverages the Battery Status API to expose the captured data, and sends it to a malicious website through a vulnerable browser. Lots of moving pieces to line up, but plausible. So what information can be exposed this way?
The researchers showed an ability to identify the characters typed on the screen, identify incoming phone calls, determine when a picture is taken and identify metadata for that photo. The characters being typed aren’t read directly, but the poisoned battery infers what is typed by measuring the effect on battery parameters.
This has an effect on the accuracy of the information being captured. Determining when a picture is taken or when a call is received is accurate 100% of the time. But identifying what characters are typed is only accurate 36% of the time. If the eavesdropper is able to narrow the potential characters being typed, for example, if it is known the person is typing a website URL or booking tickets on a travel website, accuracy increases to 65%.
When considering all of the potential cyber threats that exist, this definitely counts as a low risk. Replacing a cell phone battery is difficult to do without the owner being aware, and even if you manage to change the battery, the information it gathers is prone to error and capturing the information remotely is a complex endeavor. But the risk is tangible, and if not mitigated, it could grow to become significant. Mozilla and Apple have already removed support for the Battery Status API from their browsers, and the W3C organization has updated the Battery Status API specification.
Currently, Chrome is the only “vulnerable” means of exfiltrating the data through this specific attack. However as we have seen repeatedly, once a novel approach is identified, others will expand and evolve the attack. This will be an interesting one to watch.
|
<urn:uuid:e5d9ba63-c949-4a5d-ac02-23ea35bd6f84>
|
CC-MAIN-2022-40
|
https://securityaffairs.co/wordpress/73908/mobile-2/cellphone-battery-data-exfiltration.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00655.warc.gz
|
en
| 0.942306 | 716 | 2.734375 | 3 |
The name of a network interface is a string generated from the interface's attributes. This is the predictable naming scheme used in Linux, which overcomes the shortcomings of the legacy "ethX" naming scheme.
We will use the enp0s3 device to clarify the predictable interface naming scheme in Linux. The “en” stands for Ethernet, “p0” is a bus number of the Ethernet card and “s3” is a slot number. The two-character prefix “en” identifies the type of the interface. For example, the prefix “en” is for Ethernet card type, “wl” for Wireless cards, “sl” for serial lines etc.
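As a toy illustration of the scheme just described, here is a small parser for names of the en/wl/sl + p&lt;bus&gt; + s&lt;slot&gt; form. This is only a sketch; the real predictable naming scheme has more variants (onboard, USB, MAC-based names) than this regex handles.

```python
import re

# Map the two-character prefixes described above to interface types.
PREFIXES = {"en": "Ethernet", "wl": "Wireless", "sl": "Serial line"}

def parse_ifname(name):
    """Split a name like 'enp0s3' into type, bus number and slot number."""
    m = re.fullmatch(r"(en|wl|sl)p(\d+)s(\d+)", name)
    if not m:
        return None  # not a p<bus>s<slot>-style predictable name
    prefix, bus, slot = m.groups()
    return {"type": PREFIXES[prefix], "bus": int(bus), "slot": int(slot)}

print(parse_ifname("enp0s3"))  # {'type': 'Ethernet', 'bus': 0, 'slot': 3}
```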
The legacy interface naming scheme causes problems when there are multiple NICs on a computer because the interfaces are named based on the order in which the Linux kernel finds them during boot. If an interface is removed or added, the previously added interfaces may change names. Therefore, it is no longer recommended to use it, even though the latest Linux distributions still support it.
Fortunately, Linux allows us to create our own interface naming scheme, which, when properly maintained, helps greatly during network monitoring and troubleshooting. It gives administrators the freedom to choose an interface name based on criteria, such as what type of interface it is, where it is located and connected, or what it is used for. This makes interfaces much easier and faster to identify during network monitoring and troubleshooting.
Interface names/descriptions can be obtained from NetFlow starting with v9 (Cisco), NetStream (Huawei), and IPFIX. It should be emphasized that the name and description specify the interface on which the Flow exporter captures packets. The generated flows are then exported to the Flow Analyzer.
NetFlow v9 defines the IF_NAME field type (value 82), which is the abbreviated name of the interface, e.g., “FE0/1”, and the IF_DESC field (value 83), which is the full interface name, e.g., “FastEthernet 0/1” (Figure 1). Similarly, IANA defines element IDs 82 and 83 for IPFIX, which are InterfaceName and InterfaceDescription. Again, the first field is a short name uniquely describing the interface and the second field represents the interface description.
Figure 1 – Cisco NetFlow v9 Fields IF_NAME and IF_DESC
We will illustrate the importance of custom interface naming using the network infrastructure shown in Figure 2. The IPFIX exporter, which in this scenario is nProbe, captures packets on the interface eth0 connected to SPAN port configured on the switch in DMZ. The flows are exported from nProbe to Noction Flow Analyzer (NFA).
Figure 2 – Network Infrastructure with Flow Exporter and Analyzer
Nprobe is started from Linux CLI with the following parameters:
$ sudo nprobe -i eth0 -V 10 -n 10.0.0.1:2055 -T="%SAMPLING_INTERVAL %IN_BYTES %IN_PKTS %IPV4_SRC_ADDR %IPV4_DST_ADDR %IPV4_NEXT_HOP %L4_SRC_PORT %L4_DST_PORT %SRC_VLAN %DOT1Q_SRC_VLAN %SRC_TOS %TCP_FLAGS %PROTOCOL %IP_PROTOCOL_VERSION %DIRECTION %FLOW_START_MILLISECONDS %FLOW_END_MILLISECONDS %IN_SRC_MAC %OUT_DST_MAC %ICMP_TYPE %BIFLOW_DIRECTION %L7_PROTO_NAME %INTERFACE_NAME" -t 60 -d 15 -l 60
-i: interface where packets are captured
-V: flow export version: 10 (IPFIX), 9 (NetFlow v9), 5 (NetFlow v5)
-T: flow template definition. NetFlow v9 and IPFIX flows have a custom format that can be specified at runtime using this option.
-t: maximum flow lifetime
-d: maximum flow idle lifetime
-l: maximum queue timeout
The parameter %INTERFACE_NAME instructs nProbe to include the interface name in exported IPFIX flows (Figure 3). However, the name says nothing about where the packets are being captured. Therefore, we create our own naming scheme by renaming the interface eth0 to dmz0. At first glance, it will then be clear that the port on which nProbe captures packets is connected to the DMZ network.
Figure 3 – The First Flow with Interface Name eth0
1. Renaming Network Interface eth0 to dmz0 on Debian 10 Linux
Many Linux distributions support renaming interfaces to user-chosen names for example internet0, dmz0, lan0 according to their physical locations or MAC addresses as part of their networking scripts. For that, we will create a .link file in the directory /etc/systemd/network/, which chooses an explicit name dmz0 for the interface eth0.
The link file contains a [Match] section, which determines if a given link file may be applied to a given device, as well as a [Link] section specifying how the device should be configured.
NOTE: If the exporter has multiple interfaces to be renamed, a separate link file must be created for each interface.
In the next section, we place a match condition in the link file based on either the physical location of the interface or its MAC address. Both options work well, choose the one you prefer more.
1.1 Getting Physical Location of Device eth0
Udev is the device manager for the Linux 2.6 kernel and later which allows us to identify devices based on their properties, like vendor ID and device ID. It is part of systemd and thus installed by default.
We will use the command udevadm to get the path ID of the device eth0. The Path_ID is a unique identifier of the device which consists of the PCI domain, the bus number, the device number and the PCI device function.
$ sudo udevadm info /sys/class/net/eth0 | grep ID_PATH
Figure 4 – Getting ID_PATH for Device eth0
We can break the device string "0000:00:03.0" down as follows:
0000 – PCI domain
00 – Bus number the device is attached to
03 – Device number
0 – PCI device function
1.2 Getting MAC Address of Device eth0
We need to obtain the MAC address of the interface eth0 (Figure 5), so we can create a Match condition based on the MAC address.
$ ip link show dev eth0
Figure 5 – Checking MAC Address of Interface eth0
1.3 Creating Link File
Link files encode configuration for matching network and must have the extension .link, otherwise, they are ignored. Let’s create a link file with a Match condition and Link section that links a new device named dmz0 to the eth0 interface based on the physical location.
$ sudo vi /etc/systemd/network/10-rename-eth0.link
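The contents of the link file do not appear above; a minimal sketch of what 10-rename-eth0.link could contain for the path-based match, assuming the ID_PATH value 0000:00:03.0 obtained in section 1.1 (udev exposes it with a "pci-" prefix):

```ini
[Match]
Path=pci-0000:00:03.0

[Link]
Name=dmz0
```

The [Match] section decides whether the file applies to a given device; the [Link] section sets the new name.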
In case we want to create a Match condition based on the MAC address of interface eth0, change the above configuration as follows:
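The MAC-based variant is likewise not shown above; a sketch assuming a hypothetical, placeholder address (substitute the address read in Figure 5):

```ini
[Match]
MACAddress=08:00:27:1f:2e:3d

[Link]
Name=dmz0
```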
1.4 Updating Network Configuration
The interface eth0 will be changed to dmz0 after a reboot, so we need to update the network configuration to use the new interface name. Edit the /etc/network/interfaces file, and change all instances of the eth0 interface to the new name – dmz0.
The sed command replaces all the occurrences of the keyword eth0 with dmz0:
$ sudo sed -i 's/eth0/dmz0/g' /etc/network/interfaces
As the next step, we will check if the interface eth0 has been changed to dmz0 (Figure 6):
$ cat /etc/network/interfaces
Figure 6 – Network Configuration Changed
Reboot Debian and verify that the new network interface name dmz0 is in use (Figure 7):
$ ip link show dev dmz0
Figure 7 – Interface dmz0 is in use
Now we can run nProbe with the same parameters as before; however, we need to replace eth0 with dmz0 after the -i parameter.
The exported IPFIX flows now contain the updated interface name, dmz0 (Figure 8).
Figure 8 – Updated IfName Identification Value in IPFIX Flow
InterfaceName and InterfaceDescription are fields within an IPFIX flow that provide the abbreviated and the full name of the interface on which the flow exporter captures packets.
For this reason, it is important to name the interface so that it is immediately clear where the interface is connected, which then helps in network monitoring and troubleshooting. However, both the exporter and the flow analyzer must support these features.
The interface name and description identification via NetFlow, IPFIX, and NetStream is supported in Noction Flow Analyzer starting with v 21.11. This capability is especially helpful at times when the use of SNMP isn’t the best option or is simply not possible.
|
<urn:uuid:f3f8fc95-8b0e-4361-95de-bf6e0f456621>
|
CC-MAIN-2022-40
|
https://www.noction.com/blog/interface-names-and-descriptions-configuration
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00655.warc.gz
|
en
| 0.83181 | 2,041 | 3.203125 | 3 |
Achieving information security is a never-ending challenge as bad actors find ways to get around every new protective layer. Like all other information security technologies, two-factor authentication can be bested by a determined intruder.
Two-Factor Authentication Means Users Need More Than a Password
The idea behind two-factor authentication (2FA) is that passwords by themselves are relatively weak security. Instead of users needing just a password, they need to prove their identity in two different ways. These ways include:
- Something you know, like a password.
- Something you have, like a cellphone that can receive a single-use token.
- Something you are, like your fingerprints or retinal scan.
It’s important to note that a password plus security questions is not an implementation of 2FA; the security questions and the password are both “something you know.” In effect, the security questions are simply secondary passwords.
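The "something you have" factor is often a single-use code generated from a shared secret. Here is a minimal sketch of the standard TOTP scheme (RFC 6238: HMAC-SHA1 over a 30-second time-step counter), using only the Python standard library — illustrative, not any particular vendor's implementation:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC over the time-step counter."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at // step)          # 8-byte big-endian counter
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59))  # 287082
```

Because the code depends only on the shared secret and the clock, a server can verify it without a network round-trip — which is also why a phished code remains usable by an attacker until the time step expires.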
Two-Factor Authentication Is Vulnerable to Attacks
Although 2FA adds an extra layer to security, that doesn’t make it invulnerable. There are several approaches a hacker can use to get past it:
- SIM hacking. In this approach, the bad actor effectively takes over the phone number of the mobile device used as part of the 2FA. This enables them to receive the single-use tokens and login.
- Phishing. Phishing can direct users to malicious sites where single-use passwords are captured. A hacker watching the site in real-time can use the token to access the targeted site before the token expires.
Making Two-Factor Authentication Effective
These vulnerabilities don’t mean that you shouldn’t use 2FA to increase the security of your systems, but it does mean you need to be smart about how you implement it.
In particular, there’s an implementation of 2FA that is not vulnerable to SIM hacking or phishing. Instead of a user providing a token that was sent to them, this implementation requires a hardware key to be plugged into the user’s device. Because of the extra cost and potential inconvenience, this may be most appropriate when you have highly sensitive data to protect. It’s also important to note that at least one version of a hardware key was itself found to be improperly implemented and vulnerable to attacks.
Two-factor authentication should also be integrated into an effective overall information security strategy. Employees need to be trained to detect and avoid phishing emails. Your infrastructure should include firewalls, blacklists, filters, and other controls that help protect employees and their credentials from dangerous sites.
|
<urn:uuid:359f009e-6b0e-41be-9285-a814fd71d381>
|
CC-MAIN-2022-40
|
https://www.ccstechnologygroup.com/two-factor-authentication-has-vulnerabilities-as-well-as-benefits/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00655.warc.gz
|
en
| 0.916606 | 533 | 3.375 | 3 |
What is ICAM (Identity, Credential, and Access Management), and why does it matter to your business? You’re going to get the answers to both those questions. That’s not all. You will also find out the best ways to put this framework into action without putting your technology team under pressure. Start with understanding the fundamentals of the ICAM concept.
What Is ICAM?
The ICAM abbreviation stands for Identity, Credential and Access Management. It is best known as a standard issued by the U.S. General Services Administration, a U.S. government agency. The concept is vital for several reasons. First, it brings together several related ideas into a single overarching security framework. Second, the U.S. government is one of the largest buyers in the marketplace. Therefore, if you want to keep supplying the government, you need to understand their expectations.
At its most basic, ICAM is made up of several interrelated concepts, as stated in the Federal Identity, Credential and Access Management (FICAM) Roadmap and Implementation Guidance.
- Identity Management. According to the roadmap, “The primary goal of identity management is to establish a trustworthy process for assigning attributes to a digital identity and to connect that identity to an individual.”
- Credential Management. This concept applies to a variety of tokens, including digital certifications, smart cards, and cryptographic keys.
- Access Management. The art and science of approving or preventing access to a resource.
- ICAM Intersection. Fundamentally, the aim is to view the above disciplines on a holistic basis. The three areas – identity, credential and access – need to operate together.
Tip: Lifecycle management is a recurring concept for ICAM. It’s not enough to set up accounts and access correctly. You also need to modify, remove and update those accounts and permissions regularly.
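As a toy illustration of the access-management leg — approving or preventing access to a resource — here is a minimal role-based check; the role and permission names are invented for the example:

```python
# Hypothetical role-to-permission mapping; a real ICAM system manages its lifecycle.
ROLE_PERMS = {
    "analyst": {"reports:read"},
    "admin": {"reports:read", "reports:write", "users:manage"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the requested permission."""
    return any(permission in ROLE_PERMS.get(role, set()) for role in user_roles)

print(is_allowed(["analyst"], "reports:read"))   # True
print(is_allowed(["analyst"], "reports:write"))  # False
```

Removing a role from a user in one place revokes every permission it carried — a small instance of the lifecycle management the tip above calls for.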
Why ICAM Matters: The Five Goals ICAM Achieves
Federal agencies need to pursue ICAM implementation for several reasons. Whether you are in government yourself or work closely with the government, it is crucial to understand these goals. In essence, these five goals define the business value for implementing this new security framework.
1) Comply with Federal Laws, Regulations, Standards and Governance Relevant to ICAM
2) Facilitate E-Government by Streamlining Access to Services
3) Improve Security Posture across the Federal Enterprise
4) Enable Trust and Interoperability
5) Reduce Costs and Increase Efficiency Associated with ICAM
In our experience, Goals 1 and 3 might be familiar to many of you. Governments are concerned about security attacks, so Goal 3 makes sense. Likewise, government agencies need to keep up with laws and regulations that may have security requirements. The remaining goals may be less familiar, so let’s consider those in further detail.
Facilitate Access To Services
Security is only partially about keeping out attacks. ICAM is also concerned with facilitating access to the right people. Keep this in mind if you are looking at a security change that would undermine authorized access. From a security performance perspective, make sure you measure your effectiveness in fulfilling user requests. If you lose track of user satisfaction, you will face more complaints.
Resource: Want to improve security performance over time and demonstrate your value to management? Produce a dashboard showing how you are managing security. To get you started, read our short guide to access management key performance indicators. Find Out if Your Access Management Program Is Successful with KPIs.
Enable Trust and Interoperability
These two security concepts are both related. Without trust, it is challenging to interoperate between different systems. If your organization cooperates with other organizations through APIs (application programming interface) and uses SaaS applications, study this principle carefully.
Reducing Cost and Increasing Efficiency
Security expenses are never decided in the abstract. This principle requires that security operations and systems also need to keep costs and efficiency in mind. In our view, security efficiency is one of the essential quick wins. When you demonstrate that you are using your IT security budget effectively, executives are more likely to approve future budget requests. In the short term, increasing efficiency also means you can protect more of your organization’s assets with the same budget.
Why Cost And Efficiency Are The Silver Bullet In Reaching ICAM Success
While every principle matters and contributes to security success, there is one principle that plays a bigger role compared to everything else. That principle is efficiency! Without this in place, you will drown in work. There are simply never enough hours in the day to design, build and fully implement a full security system.
There are a few tools available that make ICAM easier to implement. To get you started, look at the following options. First, implement a single sign-on solution so your users will have fewer passwords to memorize. Single sign-on helps your end-users become more productive. You also need to look for ways to support your managers and support functions. That’s where using a group approach to identity management helps. Use Group Enforcer to simplify how you manage groups of users.
Reduce User Service Costs
In traditional IT security arrangements, end users have to wait on the phone and talk to somebody to get help. That approach does not scale up. What if you need to support people outside of traditional business hours? In that situation, you need a self-serve software solution like Apollo. This A.I. virtual chatbot is designed to help you with repetitive security tasks like managing password resets.
Why ICAM Matters Even If You Have Nothing To Do With The U.S. Government
We get it. U.S. government standards documents do not make for exciting reading. If you directly serve the U.S. government as a vendor, you may have no choice but to keep up with their requirements. Does that mean you can or should ignore ICAM in other cases? The answer is no. ICAM is a helpful standard to consider as you build your company’s IT security strategy. In technology management, it is easy to become narrowly focused on achieving your immediate goals. For example, you might decide to create a balanced scorecard for IT security based on the five ICAM goals.
|
<urn:uuid:3b84bf10-80bb-4d1b-ac53-3dd1c58fd057>
|
CC-MAIN-2022-40
|
https://www.avatier.com/blog/icam-identity-credential-and-access-management/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00655.warc.gz
|
en
| 0.940649 | 1,297 | 2.953125 | 3 |
Technologically inclined businesses and other organizations have long enjoyed what I call IT “trickle down”: continuing, rapid development results in the mainstreaming of hardware, software and services that were originally unthinkably expensive and specialized. This doesn’t mean that higher-end products ever disappear.
In fact, the baseline performance of enterprise solutions typically ratchets ever upward. However, the effective result everywhere else is to put what were once radically powerful and unaffordable tools into the hands of most any business.
Supercomputing — along with associated high-performance and technical computing — shows how this works. Once, these technologies were almost completely relegated to well-financed government and university labs and major enterprise data centers. The continual evolution of Intel’s x86 microprocessor architecture, along with complementary clustering, grid and virtualization technologies, made x86 the dominant player in modern supercomputing that it is today.
The latest Top500.org list of the world’s best supercomputers, published in June, provides clear evidence of this. The top-rated supercomputer — the “Sequoia” installation at the DOE’s Lawrence Livermore National Laboratory — is an IBM BlueGene/Q system based on the company’s Power architecture, as are three of the list’s other top 10 systems. However, five of the top 10 utilize Intel Xeon or AMD Opteron processors. More importantly, out of all the systems on the latest Top500.org list, nearly 87 percent (434) are x86-based.
Moreover, the users of these systems have changed significantly. In 1993, when Top500.org began collecting supercomputer statistics, fewer than a third of the systems were being used in industrial settings. Today, more than half of the world’s fastest computers are being used by enterprises. Notable changes have also occurred elsewhere, as highly scalable and affordable x86-based technologies have taken supercomputing and HPC deep into the commercial market.
There’s Something About Dell
What does any of this have to do with Dell’s new C8000 Series? Quite simply, these new solutions are designed to extend the company’s already substantial hyper scale computing portfolio into new areas.
Dell launched its Data Center Solutions (DCS) group in 2007 to focus on the emerging commercial hyper scale market, and the company has done very well overall. IDC’s analysis of FY 2011 worldwide server sales revenues placed Dell firmly in first place in Density Optimized (IDC’s term for hyper scale) system revenues with a 45.2 percent share (HP was a distant second with a mere 15.5 percent). While the segment’s FY2011 revenues totaled less than US$2 billion (compared to the worldwide x86 server market’s $34.4 billion), IDC said that demand for Density Optimized systems grew by a robust 33.8 percent in FY2011 compared to just 7.7 percent for x86 solutions.
Dell means for the new C8000 Series to expand its leadership position by using highly configurable, flexibly deployable solutions to widen the pool of hyper scale use cases and potential customers. Along with typical HPC, Web 2.0 and hosting applications, the C8000 Series can also support both parallel processing-intensive scientific visualization workloads and the high-volume storage demands of Big Data applications.
Plus, the new systems take full advantage of Dell’s innovative work in fresh air cooling, which allows servers to be deployed without costly air conditioning systems or cooling upgrades. In addition, they can be placed in nontraditional settings, including Dell’s innovative Modular Data Center infrastructures. That means that Dell’s C8000 Series is likely to find fans among a variety of organizations, including new and even smaller companies investigating the hyper scale market.
The new Dell systems should also pique the interest of longtime HPC and technical computing players. In fact, the Texas Advanced Computing Center (TACC) is an early advocate of the C8000 Series and is basing its upcoming petascale Stampede installation on “several thousand PowerEdge C8000 servers with GPUs to help speed scientific discovery.” When it opens for business in 2013, Stampede will qualify as the most powerful system in the National Science Foundation’s eXtreme Digital (XD) program with a peak performance of 10 petaflops, 272 terabytes of total memory and 14 petabytes of disk storage.
So how big a deal is Dell’s C8000 Series? Some will suggest that the small size of the hyper scale market (at least compared to general purpose server opportunities) makes any effort small potatoes. That may be true in today’s dollars but makes less sense looking ahead. Several of the use cases for the C8000 Series — hosting, Web 2.0 and Big Data, in particular — are growing rapidly, and interest in commercial HPC and scientific computing applications is also robust.
Given the development of these markets over the past half-decade and the promise of their continuing growth, Dell’s 2007 entry into hyper scale solutions looks extremely far-sighted. Given the company’s longstanding investments in that effort, its resulting leadership position is hardly a surprise. The new C8000 Series proves Dell is continuing to look forward and developing solutions its customers will need tomorrow but can also use quite handily today.
|
<urn:uuid:edd3fab2-8d55-4342-b78e-b3358186e22f>
|
CC-MAIN-2022-40
|
https://www.linuxinsider.com/story/dell-takes-the-long-view-with-hyper-scale-computing-76233.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00055.warc.gz
|
en
| 0.919304 | 1,159 | 2.546875 | 3 |
Scale your network ability with artificial intelligence!
Software defined networking (SDN) uses software — instead of more traditional hardware-based methods — to define both networking architecture and network control. While this has major, favorable implications for the scalability of networks, the promise of maintaining global awareness by abstracting control away from the data plane is revolutionary for the networking space. The possibilities of interacting with networks programmatically have opened new venues of research using artificial intelligence (AI), with new research papers showing tangible, positive impacts. While promising, most results today remain limited to the academic and research space with few business solutions leveraging the full extent of the mixed SDN-AI model potential.
Artificial intelligence as it applies to SDN can be divided into three major categories: machine learning (including supervised, unsupervised and reinforcement learning), meta-heuristics and fuzzy inference. Each of these three major categories are discussed below along with current technique examples.
Supervised machine learning has been used primarily to create intrusion detection/prevention systems, and perform load balancing, application identification, optimal virtual machine placement and packet/traffic classification. Example models of supervised machine learning include neural networks, decision trees and supervised deep learning.
Unsupervised machine learning is used primarily for denial-of-service (DDoS) detection but has also been used to optimize WiFi infrastructure using clustering. Increased download times and lower packet error rates are two benefits of clustering. Unsupervised learning has been successfully utilized to detect advanced persistent threats (APT) and perform security assessments on a network. Common algorithms are k-means, self-organizing maps, restricted Boltzmann machines, hidden Markov models and unsupervised deep learning.
Reinforcement learning has been very successful in the SDN space. It has been used for a wide range of applications including routing, adaptive streaming, intelligent architecture systems, network management and traffic engineering.
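As a hedged sketch of how reinforcement learning can apply to routing — not taken from any of the cited papers — tabular Q-learning can learn the lowest-latency path through a small topology. The graph and per-hop latencies below are invented for illustration:

```python
import random

# Toy topology: adjacency list with per-hop latencies (hypothetical values).
GRAPH = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 7},
    "C": {"D": 2},
    "D": {},
}

def q_route(graph, src, dst, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning: reward = -latency, so the greedy policy minimizes delay."""
    q = {n: {nbr: 0.0 for nbr in nbrs} for n, nbrs in graph.items()}
    random.seed(0)
    for _ in range(episodes):
        node = src
        while node != dst and graph[node]:
            nbrs = list(graph[node])
            # epsilon-greedy: explore a random next hop or exploit the best known one
            a = random.choice(nbrs) if random.random() < eps else max(nbrs, key=q[node].get)
            best_next = max(q[a].values(), default=0.0)
            q[node][a] += alpha * (-graph[node][a] + gamma * best_next - q[node][a])
            node = a
    # Greedy rollout over the learned Q-table
    path, node = [src], src
    while node != dst:
        node = max(graph[node], key=q[node].get)
        path.append(node)
    return path

print(q_route(GRAPH, "A", "D"))  # ['A', 'B', 'C', 'D']
```

With reward set to negative latency, the greedy policy converges on A→B→C→D (total latency 5) rather than the fewer-hop A→C→D (latency 7) — the kind of delay-aware routing decision the cited research explores at scale.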
Meta-heuristics are sets of algorithms including ant colony optimization, evolutionary algorithms, simulated annealing and genetic algorithms. These systems have been extremely successful in maximizing network utilization, load balancing, routing, virtual network planning and security (including new network defense techniques to prevent DDoS attacks or eavesdropping). Ant colony optimization in particular has been shown to outperform Dijkstra’s algorithm for routing and load balancing.
Fuzzy inference has been used successfully by researchers. Fuzzy inference techniques have been used to introduce new protocols, perform intrusion detection and create optimal network deployments. AI applied to the SDN domain is an active area of research with new techniques still being discovered. The implications of its application to network optimization, allocation and security are significant. As models become more reliable and continue to excel at networking tasks, it is only a matter of time before businesses begin to adopt these techniques as powerful new tools in their infrastructure toolkit!

Reference: Latah, Madj and Toker, Levent. “Artificial Intelligence Enabled Software Defined Networking: A Comprehensive Overview”. IET Networks, November 6, 2018.
Blog author: Brad Mascho, Chief Artificial Intelligence Officer, NCI
|
<urn:uuid:2bcbaac1-383e-4558-bb97-38cbda52bd2a>
|
CC-MAIN-2022-40
|
https://www.nciinc.com/post/ai-in-software-defined-networking
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00055.warc.gz
|
en
| 0.921417 | 652 | 2.9375 | 3 |
Wednesday, September 28, 2022
Published 2 Years Ago on Tuesday, Sep 22 2020 By Adnan Kayyali
Sport has contributed to society in many ways by bringing people and communities together. However, the pandemic put a sudden stop to that and devastated an industry that once thrived on human interaction. With many venues temporarily closed, the future of sport might seem bleak… But all is not lost.
In the UK, a country known for its love of sports, trial games are currently taking place in various stadiums across the country. The goal is to reopen stadiums at a safe capacity while adhering to COVID-19 safety measures and precautions.
This endeavor is supervised by the Department for Digital, Culture, Media and Sport (DCMS) and the Sports Grounds Safety Authority (SGSA), along with numerous tech companies bringing what they know to the table, all to allow fans to continue enjoying sports during COVID-19.
“Their major question and challenge is: what can be done to safely embed social distancing into the match-day operation? Each stadium is different and each one needs different solutions,” says Will Durden, director at Momentum Transport Consultancy.
The answer is apparently the use of data analytics, 3D imaging, contact tracing and simulations to map out, predict and organize everything that goes into a stadium. From the routes people take to their seats, how far away each ‘safety bubble’ is from the other, the path from the entrance to the ticket booth, the snack vendor or to the bathroom.
With each stadium being unique in its architecture, each plan will need to be customized. By analyzing large amounts of old footage, AI can observe the movements of people in the stadium and determine the best paths for keeping people apart at all times.
The software being used can identify bottlenecks and areas of possible congestion around the stadium, and through contact tracing apps exclusive to the stadium, people can be notified of anything that goes on. When people order food from the app, they are notified when it is ready, and the rate of orders is managed to prevent long queues.
People who wish to watch sports during COVID-19 can be confident that the venue they choose has been prepared with their safety in mind. With such technologies being driven by the passion of sports fans, it is exciting to see how these inventions will shape sports events in the not-so-distant future.
There is simply no all-in-one solution when it comes to security. The growing sophistication of hackers, combined with human error and internal threats, means every network is vulnerable. While many companies still rely on traditional security methods, such as firewalls and anti-virus solutions, they need to be prepared for when (yes, when) a hacker makes it through perimeter defenses, or a rogue employee decides to take data for personal gain.
While security teams are always trying to prevent those attempting to enter an organization's network, they are all too often left helpless if the intruder makes it through undetected. Once a hacker gets into a network, it often takes weeks, months, or years for an organization to realize what has happened and properly assess the damage. At this point, damage control is the only option. However, with deception technology, a breach doesn't have to mean "game over"; organizations are able to regain the upper hand once the attacker or malware has entered the network.
Per the name, deception technology is all about deceiving intruders so they are unable to find what they are looking for, which in most cases is sensitive data that can be sold or used for monetary gain, or data that can be encrypted to demand a ransom (ransomware). While the basics are easy to understand, the sections below break down the three processes that make up deception technology: trap, monitor, and deceive.
- Trap: Creating a bait to lure them in
Deception technology confuses hackers into accessing decoys inside the organization's network. These decoys mimic servers, endpoints, and devices in the organization. But how do they trap the hackers into the decoys? When hackers enter an organization, they start looking for valuable information, including cookies, passwords, emails with credentials, and account names and passwords. Deception technology plants fake information on these assets that leads the intruder into the decoy systems. An advanced deception solution learns the landscape of the network and strategically places the traps in the areas most saturated with data to lead the hackers to the decoys and away from the sensitive information.
In order for the traps to work properly, they must blend into the network assets, be non-intrusive, and make it impossible to differentiate between them and the real data. The challenge is to lure the attacker into the traps while ensuring the actual user of the asset does not touch the planted decoys. Once attackers take the bait and land on a decoy, they will continue to engage with it, thinking they are getting closer to the information they want, while in reality they are trapped in a mock network that is being carefully monitored by the security team.
Based on the learning of the network and the traffic monitoring, these decoys will begin to match the assets in the network, as well as adapt themselves to the activity of the attacker and respond accordingly. As the decoys detect changes in the organization’s environment, they add traps and applications to adjust accordingly.
- Monitor: Getting a bird’s eye view
What makes deception technology so adaptable and accurate is the ability to constantly monitor the network. While hackers continue to take the bait, they begin to leave a trail outlining their path on the network – a footprint of actions that gives the security team insight into the hacker’s every move. Security teams are able to study the methods used and proactively map out which decoys were most enticing.
With detailed forensics, the security admin has the ability to closely monitor the intruders in a closed environment – made up of decoys and traps – providing insight and relevant data on their purpose of entering the network and how they planned on retrieving the desired information based on their interaction with the decoy system.
With this information at hand, security teams can identify the behavior of an intruder in ways that are harder with other forms of cyber defense, as well as expose the network blind spots that allowed the intruder in. And the visibility into the intruder's actions on the network makes stopping the damage easier. In fact, according to the Ponemon Institute, 76 percent of organizations cite lack of visibility as the biggest obstacle to remediating advanced threat attacks. The more visibility, the better.
The longer the security team monitors hackers, the more information is available to stop them in their tracks. The information gained during the interaction can be shared with other security tools in order to enrich the organization's threat intelligence. As intruders continue to engage with the decoys, the security team can begin to plan how to defeat them. The more they learn, the easier it is to defeat the threat; it's as if the student becomes the master.
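The trap-and-monitor loop described above can be illustrated with a toy example. The snippet below is not from any real deception product; it is a hypothetical minimal sketch of the idea: a fake service that presents a plausible banner, then records the source address and first payload of anyone who touches it.

```python
import datetime
import socket
import threading

def run_decoy(host="127.0.0.1", port=0, banner=b"220 FTP server ready\r\n", max_conns=1):
    """Start a minimal decoy service in a background thread.

    Returns (bound_port, log, thread). Every connection receives the fake
    banner, and its source address plus first payload are appended to log.
    """
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            conn.sendall(banner)            # look like a real service
            data = conn.recv(1024)          # capture the intruder's first move
            log.append({
                "time": datetime.datetime.utcnow().isoformat(),
                "source": addr[0],
                "payload": data.decode(errors="replace"),
            })
            conn.close()
        srv.close()

    thread = threading.Thread(target=serve, daemon=True)
    thread.start()
    return bound_port, log, thread
```

A real deception platform adds fidelity (decoys that adapt to the network), scale, and alerting, but even this sketch shows the key asymmetry: any interaction with the decoy is, by definition, suspicious.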
- Deceive and Detect: Exposing the hacker quickly and efficiently
In the end, the goal is to properly detect intruders, slowing their movement through the organization until they are completely stopped. While the traps and decoys confuse and deceive hackers, the damage is not completely prevented unless the hacker is detected and stopped in their tracks.
Once the infiltrators are trapped, the security team can lock down their network, patch any areas needed and ensure that the hacker isn’t able to compromise the system. The more insight the security team has, the quicker they can prevent damage and catch the perpetrator. As cyber threats continue to increase and become more sophisticated, speed is a must when protecting a network.
Today’s threat landscape requires a proactive approach to cybersecurity, and deception technology, which can work within an existing security posture, should be a part of every organization’s security armor. By having the tools to trap, monitor and learn, and finally deceive and defeat intruders, security teams can ensure that their data will remain safe and that they have the insight needed to continuously improve their network security against evolving threats.
I was speaking with a client last week and I mentioned the phrase "terminal server," shortly followed by referring to it as a "server." He stopped me cold and asked the question above.
You might wonder the same thing; if not, you have already moved on to the next cat video on YouTube. If you have ever had this question, let me give you the "I HATE I.T." version of the answer:
- A server is meant to be a device that serves other computers/peripherals/users.
- A workstation is designed to do work for the user who is accessing it.
HERE COMES THE GEEK TALK
We talk about all different kind of servers:
- Active Directory Server
- File and Print Server
- Database Server
- Application Server
- Mail Server
- FTP Server
- Web Server
- Virtual Server
Of course I made a list. This is all IT guys know how to do. With all of that geek speak I want to point out two items:
LET'S TALK VIRTUAL
A physical server is a device you buy from Dell/HP/Lenovo and the cost ranges from ~5K to ~25K and up. The physical server has processor/RAM/Storage and some other options.
A virtual server is an operating system running using the processor/RAM/storage from the physical server above. The virtual server can dynamically allocate these resources. This is helpful because you can run 2-3 or more virtual servers on a single physical server. Virtual servers have become the standard for server deployments.
Imagine you have a giant 100K sq. ft. warehouse. You wanted to have 10 different types of storage in the warehouse. Doing the quick math, that is 10K sq. ft. for each category. However, with a virtual system you can dynamically allocate more resources to different categories. You can have Category A take 35K sq. ft. when it needs it. All of this is managed by the virtual server.
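To make the warehouse analogy concrete, here is a tiny hypothetical Python sketch (not anything a real hypervisor actually runs) of dynamic allocation: each category (or virtual server) gets what it asks for, and if total demand outstrips the physical total, everyone is scaled back proportionally.

```python
def allocate(total, demands):
    """Toy dynamic allocator: grant each requester its demand, scaling
    all shares down proportionally if the physical total is oversubscribed."""
    requested = sum(demands.values())
    if requested <= total:
        return dict(demands)                 # room for everyone as-is
    scale = total / requested                # oversubscribed: shrink shares
    return {name: demand * scale for name, demand in demands.items()}

# 100K sq. ft. warehouse; Category A temporarily needs 35K sq. ft.
print(allocate(100_000, {"A": 35_000, "B": 10_000, "C": 10_000}))
# → {'A': 35000, 'B': 10000, 'C': 10000}
```

The point of the sketch is the flexibility: no category is locked into a fixed 10K sq. ft. slice the way dedicated physical hardware would be.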
OK THIS IS REALLY COOL
The next item to consider is the terminal server.
This term has been retired by Microsoft, and now they use the term Remote Desktop Server. A Remote Desktop Server, accessed via the Remote Desktop Protocol (RDP), allows a user to remote in and work on the server instead of working on their own computer.
When you use a computer you need to have an operating system, applications, application settings, email, documents, web browsing and other functions. When you use a RDP server all of the computing functions are on the server.
This method of computer usage has been popular for a long time but has become more of a standard. The process allows the user to never worry about:
- Physical hardware failing and losing data
- You can open a document, close your session, come back 30 minutes later, and the session will be in the same spot
- Central access to data. You don’t have to be at your work desktop in your office to access your data. All of this data is kept on the server.
- Reduces the amount of computing power needed at your desktop, so you don't have performance issues.
NOW YOU KNOW
I am sure after reading this little bit of info you are going to have something to talk about the next time you are out with friends. Or if you are not a geek and don’t talk about RDP servers in a social environment you might at least have more info about some options for your system.
These two types of servers, virtual and terminal, are what make the cloud such an attractive offering for a small business. You can always contact IRIS Solutions for more info.
Trojan.Agent Removal Guide
What is Trojan.Agent?
Trojan.Agent is a generic definition of particular malware that can be set to do a variety of malicious tasks on the infected machine
Trojan.Agent is a type of malicious software that uses deception to access users' Windows machines
Trojan.Agent is a heuristic detection name of a Trojan malware category and is often used by a variety of anti-virus engines. Generic names are given to distinct malware family members or when the threat has never been analyzed before, and no detailed information is available. Nevertheless, even then, most of the reputable anti-malware programs are capable of detecting and stopping Trojan.Agent from infiltrating the computer by using machine-learning and other sophisticated methods.
The main purpose of Trojan.Agent is to access the machine while pretending to be something else – in other words, Trojans are pieces of malicious software that disguise as something desirable, such as an application or an email attachment from a seemingly legitimate source. Once inside, the system, Trojan.Agent virus can be set to perform a variety of activities, including stealing information, a proliferation of other malware, logging keystrokes, send spam, and much more.
|Description|Trojans can represent a wide variety of malware, as their main goal is to gain access to the computer by pretending to be something else|
|Alternative names|Trojan:W32/Agent, Win32.Trojan.Agent|
|Infiltration|Malware is usually downloaded from malicious websites, distributed via malicious email attachments, fake updates, scam sites, etc.|
|Symptoms|While Trojans usually lack visible symptoms, users might notice the signs listed below|
|Removal|Use reputable anti-malware software to perform a full system scan in Safe Mode as explained below|
|Recovery|In case you experience system instability after you terminate the infection, use ReimageIntego to fix virus damage automatically|
A Trojan.Agent virus belongs to the "Agent" family of malware, which can also be attributed to such threats as worms, backdoors, and rootkits. In other words, the name "trojan" defines its distribution tactic (it tries to present itself as something else), but its functions may vary greatly, so the name can also be applied to threats like worms, ransomware, etc. Due to this, Trojan.Agent removal methods may vary, as each of the threats might be set to perform different actions on the infected machine.
Some Trojan.Agent variants leverage ad revenue for their authors, all while the victim copes with high computer resource usage, sluggish browsers, and other issues. Therefore, despite popular belief, an increased amount of ads does not always mean an adware or browser hijacker infection; it can also be a sign of a Trojan.
Trojan.Agent, depending on its aim, can also sometimes show the following symptoms:
- Increased CPU usage
- Slowness of the machine
- System crashes
- Disabled anti-malware software
- Application launch failure
- Unknown programs running on the computer, etc.
In addition to these "visible" symptoms, Trojan.Agent typically drops a variety of malicious files on the system (in places like %AppData% or %Temp%), spawns various processes, modifies Windows registry keys, and performs many other technical changes to Windows. However, these are not that easy to spot for novice and regular computer users.
Trojan.Agent is a generic virus that can perform a variety of malicious activities on the host machine
Therefore, the best way to detect and remove Trojan.Agent from the system is by employing a reputable anti-malware program and performing a full system scan. Note that because trojans can be employed to do just about anything, it is not impossible that it was used to insert other malicious software on your system. In such a case, anti-malware would get rid of that as well.
Because the Trojan.Agent virus tends to modify various system files, it might leave the computer damaged even after it is eliminated. As a result, your Windows might start crashing, throwing BSODs, and generally malfunctioning, which could leave you with one option: re-installation of the operating system. To avoid that, we recommend using a PC repair tool such as ReimageIntego, which can fix virus damage and restore Windows operation to the pre-infection stage.
Trojans can be distributed in various different ways
Trojan.Agent is distributed using methods that have been widely used by virus creators: it can infiltrate your computer after you visit a malicious website filled with infected installers, click on a misleading pop-up ad claiming that you need to update one or several of your programs, or open an infected email attachment that holds macro code inside. Typically, phishing email authors employ social engineering techniques to convince users to open a malicious attachment, so it is important not to do that, even if the email sounds convincing.
Also, downloading illegal programs and cracks can also increase the possibility of downloading Trojan.Agent to your PC system. If you have already noticed that your machine runs slower than it used to run or other symptoms of this cyber threat, you should check your computer for this Trojan horse. Otherwise, you may doom your machine for more serious infections and other issues.
Trojan.Agent can be distributed via spam email attachments
Remove Trojan.Agent from your computer to prevent its compromise
In order to remove Trojan.Agent virus from the system, you should rely on reputable anti-malware software, as tracking all the changes made by it manually would be almost impossible for a regular computer user. However, there are several things to keep in mind before proceed with its termination – for example, the malware might disable your anti-virus to stay on the system as long as possible. Additionally, it could load other malware payloads.
Therefore, Safe Mode might be a mandatory option for a full Trojan.Agent removal – simply follow the guide below. This mode only loads the necessary drivers and processes in order to launch the OS, so malware components are not operational.
Additionally, as a precautionary measure, we also recommend resetting all the installed browsers and resetting all passwords and checking the online banking for illegal money transfers.
Getting rid of Trojan.Agent. Follow these steps
Manual removal using Safe Mode
Safe Mode is an excellent tool when trying to battle malware. Access it if Trojan.Agent removal is causing you troubles:
A manual removal guide might be too complicated for regular computer users. It requires advanced IT knowledge to perform correctly (if vital system files are removed or damaged, it might result in full Windows compromise), and it can take hours to complete. Therefore, we highly advise using the automatic method provided above instead.
Step 1. Access Safe Mode with Networking
Manual malware removal should be best performed in the Safe Mode environment.
Windows 7 / Vista / XP
- Click Start > Shutdown > Restart > OK.
- When your computer becomes active, start pressing F8 button (if that does not work, try F2, F12, Del, etc. – it all depends on your motherboard model) multiple times until you see the Advanced Boot Options window.
- Select Safe Mode with Networking from the list.
Windows 10 / Windows 8
- Right-click on Start button and select Settings.
- Scroll down to pick Update & Security.
- On the left side of the window, pick Recovery.
- Now scroll down to find Advanced Startup section.
- Click Restart now.
- Select Troubleshoot.
- Go to Advanced options.
- Select Startup Settings.
- Press Restart.
- Now press 5 or click 5) Enable Safe Mode with Networking.
Step 2. Shut down suspicious processes
Windows Task Manager is a useful tool that shows all the processes running in the background. If malware is running a process, you need to shut it down:
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Click on More details.
- Scroll down to Background processes section, and look for anything suspicious.
- Right-click and select Open file location.
- Go back to the process, right-click and pick End Task.
- Delete the contents of the malicious folder.
Step 3. Check program Startup
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Go to Startup tab.
- Right-click on the suspicious program and pick Disable.
Step 4. Delete virus files
Malware-related files can be found in various places within your computer. Here are instructions that could help you find them:
- Type in Disk Cleanup in Windows search and press Enter.
- Select the drive you want to clean (C: is your main drive by default and is likely to be the one that has malicious files in).
- Scroll through the Files to delete list and select the following:
Temporary Internet Files
- Pick Clean up system files.
- You can also look for other malicious files hidden in the following folders (type these entries in Windows Search and press Enter):
After you are finished, reboot the PC in normal mode.
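The steps above rely on built-in Windows tools, but the underlying idea, looking for recently dropped files in folders like %Temp%, can be sketched in a few lines. The snippet below is a hypothetical, cross-platform illustration of that idea, not a replacement for an anti-malware scan:

```python
import os
import tempfile
import time

def recently_modified(root, days=7):
    """Return paths under `root` whose files were modified in the last `days` days."""
    cutoff = time.time() - days * 86400
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) >= cutoff:
                    hits.append(path)
            except OSError:
                continue  # file vanished or is locked; skip it
    return hits

# Example: list anything dropped into the system temp directory in the last 2 days
for path in recently_modified(tempfile.gettempdir(), days=2):
    print(path)
```

A freshly modified executable in a temp folder is not proof of infection, but it is exactly the kind of lead an anti-malware scan, or a manual inspection in Safe Mode, can follow up on.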
Finally, you should always think about protection against crypto-ransomware. To protect your computer from Trojan.Agent and other malware, use reputable anti-spyware software such as ReimageIntego, SpyHunter 5, Combo Cleaner, or Malwarebytes.
How to prevent from getting trojans
Choose a proper web browser and improve your safety with a VPN tool
Online spying has gained momentum in recent years, and people are getting more and more interested in how to protect their privacy online. One of the basic means to add a layer of security is choosing the most private and secure web browser. Although web browsers can't grant full privacy protection and security, some of them are much better at sandboxing, HTTPS upgrading, active content blocking, tracking blocking, phishing protection, and similar privacy-oriented features. However, if you want true anonymity, we suggest you employ a powerful Private Internet Access VPN; it can encrypt all the traffic that comes to and goes out of your computer, preventing tracking completely.
Lost your files? Use data recovery software
While some files located on any computer are replaceable or useless, others can be extremely valuable. Family photos, work documents, school projects – these are types of files that we don't want to lose. Unfortunately, there are many ways how unexpected data loss can occur: power cuts, Blue Screen of Death errors, hardware failures, crypto-malware attack, or even accidental deletion.
To ensure that all the files remain intact, you should prepare regular data backups. You can choose cloud-based or physical copies you could restore from later in case of a disaster. If your backups were lost as well or you never bothered to prepare any, Data Recovery Pro can be your only hope to retrieve your invaluable files.
A campus network is generally the portion of the enterprise network infrastructure that provides access to network communication services and resources to end users and devices that are spread over a single geographic location. It may be a single building or a group of buildings spread over an extended geographic area. Normally, the enterprise that owns the campus network usually owns the physical wires deployed in the campus. Therefore, network designers typically tend to design the campus portion of the enterprise network to be optimized for the fastest functional architecture that runs on high-speed physical infrastructure (1/10/40/100 Gbps). Moreover, enterprises can also have more than one campus block within the same geographic location, depending on the number of users within the location, business goals, and business nature. When possible, the design of modern converged enterprise campus networks should leverage a common set of engineering and architectural principles.
Enterprise Campus: Hierarchical Design Models
The hierarchical network design model breaks the complex flat network into multiple smaller and more manageable networks. Each level or tier in the hierarchy is focused on a specific set of roles. This design approach offers network designers a high degree of flexibility to optimize and select the right network hardware, software, and features to perform specific roles for the different network layers.
A typical hierarchical enterprise campus network design includes the following three layers:
- Core layer: Provides optimal transport between sites and high-performance routing. Due the criticality of the core layer, the design principles of the core should provide an appropriate level of resilience that offers the ability to recover quickly and smoothly after any network failure event with the core block.
- Distribution layer: Provides policy-based connectivity and boundary control between the access and core layers.
- Access layer: Provides workgroup/user access to the network.
The two primary and common hierarchical design architectures of enterprise campus networks are the three-tier and two-tier layers models.
This design model, illustrated in Figure 3-1, is typically used in large enterprise campus networks, which are constructed of multiple functional distribution layer blocks.
Figure 3-1 Three-Tier Network Design Model
This design model, illustrated in Figure 3-2, is more suitable for small to medium-size campus networks (ideally with no more than three functional distribution blocks to be interconnected), where the core and distribution functions can be combined into one layer, also known as the collapsed core-distribution architecture.
Figure 3-2 Two-Tier Network Design Model
Artificial Intelligence (AI) is shaping an increasing number of sectors globally. Degradation of the natural environment and the climate crisis are complex issues requiring the most advanced and innovative solutions. AI is expected to impact environmental, financial, and job stability, amongst other areas in the future.
But, how much can AI really help contribute to the climate crisis?
Table of Contents
Environmentally, Artificial Intelligence can aid management across agriculture, water, energy, and transport.
In agriculture, AI can better monitor environmental conditions and crop yields. For water resource management, AI can help to reduce or eliminate waste while lowering costs and lessening environmental impact, such as AI-driven localized weather forecasting to help restrict water usage. AI can also manage the supply and demand of renewable energy using deep learning, predictive capabilities, and intelligent grid systems. Finally, AI can help reduce traffic congestion, improve cargo transport, and enable autonomous (or self-driving) cars.
According to Microsoft and PwC UK, using AI for these environmental applications could contribute $5.2 trillion to the global economy in 2030. Also, AI application could reduce worldwide greenhouse gas (GHG) emissions by 4% in 2030, equivalent to the 2030 annual emissions of Australia, Canada, and Japan combined.
This positive impact on the environment somewhat explains the broad harnessing of AI to contribute to managing environmental and climate change.
As a result of the environmental applications, AI could boost global GDP by 3.1 – 4.4% (Microsoft) and can generate a global economic uplift, yielding approximately US$3.6 – 5.2 trillion driven by optimized inputs, higher output productivity, and automation of manual tasks.
More generally, AI technology can help companies encourage fast consumer decision-making and detect fraud and financial crime through machine learning. For example, automated wealth management services (robot advising) and algorithmic trading are helping financial institutions to optimize financial decisions; and ‘smart ledger’ technology could support the take-up of collective defined contribution (CDC) schemes.
However, while AI promises to increase financial stability through minimized error margins, it brings new risks such as interconnectedness between financial markets and confusion regarding machine learning decision-making processes when working with AI. Therefore, macro-level standards need to be implemented, and regulators need to tighten governance on the use of AI by companies (Parker Fitzgerald) to mitigate these risks.
There is no denying that smart machines will make today's jobs more efficient. However, humans are more likely to work with smart machines in the digital enterprises of the future than to be replaced by them.
The AI applications to agriculture, water, energy, and transport will also create 18.4 – 38.2 million net jobs globally (broadly equivalent to the number of people currently employed in the whole of the UK), offering many skilled jobs. And this is just the beginning. If this many jobs are being created in these sectors alone, the possibilities across industries globally are substantial.
Therefore, companies need to train employees to work alongside machines rather than creating a fear culture that jobs will become irrelevant.
In addition to the ways highlighted in this article, there are many more ways that AI will enable a sustainable future. Companies will be looking to transition to more sustainable and efficient working practices, requiring workforces with the skills to support these changes. Therefore, as a society, we must embrace the future of AI to reap its financial, employment, and environmental sustainability benefits, as well as the personal advantages it brings to our health, well-being, and lifestyle.
This chapter covers the following topics:
EIGRP Fundamentals: This section explains how EIGRP establishes a neighborship with other routers and how routes are exchanged with other routers.
EIGRP Configuration Modes: This section defines the two methods of configuring EIGRP with a baseline configuration.
Path Metric Calculation: This section explains how EIGRP calculates the path metric to identify the best and alternate loop-free paths.
Enhanced Interior Gateway Routing Protocol (EIGRP) is an enhanced distance vector routing protocol commonly found in enterprise networks. EIGRP is a derivative of Interior Gateway Routing Protocol (IGRP) but includes support for variable-length subnet masking (VLSM) and metrics capable of supporting higher-speed interfaces. Initially, EIGRP was a Cisco proprietary protocol, but it was released to the Internet Engineering Task Force (IETF) through RFC 7868, which was ratified in May 2016.
This chapter explains the underlying mechanics of the EIGRP routing protocol and the path metric calculations, and it demonstrates how to configure EIGRP on a router. This is the first of several chapters in the book that discuss EIGRP:
Chapter 2, “EIGRP”: This chapter describes the fundamental concepts of EIGRP.
Chapter 3, “Advanced EIGRP”: This chapter describes EIGRP’s failure detection mechanisms and techniques to optimize the operations of the routing protocol. It also includes topics such as route filtering and traffic manipulation.
Chapter 4, “Troubleshooting EIGRP for IPv4”: This chapter reviews common problems with the routing protocols and the methodology to troubleshoot EIGRP from an IPv4 perspective.
Chapter 5, “EIGRPv6”: This chapter demonstrates how IPv4 EIGRP concepts carry over to IPv6 and the methods to troubleshoot common problems.
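As a preview of the path metric discussion, recall that the classic EIGRP composite metric with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0) reduces to 256 × (10^7 / slowest-link bandwidth in Kbps + cumulative delay in tens of microseconds). The sketch below illustrates that default-K calculation; the sample bandwidth and delay figures are made up for the example:

```python
def eigrp_classic_metric(min_bw_kbps, total_delay_usec):
    """Classic EIGRP composite metric with default K values (K1 = K3 = 1).

    min_bw_kbps: bandwidth of the slowest link on the path, in Kbps.
    total_delay_usec: sum of interface delays along the path, in microseconds.
    """
    scaled_bw = 10**7 // min_bw_kbps       # scaled inverse of slowest-link bandwidth
    scaled_delay = total_delay_usec // 10  # cumulative delay in tens of microseconds
    return 256 * (scaled_bw + scaled_delay)

# Path whose slowest link is 1 Gbps with 30,000 microseconds of total delay
print(eigrp_classic_metric(1_000_000, 30_000))  # → 770560
```

With non-default K values, the load and reliability terms come back into play; Chapter 2's "Path Metric Calculation" section covers the full formula.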
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 2-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quiz Questions.”
Table 2-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping
Foundation Topics Section
EIGRP Configuration Modes
Path Metric Calculation
1. EIGRP uses protocol number ____ for inter-router communication.
2. How many packet types does EIGRP use for inter-router communication?
3. Which of the following is not required to match to form an EIGRP adjacency?
Metric K values
Hello and hold timers
4. What is an EIGRP successor?
The next-hop router for the path with the lowest path metric for a destination prefix
The path with the lowest metric for a destination prefix
The router selected to maintain the EIGRP adjacencies for a broadcast network
A route that satisfies the feasibility condition where the reported distance is less than the feasible distance
5. What attributes does the EIGRP topology table contain? (Choose all that apply.)
Destination network prefix
Total path delay
Maximum path bandwidth
List of EIGRP neighbors
6. What destination addresses does EIGRP use when feasible? (Choose two.)
IP address 224.0.0.10
IP address 224.0.0.9
IP address 224.0.0.5
MAC address 01:00:5E:00:00:0A
MAC address 0C:15:C0:00:00:01
7. The EIGRP process is initialized by which of the following techniques? (Choose two.)
Using the interface command ip eigrp as-number ipv4 unicast
Using the global configuration command router eigrp as-number
Using the global configuration command router eigrp process-name
Using the interface command router eigrp as-number
8. True or false: The EIGRP router ID (RID) must be configured for EIGRP to be able to establish neighborship.
9. True or false: When using MD5 authentication between EIGRP routers, the key-chain sequence number can be different, as long as the password is the same.
10. Which value can be modified on a router to manipulate the path taken by EIGRP but does not have impacts on other routing protocols, like OSPF?
What is document AI, and how does it work? This primer on all things artificial intelligence and document-based data was written by:
Dr. Chris Dearner,
Grooper Product Manager
14 Truths About How Document AI Works to Get Data:
- What is artificial intelligence?
- What is machine learning?
- What is supervised machine learning?
- What is unsupervised machine learning?
- What is broad vs. narrow AI?
- What is natural language processing?
- What is deep learning?
- What is a neural network?
- How do neural nets work?
- What is Tensorflow?
- What is a GAN?
- What is TF-IDF?
- What is computer vision?
- The brutal truth about AI
Plus: Document AI FAQs
#1 - First, What is Artificial Intelligence?
AI, from Wikipedia:
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.
Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.
A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.
Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, intelligent routing in content delivery networks, and military simulations.
Two Sides of Artificial Intelligence
AI is a loose term, but mostly means using computers to solve problems that are generally understood to require “intelligent” judgement, such as recognizing faces, driving cars, classifying documents, making medical decisions, among others.
Many, if not most, AI tasks in the real world involve either:
- Classification of inputs into categories or
- Generation of content similar to a training set.
When you read about AI being used to, for example, screen resumes, predict the risk of recidivism, or diagnose cancer, it falls under the former category.
If you see an AI that generates human-like text, it’s the latter category.
BIG TIP: When people talk about “AI” in the news or in the industry, they usually mean something as general as “using computers to do something hard.”
This may involve machine learning; it may not. They may also be thinking of using neural networks or other sophisticated machine learning systems to solve problems without much human guidance; more on this later.
#2 - What is Machine Learning?
Machine Learning, from Wikipedia:
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead.
It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.
Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task.
Machine Learning is a way to build systems that display artificial intelligence (read: can solve hard problems).
Generally speaking, it is a strategy that builds systems that:
- Are not specifically programmed to perform their task until they are given training data
- Get better (to a point) at performing their task when exposed to more data or more inputs
Machine Learning in Document AI
Machine learning algorithms are generally divided into two types: supervised and unsupervised.
Intelligent document processing (IDP), classification, separation, and extraction systems all incorporate Machine Learning algorithms.
Generally, when people are concerned with ML, chances are they’re really concerned with how much work they’re going to have to do to get Grooper to process their data. More on that below, following the section on Unsupervised ML.
#3 - What is Supervised Machine Learning?
Supervised ML, from Oracle’s Data Science Blog:
“Supervised learning is so named because the data scientist acts as a guide to teach the algorithm what conclusions it should come up with. It’s similar to the way a child might learn arithmetic from a teacher.
"Supervised learning requires that the algorithm’s possible outputs are already known and that the data used to train the algorithm is already labeled with correct answers.
"For example, a classification algorithm will learn to identify animals after being trained on a dataset of images that are properly labeled with the species of the animal and some identifying characteristics.”
This is how the central machine learning algorithm in document AI works: the Grooper Architect determines a discrete set of categories, and gives an initial set of properly labeled training for the algorithm to begin making predictions.
Grooper is designed this way because human-supervised ML works better, especially for the sorts of complicated problems that are our bread and butter.
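To make "labeled training data" concrete, here is a toy supervised classifier in the spirit described above — it is not Grooper's algorithm, and the document types are invented. It counts which words appear under each label during training, then assigns a new document to the label whose vocabulary it overlaps most:

```python
from collections import Counter, defaultdict

def train(labeled_docs):
    """labeled_docs: list of (label, text) pairs. Returns per-label word counts."""
    model = defaultdict(Counter)
    for label, text in labeled_docs:
        model[label].update(text.lower().split())
    return model

def classify(model, text):
    """Score each label by how often it has seen the document's words."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

training = [
    ("invoice", "invoice number amount due remit payment total"),
    ("invoice", "bill to invoice total amount payment terms"),
    ("resume",  "experience education skills references employment"),
    ("resume",  "objective skills experience degree university"),
]
model = train(training)
print(classify(model, "please remit payment for invoice amount due"))    # invoice
print(classify(model, "skills and experience with a university degree"))  # resume
```

The human supplies both the categories and the labeled examples — that is the "supervision."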
#4 - What is Unsupervised Machine Learning?
Unsupervised ML, from Oracle’s Data Science Blog:
“On the other hand, unsupervised machine learning is more closely aligned with what some call true artificial intelligence — the idea that a computer can learn to identify complex processes and patterns without a human to provide guidance along the way …
"While a supervised classification algorithm learns to ascribe inputted labels to images of animals, its unsupervised counterpart will look at inherent similarities between the images and separate them into groups accordingly, assigning its own new label to each group.”
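The classic example of that "group by inherent similarity" idea is k-means clustering: no labels are given, and the algorithm invents its own groups. The sketch below is purely illustrative — as the next section explains, this is not how Grooper works:

```python
def kmeans(points, centroids, rounds=10):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(rounds):
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
groups = kmeans(points, centroids=[(0, 0), (10, 10)])
print(groups[0])  # [(0, 0), (0, 1), (1, 0)]
print(groups[1])  # [(10, 10), (10, 11), (11, 10)]
```

Note that the algorithm never learns what the two groups *mean* — a human still has to interpret and name them, which is part of why unsupervised approaches need so much data to be useful.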
Why Unsupervised Machine Learning Does Not Apply to Document AI
Grooper document AI does not use unsupervised machine learning, because it:
- Requires very large data sets to work at all
- Doesn’t work very well
Two questions naturally follow:

- Will the AI get better at classifying (or extracting) on its own?
- Will AI save me from having to understand my documents or data?
The answer to both of these questions is no.
Document AI doesn’t get better on its own, because human design of intelligent document processing systems achieves better outcomes than unsupervised ML in almost every case.
BIG TIP: Nothing – not even Amazon, or Azure, or Watson – will save you from having to understand your own documents and data. (Even Google Cloud's DocAI relies on human review to extract information from paperwork).
If someone tells you otherwise, they’re lying; and if someone you’re talking to believes that Grooper (or any other system) will save them from their own document or data problems, they’re setting themselves up for failure.
Some Other General AI Terms
#5 - What is Broad vs Narrow AI?
Broad vs Narrow AI, from Forbes:
“The general AI ecosystem classifies these AI efforts into two major buckets: weak (narrow) AI that is focused on one particular problem or task domain, and strong (general) AI that focuses on building intelligence that can handle any task or problem in any domain.
"From the perspectives of researchers, the more an AI system approaches the abilities of a human, with all the intelligence, emotion, and broad applicability of knowledge of humans, the “stronger” that AI is.
"On the other hand the more narrow in scope, specific to a particular application the AI system is, the weaker it is in comparison.”
Weak (or Narrow) AI vs. Broad (or Strong, or General) AI is a pretty easy distinction:
- Narrow AI solves one or a small number of related problems
- Broad AI solves a wide range of them
Narrow AI exists in a number of different domains. But Broad AI?
Broad AI doesn’t exist. Broad AI will never exist. Broad AI was 10-20 years away in 1980. Broad AI was 10-20 years away in 2000. Broad AI remains 10-20 years away in 2020.
See the pattern? More about this later.
#6 - What is Natural Language Processing?
NLP, from Wikipedia:
Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
NLP's Relationship With Document AI
Natural Language Processing really just refers to using computers to process human language. Most Grooper tools are NLP tools: extractors, regular expressions, OCR, and TF-IDF all involve the processing of natural language.
Porter stemming is a form of NLP we use, and Grooper also integrates with Azure Translate to provide translation.
Other advanced NLP techniques, such as sentiment analysis, part-of-speech tagging, named entity tagging, etc. are theoretically possible with Grooper but not included out-of-the-box.
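As a tiny example of the kind of NLP most capture work actually involves, here is a regular-expression "extractor." The pattern and field are invented for illustration — this is plain Python `re`, not Grooper extractor syntax:

```python
import re

# A deliberately simple date extractor: matches M/D/YYYY or MM/DD/YYYY.
DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

text = "Date of birth: 4/17/1985. Date of service: 06/02/2020."
print(DATE.findall(text))  # ['4/17/1985', '06/02/2020']
```

A pattern like this finds *every* date on a page; telling date of birth apart from date of service is the harder problem, which is where the feature-weighting approach discussed under TF-IDF comes in.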
#7 - What is Deep Learning?
Deep learning, from Wikipedia:
Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on artificial neural networks. Learning can be supervised, semi-supervised or unsupervised.
Why Deep Learning Technology Isn't Great at Getting Data Out of Documents
Generally, deep learning is synonymous with neural-network-based AI. Deep learning models typically involve multiple layers, each of which extracts different “features” from the data.
But unless you’re talking to an AI researcher, people generally just mean big, complicated AI systems (think Watson or Tensorflow) when they talk about “deep learning” as a concept.
They may be thinking of neural networks specifically, either with or without a great understanding of what these are. More on neural networks in the next section.
Grooper doesn't use deep learning, because it doesn’t work very well for the sorts of problems we solve.
How is Document AI Intelligent?
Keep in mind that since “AI” is an incredibly general term – it can mean as little as “doing difficult things with computers” – it’s impossible to mention all of the methods and algorithms that might be considered AI.
Luckily for us, when people talk about AI, they are usually thinking of a relatively small number of things.
The (Sort of) Bleeding Edge: Neural Networks
#8 - What is a Neural Network?
(Artificial) Neural Network, from Wikipedia:
Artificial neural networks (ANN) or connectionist systems are computing systems that are inspired by, but not identical to, biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge of cats, for example, that they have fur, tails, whiskers and cat-like faces.
Instead, they automatically generate identifying characteristics from the examples that they process.
Neural nets are important not only because they are the most important type of AI in development today, but because the philosophy underpinning them is the core philosophy of most AI research.
What I’d like to point to is two things:
- Neural networks aim to generate human-like decisions by imitating a highly simplified model of what we currently understand the brain to be structured like
- The way in which neural networks match “inputs” (a picture of a cat) to “outputs” (the label “cat”) is unpredictable and, in a fundamental way, unintelligible (that is – we can’t understand it)
What the definition above includes is that they’re able to make these decisions without any prior knowledge of cats; what it omits is that the decision process of neural networks, once developed, may bear little or no relationship to human decisioning.
Which gets us to those "philosophical underpinnings" I mentioned earlier: using neural nets to do AI (solve complicated problems) assumes – or asserts – that by imitating a simplified model of the brain we'll get not only as-good-as-human judgments, but judgments that are ultimately better than the ones humans would make, for reasons we ultimately won't understand.

Remember, it's not the whiskers and the fur that lets a neural network know it's looking at a picture of a cat. What is it? Who knows. I'll leave aside the inherent contradiction here (that a simplified model of the brain will generate better results than a human can) and leave it at that for now. We'll talk more about the limitations of neural-net-based document AI later.
#9 - How Do Neural Nets Work?
A neural network, in the most general sense, consists of three layers: an input layer, a hidden layer, and an output layer.
The hidden layer does the bulk of the information processing, and consists of nodes connected to each other in a weighted manner.
When the network processes input, these weighted connections determine how information flows through it and, ultimately, what decision it comes to (or what output it produces).
Neural networks have a mechanism to alter the weights of those connections based on learning. The complexity – and power – of neural networks comes from the number of nodes and layers, but the ways in which the hidden layer makes decisions are difficult to inspect – hence the name.
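Here is a stripped-down sketch of those three layers. The weights are arbitrary, and "learning" is done by numerically nudging each output weight downhill on the error — real frameworks use backpropagation, so treat this only as a picture of the moving parts:

```python
import math

def forward(x, w_hidden, w_out):
    """Input layer -> weighted hidden layer (sigmoid) -> weighted output."""
    hidden = [1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(row, x))))
              for row in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

def loss(x, target, w_hidden, w_out):
    return (forward(x, w_hidden, w_out) - target) ** 2

# Two inputs, two hidden nodes, one output; fixed starting weights.
x, target = [1.0, 0.5], 1.0
w_hidden = [[0.1, -0.2], [0.3, 0.4]]
w_out = [0.2, -0.1]

before = loss(x, target, w_hidden, w_out)

# "Learning": estimate each output weight's gradient numerically,
# then move every weight a small step downhill on the error.
eps, lr = 1e-6, 0.5
grads = []
for i in range(len(w_out)):
    w_out[i] += eps
    grads.append((loss(x, target, w_hidden, w_out) - before) / eps)
    w_out[i] -= eps
for i, g in enumerate(grads):
    w_out[i] -= lr * g

after = loss(x, target, w_hidden, w_out)
print(before, "->", after)  # the squared error shrinks after one update
```

Even in this toy, notice that the final weights don't "mean" anything a human can read off — the opacity of the hidden layer is baked in from the start.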
#10 - What is Tensorflow?
Tensorflow, from tensorflow.org:
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
Tensorflow is Google’s open-source library for building neural networks. Tensorflow makes it relatively easy for someone with a basic understanding of Python to build and train neural networks on home-desktop grade hardware.
It is used widely in both business and academic applications. You’re unlikely to hear anyone reference this directly (you might!) but it’s an important technology to be aware of.
#11 - What is a GAN?
GANs, from the TensorFlow documentation:
Generative Adversarial Networks (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process.
A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
GANs are essentially two neural networks pitted against each other. They can be trained to generate strikingly realistic images – if you’ve seen pictures of AI-generated faces, those were produced by a GAN.
They are very, very, cool, but (as far as I’m aware) their practical application is still an open question. Again, you’re unlikely to run into someone asking about GANs in the wild, but they’re another important technology to be aware of.
Document AI Without Neural Nets
So, now that we’ve talked about neural nets (and, by association, deep learning), we can talk about the AI/ML algorithms that Grooper uses, and why these tend to work better for us. We’ll get into the limitations of neural-net based AI and unsupervised ML in the next section, too.
#12 - What is TF-IDF?
TF-IDF, from Wikipedia:
In information retrieval, tf–idf or TF-IDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in searches of information retrieval, text mining, and user modeling.
The TF-IDF value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general.
TF–IDF is one of the most popular term-weighting schemes today; 83% of text-based recommender systems in digital libraries use TF–IDF.
Why TF-IDF Works Quite Well in Documents
We’re going to spend a minute talking about TF-IDF, because it’s the core AI (and only ML) algorithm that Grooper uses. TF-IDF stands for “term frequency/inverse document frequency,” and classifies documents (or “collections”) by comparing how frequently words are seen on the target document types versus how frequently they occur in the sample set as a whole.
Without getting too much in the weeds, TF-IDF works by identifying words (or inputs) that are unique (or more common) in a particular type of document compared to the document set overall. It’s a deceptively simple way of classifying documents, and it generally does so in a similar way to how humans do it: by looking at the individual words on the document.
Now, it’s worth noting that TF-IDF doesn’t “read” or “understand” the words (virtually no ML algorithms do), and it’s not sensitive to where in the documents the words occur. It’s just counting words (or features, if your extractor isn’t feeding it words) to help automate data capture at scale.
Grooper does a few things that make TF-IDF work better than other ML algorithms.
First, and most importantly, it lets you feed TF-IDF anything.
Do you want the algorithm to look at words? Two word pairs? Names of mammals? The presence of dates or names? The words “phantom” and “empire?”
Good. Great. Fantastic, even. You can write extractors to feed that into the algorithm. This is crucial, because it lets a human determine which types of features are most likely to matter for a given document – and there are some types of documents (think a Mineral Ownership Report) where the content of the words won’t ever tell you what the document is.
Our TF-IDF implementation also lets architects inspect the feature weightings, which lets them directly and completely understand how it's making decisions. This is not possible with neural nets, and it's generally not available in other systems, even those implementing TF-IDF or similar algorithms.
TF-IDF also requires a relatively small number of training samples (generally less than 100, although not always so) to reach optimum decision making, which makes it much quicker to train than most machine learning algorithms.
How TF-IDF Works in Grooper Document AI
TF-IDF can be used in data extraction, separation, and classification within Grooper. In separation and classification, it lets you work with unstructured documents.
In extraction, it lets you pick out the correct instance of a value type on a page (e.g. tell date of birth from date of service) by looking at features surrounding the detected value.
If people ask about ML in Grooper, ours:
- Is easier to train
- Provides more insight
- Gives more control over the classification process
This requires more touch (initially) than unsupervised ML, but it gets much better results over a wide range of problems.
#13 - What is Computer Vision, and How Does It Find Data in Documents?
Grooper also implements a number of other algorithms – generally around image processing – that could be called AI algorithms.
These include our line and OMR box recognition, our blob detection (used in both blob removal and deskew), our ability to remove, e.g. combs from lines, and our periodicity detection.
The Grooper unified console uses a number of Computer Vision algorithms (many developed in-house) that provide best-in-class ability to clean up documents for OCR and detect non-character-based semantic (information containing) elements on the document such as lines, boxes, etc.
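Blob detection of the sort mentioned above reduces, at its core, to finding connected groups of dark pixels. Here is a toy connected-component labeler on a binary grid — an illustration of the idea only, not Grooper's computer-vision code:

```python
def count_blobs(grid):
    """Count 4-connected groups of 1s ("ink") in a tiny binary image."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                blobs += 1
                stack = [(r, c)]  # flood-fill every pixel of this blob
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if grid[y][x] != 1:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

page = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
]
print(count_blobs(page))  # 3
```

A blob-removal step would then erase components below a size threshold (specks), while a deskew step would measure the orientation of the larger ones.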
#14 - The Brutal Truth About AI
Here’s a fairly representative paragraph about AI (emphasis mine):
Artificial intelligence has been conquering hard problems at a relentless pace lately.
In the past few years, an especially effective kind of artificial intelligence known as a neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car.
Neural nets, whose architecture copies that of the human brain, can now—usually—tell good writing from bad, and—usually—tell you with great precision what objects are in a photograph.
Such nets are used more and more with each passing month in ubiquitous jobs like Google searches, Amazon recommendations, Facebook news feeds, and spam filtering—and in critical missions like military security, finance, scientific research, and those cars that drive themselves better than a person could.
This was written in 2015, and the claims it makes are fairly typical of what people tend to say about AI.
They are also all incorrect.
BIG TIP: AI is not equal to or better than humans at discovering new drugs, finding the best candidates for a job, and even driving a car.
Neural nets cannot generally tell you what objects are in a photograph.
AI cannot tell good writing from bad. (Astute readers will also note that neural net architecture does not copy that of the human brain).
AI Demos Set Very High Expectations
I tried to track down the source of these claims, and generally couldn’t – which is also representative of claims people make about AI.
Big, bold, elusive, and almost universally somewhere from an overstatement to an outright falsehood.
This might not sound quite right to you, though – maybe you’ve seen Azure translate at work, or even Azure image recognition. Maybe you’ve seen other demonstrations of AI that look cool and powerful, and make it seem like there’s very little AI can’t do.
If you’re in sales or marketing, or just a generally skeptical person, you can probably guess what’s going on here: AI demonstrations generally engage in a lot of expectation shaping, and generally have very sturdy guardrails set up just out of sight.
Will Document AI Ever Have General Intelligence?
If I — for example — show you an AI that can correctly categorize an in-focus, good quality picture of:
- A dog
- A cat and
- A house
Does that really prove that it can “tell you what objects are in a photograph?”
If I can build an AI-based program to win Jeopardy, or to beat human grand masters at Chess, does that mean it’s going to be good at other things?
Here’s the central trick that discourse around AI plays on us: by claiming to be based on the structure of the brain, the implication is that AI-based “intelligence” should, will, or can work like human intelligence.
Remember where the author says that neural networks “copy” the structure of the human brain? Ken Jennings is a pretty smart guy, so he’s good at a lot of things that aren’t Jeopardy. My co-worker Randall can identify dogs in pictures, so he can also identify a lot of other things.
But AI doesn’t – and won’t ever – work like that. There is absolutely zero evidence that a trained neural network – or any other AI system – has or can have anything like generalized intelligence. Absolutely none.
The only reason people expect that behavior from AI is because they
- Don’t understand the technology
- Don’t understand the assumptions (what I previously called “philosophical underpinnings”) behind the technology
- Have a very, very impoverished understanding of human cognition
Expecting neural networks to be able to solve problems they’ve never seen before because you can train them to do lots of different things is a little bit like thinking because you can build a lot of different jigs to hold a router that, at some point, it will be able to mill things on its own.
I promise you - it will not.
In the Future, What Document AI Tech Will Help Us With Our Problems?
What this means for us – and for our industry – is that AI and ML won’t save us. Document AI in your environment won’t:
- Save you from having to understand your own documents
- Save you from having to solve hard problems
- Simply turn documents into data without any effort
- Enrich data or parse forms on complicated invoices or receipts without human review.
AI-based tools can be incredibly helpful, but they’re only ever going to be part of a full solution. Intelligent document processing isn’t a narrow problem, and it’s never going to be. That’s why AI won’t save you.
So, having said all that, I am going to say one other thing – document AI is, in fact, the future. It is already doing a great job at transforming documents into structured data, boosting the speed of decision-making and unlocking business value.
Remember how we defined AI as “using computers to do something hard?” That won’t stop happening.
But it won’t happen in the way people expect it — or with the tools people now think are revolutionary.
Document AI FAQs:
What is Document AI?
Document AI is a technology that uses artificial intelligence tools to automate the processing of documents in order to extract all important data. Document AI solutions were developed based on decades of document research to provide basic document retrieval and analysis capabilities for common enterprise documents such as invoices, simple forms, statements, and receipts.
What are the Benefits of Document AI?
Using artificial intelligence to get data and knowledge out of documents brings many benefits. These include, but are not limited to:
- Improved Decision Making: Enterprises make better and faster business decisions through unlocking data and making it accessible to many users and business intelligence applications.
- Enhanced Operational Efficiency: Particularly valuable knowledge can come from extracting structured data from unstructured documents and unlocking their insights.
- Ensured Compliance: Instead of using manual entry work that is rife with errors, automated document processing keeps data accurate and compliant. It also reduces guesswork, as it can automate data validation (or provide human-in-the-loop validation) to streamline compliance workflows.
- Happier Customers: Leveraging more data increases insights into customer and client data. Cost and time efficiencies can be passed onto customers, to meet and exceed their expectations.
- Other benefits related to customers involve CSAT, advocacy, spending and a customer's lifetime value.
- Security: Many of these solutions include sound security models and world-scale infrastructure to keep your data secure.
Microservice architecture, or simply microservices, is a distinctive method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. The trend has gained popularity in recent years as enterprises look to become more agile and move towards DevOps and continuous testing. Microservices can help create scalable, testable software that can be delivered on a weekly basis.
Microservices have many benefits in terms of usage and execution. They are simpler to understand and deploy and can be reused across businesses. They help in singling out the defect and there is a minimum risk of any changes.
Just as there is no formal definition of the term microservices, there’s no standard model that is present in every system based on this architectural style. Still, you can expect most microservice systems to share a few notable characteristics like:
1. They can be broken down into multiple components
2. They are specially built to facilitate business needs
3. The routing process is fairly simple
4. Decentralized data management
5. Resistant to failure
6. Ideal for evolutionary systems
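As a concrete (and deliberately tiny) illustration of a single-function module with a well-defined interface, here is a self-contained "greeting" service built on Python's standard library. The service name and endpoints are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    """A single-purpose microservice: it only knows how to greet."""

    def do_GET(self):
        if self.path.startswith("/greet/"):
            name = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"message": f"Hello, {name}!"}).encode()
            self.send_response(200)
        elif self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep example output quiet
        pass

def start_service(port=0):
    """Start the service on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), GreetingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_service()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/greet/world") as resp:
        print(json.loads(resp.read())["message"])  # Hello, world!
    server.shutdown()
```

Each such service can be deployed, scaled, and replaced independently; other services (or an API gateway) talk to it only through its HTTP interface, never its internals.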
Microservices are not a silver bullet, and by implementing them you will expose communication, teamwork, and other problems that may have been previously implicit but are now forced out into the open. Like every system, microservices too have their own share of pros and cons.
Microservice architecture gives developers the freedom to independently develop and deploy services:

(a) A microservice can be developed by a fairly small team
(b) Code for different services can be written in different languages (though many practitioners discourage it)
(c) Easy integration and automatic deployment (using open-source continuous integration tools such as Jenkins, Hudson, etc.)
(d) Easy to understand and modify for developers, thus can help a new team member become productive quickly
(e) The developers can make use of the latest technologies
(f) The code is organized around business capabilities
(g) Starts the web container more quickly, so the deployment is also faster
On the other hand, the architecture also brings drawbacks:
(a) Due to distributed deployment, testing can become complicated and tedious
(b) An increasing number of services can result in information barriers
(c) The architecture brings additional complexity, as developers have to design for fault tolerance, cope with network latency, and deal with a variety of message formats as well as load balancing
(d) Being a distributed system, it can result in duplication of effort
(e) As the number of services increases, integrating and managing the whole product can become complicated
(f) In addition to the complexities of monolithic architecture, developers have to deal with the additional complexity of a distributed system
(g) Developers have to put additional effort into implementing the communication mechanism between services
(h) Handling use cases that span more than one service without using distributed transactions is not only tough but also requires communication and cooperation between different teams
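The inter-service communication effort mentioned in the drawbacks above is easy to see in a minimal sketch. The snippet below is illustrative only — the service names, endpoint, and payloads are invented for this example. It runs a tiny "users" service on a local HTTP port and has a second service call it over the network, complete with the timeout and fallback handling that an in-process function call in a monolith would never need:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny stand-in "users" microservice exposing one JSON endpoint.
class UsersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), UsersHandler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service ("orders", say) must now call "users" over the network,
# and has to handle latency and failures that an in-process call never sees.
def fetch_user(user_id: int) -> dict:
    try:
        with urllib.request.urlopen(
            f"http://127.0.0.1:{port}/users/{user_id}", timeout=2
        ) as resp:
            return json.load(resp)
    except OSError:  # timeout, refused connection, ...
        return {"id": user_id, "name": "unknown"}  # degraded fallback

print(fetch_user(1))  # → {'id': 1, 'name': 'Ada'}
server.shutdown()
server.server_close()
```

Even this toy version needs a timeout and a fallback path; a real system would layer retries, service discovery, and monitoring on top.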
Whether or not microservice architecture becomes the preferred style of developers in the future, it’s clearly a potent idea that offers serious benefits for designing and implementing enterprise applications. Many developers and organizations, without ever using the name or even labeling their practice as SOA, have been using an approach toward leveraging APIs that could be classified as microservices.
Source: https://intone.com/a-look-into-microservice-architecture/
Thanks to unsafe devices, distracted security staff and COVID-inspired phishing schemes, cyberattacks have hit schools and colleges harder than any other industry during the pandemic.
In 2020, including the costs of downtime, repairs and lost opportunities, the average ransomware attack cost educational institutions $2.73 million. That is $300,000 more than the next-highest sector – distributors and transportation companies.
I study cybercrime and cybersecurity. In my forthcoming book – set to be published in November 2021 – I look at how the shift to remote learning during the pandemic has posed new cybersecurity challenges.
I see six important ways the pandemic has created new opportunities for cybercriminals to attack schools and colleges.
1. Unsafe devices
Devices that were loaned to students during the pandemic often lack security updates. This is a serious issue since in 2020 alone, 1,268 vulnerabilities were discovered in Microsoft products. One such vulnerability can enable hackers to gain higher-level privileges on a system or network, which can be used to steal data and install malware.
As students, teachers and administrators return to school with devices that haven’t been patched in a while, a large number of vulnerable devices are likely to be reconnected to school networks.
2. Distracted cybersecurity staff
The shift to remote learning has also distracted the attention of limited cybersecurity staff from important security issues. In at least one case, persons responsible for cybersecurity were assigned to investigate bad online behavior, such as name-calling, that teachers and administrators handled before.
3. Victims more likely to comply
In 2020, 77 ransomware attacks on U.S. schools and colleges affected more than 1.3 million students and resulted in 531 days of downtime. This downtime was estimated to cost $6.6 billion in economic terms.
At the same time, public schools faced political and social pressure to ensure students’ access to learning opportunities during the pandemic. The pressure to quickly restore networks can make victims desperate and willing to comply with criminals’ demands. For instance, the Judson Independent School District in Texas paid $547,000 to ransomware attackers in the summer of 2021 in order to regain access to its systems and stop student and staff data from being published. In 2020, the Athens Independent School District in Texas paid a $50,000 ransom.
4. Vulnerable platforms
When the pandemic forced schools to use online platforms to conduct classes and evaluate students, it created new entry points for cybercriminals to target.
These platforms include video chat programs such as Zoom and Microsoft Teams, as well as providers of curricula, technology and services, such as K12, recently renamed as Stride. They also include online proctoring services, such as ProctorU and Proctorio.
Collectively, such platforms were targeted in three-quarters of the data breaches in school districts that involved personal information.
In November 2020, online education vendor K12 reported that some students’ information on its system could have been stolen during a ransomware attack, even though the company paid the ransom.
Likewise, in July 2020, hackers stole sensitive personal information from 444,000 students – including their names, email addresses, home addresses, phone numbers and passwords – by hacking online proctoring service ProctorU. This data became available for sale in online hacker forums.
5. More baiting opportunities
Cybercriminals increasingly turned to social engineering attacks during the pandemic. These are attacks in which the cybercriminals use emotional appeals to things such as fear, pity or excitement to bait people into providing sensitive information. For example, cybercriminals have launched phishing campaigns in which they pose as human resources staff and ask recipients to submit information about their COVID-19 vaccination status.
Victims may be lured to give their credentials, click malicious links or download files containing malware. Fear and uncertainty – such as that created by the pandemic – make individuals more susceptible to social engineering attacks.
An analysis of 3.5 million social engineering attacks from June to September 2020 found that more than 1,000 schools and universities were targeted. Educational institutions were also more than twice as likely as other institutions to be victimized by such attacks.
Many of the emails have COVID in the subject line.
In May 2020, the Federal Trade Commission posted a message on its website with a screenshot of a social engineering attack email. The message warned college students that the emails about COVID-19 economic stimulus checks claiming to be from their universities’ “Financial Department” could be from scammers.
6. COVID resources have created new targets
Colleges have been designated to distribute COVID-19 relief funds – and criminals caught on to this. In May 2021, the U.S. Department of Education made more than $36 billion in emergency grants available for students and colleges under the American Rescue Plan Act.
In California, more than $1.6 billion in such grants were available to community college students alone. This explains why, not long afterward, more than 65,000 fake students applied to California community colleges for such aid and loans.
Most two-year institutions don’t have resources to vet applicants. The lack of a requirement for identity verification and other documentation to get COVID-19 relief grants from community colleges also attracted attention from criminals overseas. Many of the fake student applications in the California community college system were from foreign countries.
Officials have been silent about whether these fake students got any money.
The bottom line for schools and colleges is that as they continue to confront the challenges of the pandemic, cybersecurity cannot be placed on the back burner. Ignoring threats to cybersecurity now can be quite costly in the future.
This article was first posted on The Conversation.
Source: https://gcn.com/cybersecurity/2021/09/cybercriminals-use-pandemic-to-attack-schools-and-colleges/316131/
Enterprises incur a wide range of expenditures, from the rent they pay on manufacturing plants or office buildings, to the price of raw materials for their goods, to the salaries they pay their employees, to the overall cost of growing the firm.

Corporations categorize each of these expenditures to make them easier to track and understand. Two of the most common categories are capital expenditures (CAPEX) and operating expenses (OPEX).

Capital expenditures (CAPEX) are large purchases made by a firm that are intended to be used over the long term. Operating expenses (OPEX) are the day-to-day costs a firm incurs to keep its operations running. Here we will discuss the major concepts behind CAPEX vs. OPEX for the cloud.
What You Need to Know About CAPEX
Capital expenditures are large purchases of goods intended to enhance a firm’s productivity over the long term. Capital spending is generally used to acquire fixed assets such as property, plant, and equipment (PP&E).

For instance, if an oil firm purchases a new piece of drilling equipment, the purchase is classified as a capital expenditure. One of the distinguishing characteristics of CAPEX is duration: the acquisition benefits the firm for more than one tax year.

CAPEX denotes a company’s spending on physical assets, and different industries have different kinds of capital expenditures. The acquired equipment might be for business growth, for upgrading old systems, or for extending the useful life of an existing asset.

Capital expenditures are recorded on the balance sheet under “property, plant, and equipment.” CAPEX also appears in the investing-activities section of the company’s cash flow statement.

Fixed assets are depreciated over time to distribute their cost across their useful life. Depreciation is helpful for capital expenditures because it lets the firm avoid taking a substantial hit to its bottom line in the year the item was acquired.
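To make the depreciation idea concrete, here is a small illustration — all figures are invented — of how straight-line depreciation spreads a CAPEX purchase over its useful life instead of hitting the bottom line all at once:

```python
def straight_line_depreciation(cost: float, salvage: float, years: int) -> list:
    """Equal annual depreciation charges for a capital asset."""
    annual = (cost - salvage) / years
    return [round(annual, 2)] * years

# Hypothetical machine: $50,000 cost, $5,000 salvage value, 5-year life.
charges = straight_line_depreciation(50_000, 5_000, 5)
print(charges)       # → [9000.0, 9000.0, 9000.0, 9000.0, 9000.0]
print(sum(charges))  # → 45000.0  (the depreciable cost, spread over 5 years)

# By contrast, a $9,000-per-year OPEX item (say, a cloud subscription)
# is simply expensed in full in each year it is paid.
```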
CAPEX can be financed externally, often through equity or debt funding. Companies raise capital for these investments by issuing bonds, taking on loans, or using other debt securities. Dividend-seeking investors pay close attention to CAPEX figures, looking for a firm that pays out income while continuing to invest in opportunities for additional profit.
A Close Look Into OPEX
Operating expenditures are the costs a business incurs in its normal day-to-day operations. These charges must be ordinary and customary in the industry in which the company operates. Organizations record OPEX on the income statement and can deduct it from taxes in the year in which it was incurred.

OPEX also includes spending on research & development (R&D) and the cost of goods sold (COGS). Overheads are incurred as a result of typical operational processes.

Any business’s objective is to maximize output relative to OPEX. In this sense, OPEX is a crucial indicator of a company’s efficiency over time.
CAPEX vs OPEX
Capital expenditures are significant investments that continue to be used after the current financial reporting period ends. Operating expenditures are the day-to-day expenses that keep a business functioning. Because of their distinct characteristics, each is treated separately.

OPEX comprises short-term costs that are usually expensed in full in the accounting period in which they are incurred; they may be paid on a daily, monthly, or annual basis. CAPEX purchases, by contrast, are typically paid for in full up front.

CAPEX rewards take longer to materialize – think of equipment bought for a major venture – while OPEX benefits are far more immediate, such as the work an employee performs day to day.
A technical note on the terminology used on this page: you may have noticed that we use the terms “capital expenditure” and “operating expense” rather than calling both “spending” or both “costs.”

In financial reporting, “expenditures” generally refers to payments for long-term assets, while “costs” typically describes shorter-term outlays. Most people won’t notice the difference unless they’re talking with accounting professionals.
CapEx and OpEx items are budgeted separately, with separate approval processes. A CapEx purchase typically must be authorized by multiple layers of management (especially senior leadership), which can hold up the purchase until approval is granted and significantly slow you down.

Acquiring an IBM Power system as an OpEx item is typically a more straightforward procedure, provided the item is recognized and accounted for in the operating budget.
In a CapEx scenario, you own the equipment outright and have complete control over its use, placement, and disposal.

If you consume an IBM Power system as an operational-expenditure item in the cloud, you rely on the hardware, runtime environment, and management provided by the cloud provider. In OpEx scenarios, particularly with cloud vendors, you engage a third party to supply your IT resources, which can influence performance and outcomes.
Purchasing a capital item requires some foresight. IBM Power Systems are often bought with the expectation of repairing or refreshing the hardware roughly every three years. That means that when you buy the machine, you should specify all of the capacity and features you anticipate needing for the foreseeable future.

Suppose you run a seasonal business with some periods substantially busier than others (imagine the Christmas crunch in retail). In that case, you must size your system to deliver strong performance at peak demand, even though that capacity goes underused during the slack seasons of the year.
Many companies require that all essential IT items or functions be purchased outright rather than leased or “rented” via a managed service provider (MSP). Other organizations mandate the opposite. The question isn’t which of these approaches is superior.

Rather, you may not have a choice between CapEx and OpEx at all: depending on your company’s policies, a particular purchasing approach may be required.
Source: https://enteriscloud.com/capex-vs-opex-for-cloud/
With a number of vaccinations now in the market or undergoing testing, we may be approaching the beginning of the end of Covid-19. And whilst the immediate threat to life may begin to subside, it is likely that we will be counting the economic cost for quite some time.
However perhaps an even more profound impact will be felt in how both individuals and societies live their lives in the aftermath of Covid-19. Many believe that this may not be the last such global incident that we will face. If this is proved correct, we should at least now be better informed and prepared for managing to live in a more physically isolated and less communal way. Below are five key areas in which the behaviour of individuals significantly changed during Covid-19, and which may be predictors of how our longer-term lifestyles may evolve.
1. Health – a collective experience
For most countries, the pandemic was the first time in living memory where the collective health of the nation attained a higher priority in the collective conscious than the health of individuals. In addition to the well-publicised country-level data gathered by organisations such as Johns Hopkins University, people were asked to take specific precautions to prevent themselves from spreading or contracting the virus, such as wearing masks, additional hand-washing, and social distancing. These policies and protocols transformed the issue of health from an individual to a collective responsibility.
2. The role, and meaning, of home
In 2020 home went from being a ‘base’, to being an ‘everything place’. As people found themselves locked down, many of the outside amenities they typically enjoyed, such as cinemas, restaurants, and gyms, as well as their offices, were no longer accessible. Overnight, home needed to be adapted to accommodate multiple tasks. In almost no time Zoom became a verb, and home exercise equipment became almost impossible to buy due to its scarcity. From furniture to personal routines, lifestyles dramatically changed, bringing with them a widespread questioning of personal values, and the kind of lives people wanted to live.
3. Finding social meaning during social distancing
The pandemic has had a dramatic effect on the world’s mental health and has impacted all segments of the population. Although it is difficult to meaningfully separate practical causes, such as concerns over job security and personal finances, and emotional causes, such as lack of human contact, the overall sense of isolation and the disruption to social patterns have undoubtedly played a large role. There were many creative examples of using technology to bridge the gap, such as hosting virtual social events, however, 2020 has brought into sharp focus the importance of something we had previously taken for granted – that we are a deeply social species.
4. Adaptation and regulation of digital services
Technology has, without doubt, been one of the unsung heroes of the pandemic. Looking back as little as 25 years, the ongoing functioning of a society faced with lockdown would have been a very different story than the experience in 2020. Without the development of e-commerce, home-delivery, remote working, and mobile communications, the economic and social impact would certainly have been far worse. However, we now face a new challenge. There are many who believe there will be a new wave of almost entirely de-centralised businesses without a traditional head office, and that staff will become increasingly location-agnostic, both nationally and internationally. With this change will come a need for more flexible and forward-thinking global legislatory frameworks to keep this borderless digital world safe, fair, and transparent.
5. Personal service for billions of individuals
The digital world has reimagined personalisation, and, in virtually all key consumer verticals, has banished the notion of one size fitting all. From customised trainers in hand-selected colour combinations to personalised greetings cards, there is undoubtedly an appetite among consumers to stand out and exercise their creative minds, albeit with a cost premium attached. The key challenge in this market is to offer personalisation at scale, and at a price point that matches, or almost matches, the standard product. This will apply to both physical and digital experiences.
Whilst all societies are in a permanent state of evolution, Covid-19 has perhaps brought with it a faster and more profound swathe of behavioural change than we have ever seen in peacetime. We do not yet know if we have tamed this epidemic, if the changes we have seen will stick, or what the longer-term socio-economic effects will be. However, on the evidence so far, the pandemic may eventually be judged as an inflection point. The moment when individual behavioural patterns shifted to a greater state of self-reliance, and during which technology and communications became recognised as the safety net which enabled societies to become more resilient to such events in the future.
Source: https://www.freemove.com/magazine/post-covid-trends-the-human-impact/
The hype surrounding the Internet of Things (IoT) is starting to die down. What we are seeing in its place is the emergence of new technologies using the IoT that are truly useful and groundbreaking. This is happening across a range of sectors, from healthcare to construction to agriculture. The companies involved in this list are the true pioneers of the IoT.
Until recently, much of the hype around the Internet of Things (IoT) has been in terms of fridges, ovens and other household appliances. Yet this narrow view of IoT misses much of the potential – and with between 20 and 30 billion connected devices predicted by 2020, that’s a lot of potential.
With annual funding doubled over the past six years, there are some surprising ways in which the IoT will be changing industries and consumer habits around the world.
In an article on the subject, CBInsights discusses 11 ways this information can change our lives:
1) Ingestibles
The health industry is starting to popularize ingestible cameras and internet-connected sensors. One example is a pill-sized digestible sensor that can monitor whether patients are taking medicines as prescribed.
2) Hearables
A portmanteau of the “wearables” and headphones that are now so common, these are in-ear devices that can be used for communication, fitness tracking and gathering biometric data.
3) Computer vision
Advances in computer vision and camera-based navigation mean that there is no longer such a need to focus entirely on GPS for positioning. Drones are now capable of using computer vision algorithms to automatically avoid obstacles.
4) Mood-altering headwear
This is still very much in development, but some companies are experimenting with headwear that can transmit low-energy currents into the brain in order to elevate moods.
5) Ocean exploration
Drone technology can even help with ocean exploration. Low-cost submersibles mean that exploring the oceans – 95% of which are unexplored – is now possible.
6) Body scanning
Body scanning has for a long time been a costly process involving advanced equipment – and lots of it. However, now this can be done in the home, with the information helping consumers to find the right fit and retailers to better define the ‘average’ human.
7) Smart buildings
IoT will continue to embed chips into our built environment. One example can be found in glazing. The glass is powered by an algorithm that can respond to cloud cover, sun angle, temperature and other details to appropriately and actively tint the glass to help with solar gain.
8) Healthcare charting
Healthcare still relies heavily on paperwork to chart a patient’s history. This can account for up to a third of a practitioner’s time by some estimates. Smart glass technology can help to record and enter patients’ data hands-free – saving a lot more time.
9) Crop dusting
Even in agriculture, IoT has a role. Drone technology plays a strong part in crop dusting and drones can take advantage of information, from soil moisture to contours, to ensure an even and efficient spray.
10) Food safety
Allergen monitors can replace what previously required full organic chemistry labs to quickly test food for anything that might harm the consumer.
11) Mileage-based car insurance
Small, wireless devices that are plugged into a car can monitor distance travelled to get an effective and more accurate car insurance deal that’s based on the amount and how you actually drive.
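As a rough sketch of how such a device’s data could feed a usage-based price, here is a toy calculation. The base fee, per-mile rate, and braking surcharge are invented for this illustration and don’t reflect any insurer’s actual model:

```python
def monthly_premium(miles: float, hard_brakes: int,
                    base: float = 29.0, per_mile: float = 0.06) -> float:
    """Illustrative usage-based premium: a flat base fee plus a per-mile
    charge, with a small surcharge for harsh-braking events reported
    by the in-car device (all rates are made up for this example)."""
    surcharge = 1.50 * hard_brakes
    return round(base + per_mile * miles + surcharge, 2)

print(monthly_premium(400, 2))  # → 56.0  (a typical commuter month)
print(monthly_premium(50, 0))   # → 32.0  (a low-mileage month costs less)
```

The point of the model is visible even in this sketch: the less (and the more gently) you drive, the less you pay.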
· By 2020 as many as 30 billion devices will be connect by the IoT
· New technologies coming on stream today are giving us a better understanding of how this technology is developing
· Practical devices are being created
· The positive impact is being felt in industries from healthcare to agriculture – and many more
Big Data and related technologies – from data warehousing to analytics and business intelligence (BI) – are transforming the business world. Big Data is not simply big: Gartner defines it as “high-volume, high-velocity and high-variety information assets.” Managing these assets to generate the fourth “V” – value – is a challenge. Many excellent solutions are on the market, but they must be matched to specific needs. At GRT Corporation our focus is on providing value to the business customer.
Source: https://www.grtcorp.com/content/11-ways-iot-changes-world/
Behind every SMS scam is someone capable of using spoofing attacks and social engineering techniques to exploit victims – and trends. Our Threat Intelligence team has detected numerous scams piggybacking off various trends – be it in the form of seasonal holiday scams, or attacks exploiting global events like the Covid-19 pandemic and Russia’s invasion of Ukraine. SMS scammers keep a keen eye on what is going on in the world and are constantly looking for new opportunities to exploit subscribers.
Covid-19 SMS Scams
The Covid-19 pandemic undoubtedly affected the lives of the global population. Many of us were contacted via SMS regarding PCR tests, public health advice and Covid-19 vaccinations. This communication gave threat actors a chance to masquerade as legitimate public health bodies, tricking mobile users into clicking links and retrieving sensitive information. We have described the anatomy of Covid-19 scam text messages in a previous blog.
A good example of a Covid-19 messaging scam we have observed is one that occurred in Canada. This attack commenced very shortly after the Canadian government had announced the provision of welfare payments to individuals who may have been out of work due to Covid-19. The attackers designed an SMS message purporting to be from a public health body offering the subscriber an opportunity to claim a welfare payment. As is standard with phishing scams, the message contained a shortened URL which directed the user to a very convincing website (well designed with no spelling mistakes, used real logos). The site was even bilingual (English and French), which is a requirement for official Canadian websites.
Once the user arrived on the site, they were directed to enter their social security number, which the site then pretended to process. After a short amount of time – 10 seconds or so – the site claimed that the user was entitled to a welfare payment. The user was then provided with a link to select their bank from a list of Canadian financial institutions so they could “claim” their payment. Once selected, they were taken to a fake login page for their bank where they were prompted to input their details.
Unfortunately, we know that SMS scams related to the pandemic are still ongoing, despite the reduction in Covid-19 restrictions. The following is an example we recently identified in the UK:
In this instance, the scammer uses fear mongering as a social engineering technique intended to get the recipient to act without thinking. Messages like these remain a threat to mobile subscribers, with scammers taking stock of new developments in the course of the pandemic and adapting accordingly.
Identity Theft of Ukrainian Charities
Another momentous event being broadcasted on the global stage is Russia’s invasion of Ukraine (Read about the role of mobile networks in the invasion here) Many charities have been set up to aid Ukrainians since the beginning of the invasion in February 2022. Unfortunately, mobile scammers have been taking advantage of these efforts, usually by spoofing the identity of legitimate charities. These attackers send SMS messages containing a link to an unsecure website to mobile subscribers, claiming to be from established charities. The message will usually incorporate social engineering tactics such as creating a sense of urgency and appealing to the receiver’s goodwill to get them to click the link and “donate” to the charity. Of course, there will be no donation - once the victim has entered their financial information on the scammer’s website, the details are likely to be used to defraud the individual. Bad actors may also pose as a charity that does not exist rather than spoof an established organization. Websites like Charity Navigator and CharityWatch can be used to check the legitimacy of different charities. A list of the top-rated charities to help Ukraine can also be found here.
We have seen the opportunistic nature of SMS attacks in terms of large-scale global and political events. Just as we are constantly innovating and deploying new methods to protect mobile subscribers, so too are scammers (this is why managed security is so important). Christmas, Easter, Valentine’s Day, New Years – these holidays give rise to new trends each year as media teams and marketers strive to make this year’s event more exciting than the last, thus opening new avenues for fraudsters who deploy holiday SMS attacks.
This year at Eastertime, the following message was circulated over WhatsApp, claiming to offer the recipient a “Free Easter Chocolate Basket” on behalf of Cadbury. The brand often gets creative around Easter, having carried out similar initiatives in the past so many mobile subscribers may not have been suspicious of this message designed to steal personal information, which could in turn be used for identity theft. Credit: Sky News
Similarly, romance fraud has become commonplace around Valentine’s Day each year. Many of us watched in disbelief this year as multiple women were defrauded by their so-called boyfriend in Netflix’s true crime documentary The Tinder Swindler. Romance scams usually start off on online dating sites or social networking sites. However, after some time the scammer will want to lure the victim over to WhatsApp or SMS-based communication. Romance scams rely heavily on social engineering techniques, with the fraudster aiming to get the victim to drop their guard and gain their trust. The scammer usually steals the identity of someone else to appear more desirable to their target. These scams have been extremely effective: in Australia, for example, a record $56 million in losses was reported in 2021 – up 44% from the previous year.
Parcel Delivery Scams
Times of the year that evoke lots of online shopping, like Black Friday, Cyber Monday, and Christmas, also see a rise in scam texts. In particular, fake text messages related to parcel delivery tend to shoot up. Note the use of a shortened URL and personalization here. These are very convincing – even If you are not expecting a delivery, the use of a name can be enough to make you assume it is a legitimate message.
Credit: ABC News
In this blog, we have seen that bad actors are continually evolving their techniques when it comes to SMS attacks, crafting compelling scam campaigns incorporating identity spoofing and social engineering techniques to trick and exploit mobile subscribers. Users must be wary of the messages they receive, watching out for shortened URLs, spoofed phone numbers/senders and social engineering tactics like creating a sense of urgency or attempting to elicit an emotional response. In terms of mobile network security, the examples above indicate the scope and variation of SMS scams, and the attacker’s ability to evolve. Thus, managed security is essential for identifying new scams promptly and keeping subscribers protected. You may also be interested in reading our post on payday loan scams in Mexico.
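The red flags listed above are concrete enough that even a toy filter can look for them. The sketch below is an illustration only — the shortener list, urgency phrases, and scoring weights are invented, and real carrier-grade filtering is far more sophisticated — but it shows how an SMS body could be scored for common phishing signals:

```python
import re

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "is.gd"}  # sample list only
URGENCY = ("act now", "final notice", "immediately", "suspended", "last chance")

def suspicion_score(message: str) -> int:
    """Toy heuristic: count phishing red flags in an SMS body."""
    score = 0
    text = message.lower()
    # Shortened URLs hide the real destination.
    for host in re.findall(r"https?://([^/\s]+)", text):
        if host in SHORTENERS:
            score += 2
    # Urgent language is a classic social engineering tactic.
    score += sum(1 for phrase in URGENCY if phrase in text)
    # Requests for credentials or payment details.
    if re.search(r"verify|password|bank detail|card number", text):
        score += 2
    return score

print(suspicion_score("Your parcel is held. Act now: http://bit.ly/x1 to verify"))  # → 5
print(suspicion_score("Lunch at 1pm?"))  # → 0
```

A managed security service applies the same idea at scale, with continuously updated signals rather than a fixed word list.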
Caitriona is a recent graduate of the National University of Ireland, Galway, where she completed a bachelor’s degree in global commerce. As part of her degree, Caitriona studied abroad in Canada and worked as a marketing intern back in Ireland. Over the course of her studies, she developed a passion for both marketing and cybersecurity, specialising in marketing in her final year. Caitriona is now working as a marketing assistant at AdaptiveMobile Security, a role that marries both of her passions.
Source: https://blog.adaptivemobile.com/how-are-sms-spoofers-exploiting-global-trends
A new study found that the detection of an attack occurs nine hours after the first victim is hit.
The report, conducted by researchers from Google, Samsung, PayPal and Arizona State University, analyzed millions of visits to phishing pages. Their research led them to find two key stats:
- The detection of each attack occurs, on average, nine hours after the first victim
- The average phishing attack, from first to last victim, lasts 21 hours
Further, more than a third of all victim traffic to phishing websites took place after the attack was detected.
This underscores a few important points. Detection is not a proper security response. For one, it takes far too long to actually detect. And, as this research finds, a significant amount of activity on phishing sites happens even after detection occurs. That means that relying on warning banners as an anti-phishing tool won't get the job done.
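Using the averages from the report (detection roughly 9 hours into an attack that lasts roughly 21 hours), a quick back-of-the-envelope calculation shows how much of the attack window falls after detection:

```python
# Averages reported by the study: detection ~9 hours in, attack lasts ~21 hours.
HOURS_TO_DETECTION = 9
ATTACK_DURATION_HOURS = 21

# Share of the attack window in which the page is already known to be
# malicious yet is still collecting victims.
post_detection_window = (ATTACK_DURATION_HOURS - HOURS_TO_DETECTION) / ATTACK_DURATION_HOURS
print(f"{post_detection_window:.0%} of the average attack window falls after detection")
```

On these averages, well over half the attack window is "post-detection" time, which is consistent with the report's finding that a large share of victim traffic arrives after the page is flagged.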
As this research shows, trying to remediate after an email has been delivered is a losing proposition. Waiting until the email hits the inbox is a recipe for disaster. It's giving threat actors a head start, one they don't often relinquish.
The better way is to prevent the malicious email from reaching the inbox in the first place. Prevention is the best way forward. Once it hits the environment, it's too late.
If you want to understand what it takes to collect, track and analyze reams of data, just check the weather. There are constant fluctuations, scores of data points and intense interest from all over the planet. Analyze the data correctly and someone in the state of Washington knows whether or not to wear a raincoat. Do it poorly and there might be a massive traffic pileup from people driving too fast on slick roads.
Bryson Koehler understands this dynamic. As CIO of The Weather Company, he’s charged with increasing the accuracy of weather forecasting for the various entities the company owns, which include the Weather Channel and the Weather Underground mobile app.
The app in particular uses a massive personal sensor network to increase accuracy. Even a smartphone can be a basic weather station: The Weather Company uses algorithms that can determine the outside temperature for that user based on what the phone is reporting.
There are 100,000 sensors sending data worldwide (and 40,000 in the United States alone). Understandably, processing the data is no easy task.
[Related: Coping with weather may require a change of computer]
“Some of the data is interesting – such as lightning data or pollen data – and it doesn’t always help us create a forecast, but we can tell people who have allergies what to expect,” Koehler says. “Other types of data we get in real time, such as aircraft telemetry data – installations on commercial aircrafts that we bring down in real time to see atmospheric conditions.”
Koehler says the flight data is incredibly helpful. It can be used to alert airlines about possible changes in flight plans, or let them know the wear and tear on a plane is not as significant as it might have seemed during a flight. This data can help minimize delays, since the airlines are required to do extra safety checks related to severe weather. The Weather Company can tell if the real-time weather data did not reach as high a threshold as the pilot might have reported.
The analysis is intense. Stations provide data for humidity, barometric pressure, dew point, UV load, rainfall, wind and many other factors. There are billions of reports sent in each month, according to Koehler. The station data is repurposed into a format people can use and understand.
“People can pull up different layers of maps, and they can pull up forecasts from all over the globe,” he says. “In contrast, the National Weather Service in the U.S. has about 3,500 recording stations that they own and operate on behalf of U.S. taxpayers.”
More instruments mean more data
It’s an interesting dilemma to have such an abundance of data to process. Koehler says that the NWS is one of the world’s most instrumented government agencies. Yet, the Weather Company has to deal with many thousands of personal weather stations worldwide. Some of the stations are not easily accessible – they could be in a remote region of Iceland. Some of the weather sensors are as small as a Coke can and some involve an antenna that is three feet tall.
[Related: How to profit from the ultimate big data source: the weather]
The Weather Company acts as a “clearinghouse” for this data collection, says Koehler. The company monitors the stations and knows exactly how each one works – that the station is a RainWise product that collects data every second versus a Netatmo station that might not collect as often, for example.
Part of the challenge is in interpreting the data correctly. The Weather Company might look for trends from data collected from multiple phones and stations in the same area. The company has figured out how to compare data sets with varying levels of accuracy and quality and still derive some value, especially in terms of weather trends. All of the collected data is valuable, Koehler says.
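One simple way to blend readings of varying quality is an accuracy-weighted average. The sketch below is an illustration of that general idea only, not The Weather Company's actual algorithm; the station names, temperatures, and weights are invented.

```python
# Hypothetical readings: (station_id, temperature_c, accuracy_weight).
# The weights are made-up quality scores, e.g. a calibrated fixed station
# might be trusted more than a phone-derived estimate.
readings = [
    ("rainwise-042", 21.4, 1.0),
    ("netatmo-117", 22.1, 0.8),
    ("phone-9931", 19.8, 0.3),
]

def weighted_temperature(readings):
    """Accuracy-weighted average: one way to blend sources of mixed quality."""
    total_weight = sum(w for _, _, w in readings)
    return sum(t * w for _, t, w in readings) / total_weight

print(round(weighted_temperature(readings), 2))
```

Low-quality sources still contribute, but a single noisy phone reading cannot drag the estimate far from what the better-calibrated stations report.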
Interestingly, the data sets are typically quite small. In total, Koehler says his company collects a “couple hundred” terabytes from personal weather stations.
A whole lotta ping
“It’s a very chatty environment,” he says. “There is a high frequency of ping. So we have to use a very scalable infrastructure, since there are a few hundred devices added every day. And the frequency of the data input continues to rise.”
The Weather Company still uses Amazon Web Services for most of the data collection and processing. At the time of this writing, however, the company had added IBM Cloud to the mix, primarily due to costs and presence in the market.
“IBM Cloud has been growing rapidly, particularly as a resource for large enterprises,” says Charles King, a noted IT expert with Pund-IT. “IBM is dedicating significant budget to rolling out a global network of cloud data centers. By partnering with IBM, the Weather Channel will benefit from IBM’s global cloud resources [to support its own global network] and should also be able to monetize its assets as part of the [Internet of Things] services IBM is envisioning.
“If, as many scientists and insurance companies believe, we’re heading into a future where extreme weather events become increasingly common, the partnership should be a good deal for both companies and their respective customers,” King adds.
“The increasing use of social and sensor networks are producing significant amounts of high-throughput data available for mining in areas like customer behavior, biological systems and environmental conditions,” says Matt Wood, general manager for Data Science at Amazon Web Services. “The critical barrier to big data, which has traditionally been the infrastructure required to collect, compute and collaborate, is now being transformed through the use of cloud computing with AWS.”
In the end, what makes the collection from 100,000 sensors so noteworthy is that it is a major test of cloud infrastructure. King says the data is rich and layered, but fairly consistent and predictable in terms of how often the stations send in reports. Whether the reports are from an airplane, a station in Iceland or a smartphone, the algorithms are ready to help provide a more accurate weather forecast with every single ping.
Natural Gas Fills the Gap
According to an August 23, 2022, report from the U.S. Energy Information Administration (EIA), natural gas is filling an important gap in baseload generation.
In the U.S. lower 48 states, according to the report, electric power generated by natural gas-fired power plants hit three new highs in July 2022 – on July 18, July 20, and July 21 – with the highest demand ever in history, 6.37 million megawatt-hours, on July 21. (The previous high was on July 27, 2020.)
Despite relatively high natural gas prices, demand for natural gas for electricity generation has been strong throughout July as a result of three things: above-normal temperatures, reduced coal-fired electricity generation, and recent natural gas-fired capacity additions.
1 – Temperatures: “U.S. electricity demand usually peaks in the summer because of demand for air conditioning,” said the report. “This past July was especially hot, ranking as the third hottest on record in the United States. Before this year, the previous daily peak for natural gas-fired electricity generation had occurred on July 27, 2020, when natural gas prices were historically low.”
In July 2020, the Henry Hub natural gas price averaged $1.77 per million British thermal units (MMBtu). This July, the natural gas price averaged $7.28/MMBtu, over four times more expensive than in July 2020. Typically, noted the report, higher natural gas prices reduce natural gas price competitiveness relative to other sources, especially coal.
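The "over four times more expensive" figure can be checked directly from the two monthly averages quoted above:

```python
# Henry Hub monthly averages cited in the report, in $/MMBtu.
price_july_2020 = 1.77
price_july_2022 = 7.28

ratio = price_july_2022 / price_july_2020
print(f"July 2022 gas averaged {ratio:.1f}x the July 2020 price")
```

The ratio works out to roughly 4.1, matching the report's characterization.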
2 – Coal: So what role specifically did coal play in the increased consumption of natural gas? “This summer, coal-fired power plants have not been used as much as in prior summers,” said the EIA. “Continued retirements of coal-fired generating plants, relatively high coal prices, and lower-than-average coal stocks at power plants have limited coal consumption.” In May 2022, coal inventories at power plants averaged 20 percent lower than the prior year levels.
3 – Capacity: The third reason for the increased consumption of natural gas is that new capacity has increased the availability and use of natural gas-fired electricity. “Over the past 10 years, developers have added about 62 gigawatts of combined-cycle gas turbine capacity,” said the report. “The increased number of combined-cycle gas turbines in use has led to efficiency gains and less conversion losses, which means more electricity can be generated from the same amount of natural gas.”
Malware, short for ‘malicious software,’ is a type of software meant to harm computers and computer networks. We hear about different types of malware, such as botnet malware and ransomware, and different variants of those types of malware as well; but do we know enough about those malware currently threatening us? Here, we take an in-depth look at three of the most talked about malware of 2016.
Mirai Botnet Malware
Mirai is the Japanese word for "the future," a fitting name for one of the most advanced types of malware yet. This malware, created in August 2016, turns any Internet of Things (IoT) device running Linux into a remotely controlled bot (an application that performs automated tasks, such as setting an alarm) that can be combined with other bots and used as part of a botnet in large-scale network attacks. Though these bots are meant to make our lives easier, they are often not properly secured and can consequently be used in malicious attacks. The most notable use of Mirai botnet malware in an attack happened in October of this year in a Distributed Denial of Service (DDoS) attack against the Domain Name System (DNS) provider Dyn.
Dyn, the DNS provider for major websites including Twitter, Netflix, Reddit, and Spotify, was attacked by one of the largest DDoS attacks to date, an attack that was fueled by Mirai-infected IoT devices including Internet-enabled DVRs, surveillance cameras, and other Internet-enabled devices. Because of all of the popular websites it affected, this Mirari botnet attack is considered the attack that ‘shook the Internet.’
Mirai easily infects its victims because IoT devices are among the least protected devices out there. As of right now, the only way to combat this malware is to secure your IoT devices in various ways.
Scanning the news online with just the search term ‘ransomware,’ delivers a whole host of recent ransomware variants that are threatening our files. One of the variants that is most common among these search results is ‘Locky’ ransomware. This strain of ransomware is titled as such because it renames all of your important files so that they have the extension .locky.
The most common way that Locky infects your computer is via email. What happens is that the victim receives an email containing an attached document (Troj/DocDl-BCF) that is an illegible mess of odd symbols. The document then advises you to enable macros if the ‘encoding is incorrect.’ Seeing that the message on the document file is indiscernible to the reader, he or she will likely enable these macros, resulting in infection. If the macros are enabled, the text encoding is not actually corrected, instead, code inside of the document is run which then saves a file to disk and runs it. The saved file (Troj/Ransom-CGX) serves as a downloader, which fetches the final malware payload from the crooks, which could be anything, but in this case is usually the Locky Ransomware (Troj/Ransom-CGW); Locky then scrambles all files that match a long list of extensions, including videos, images, source code, and Office files.
Once a computer has been infected with Locky Ransomware, the victim’s desktop screensaver is changed to display the ransom payment instructions. These instructions lead the victim to the dark web, where they can pay the ransom. Unfortunately there is not much that can be done other than paying this ransom, which is why it is important to take preventative measures, such as those listed at the end of this article.
Popcorn Time Ransomware
Of all of the current, popular malware out there, ransomware variant, ‘Popcorn Time,’ is among the newest and most evil of them all. This form of ransomware is named after, but not related to, the torrenting site of the same name and it is believed that this malware was created by a team of Computer Science students from Syria.
This variant takes its cue from movies like The Box and the Saw movie series in that it forces its victims to make a detrimental choice: infection of their own files, or their friends’. Once hit with the cyber-attack, the victim has seven days to determine whether he or she will pay the 1 bitcoin ransom, equivalent to about $780 currently, or pass it along to two ‘friends’ instead. If the victim decides to give up his or her comrades’ information, the malware is allegedly deleted from the initial computer entirely and it moves on to ask for payment from its new victims. Once the ransom has been paid by either the initial or secondary victim(s), they will get a decryption code; the victim has four tries to type in the code before his or her computer files are all deleted.
This ‘pass the buck’ payment method is what makes this malware variant so unique. It prompts victims with a moral question that might turn up surprising results when their backs are against the wall.
How to Avoid These Major Malware Threats
- Avoid suspicious downloads—Malware infects computers primarily through the user clicking on a malicious link in an email or via a suspicious download. If you do not know the validity of a link, you should not click on it. This is a simple step that can go a long way when it comes to protecting your files.
- Back up your files—If you are unfortunate enough to be the victim of a malicious ransomware attack, you can avoid paying the criminals if all of your data is backed up to an external hard drive or some other source. The FBI advises victims of this crime to not pay the ransom, so as to discourage the hackers from doing the same thing again; they instead recommend that victims of the cyber-crime report the incident to the government agency so that they can hopefully track down these people.
- Secure your IoT devices—When it comes to Mirai botnet malware in particular, it is important to secure your Internet-connected devices. Many of these devices come with a default password which you should change in order to make it harder for cyber-criminals to get to your data. Also, when at all possible, turn off remote access to your IoT devices. By leaving a device active while not in use leaves it extremely vulnerable to use in an attack similar to that against Dyn DNS.
- Don’t enable macros in documents received via email—Microsoft itself turned off auto-execution of macros by default many years ago as a security measure. Many malware infections rely on persuading you to turn macros back on, so protect yourself by leaving macros disabled.
- Keep your anti-virus & anti-malware updated—While backing up your data and avoiding sneaky sites or links is effective, preventing these malware from getting onto your computer in the first place is a key preventative measure in fighting malware. Keeping your computer’s anti-virus and anti-malware up-to-date is something simple you can do to protect against malware, and most even allow you to set automatic updates, so you rarely need to think about it at all.
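Alongside these preventative steps, a simple indicator-of-compromise check can help spot a Locky infection that has already happened, since this strain renames encrypted files with the .locky extension. The sketch below scans a directory tree for such files; the example path is hypothetical.

```python
import os

def find_locky_files(root):
    """Walk a directory tree and collect paths ending in the .locky extension."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".locky"):
                hits.append(os.path.join(dirpath, name))
    return hits

# Example usage: point it at a share you want to check and alert on any hits.
# hits = find_locky_files("/mnt/shared")
```

A non-empty result is a strong sign of infection, at which point the machine should be isolated and restored from backups rather than trusted further.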
Hailey R. Carlson | Axiom Cyber Solutions | 12/14/2016
SD-WAN vs MPLS: Pros and Cons
As the demand for cloud-based applications and bandwidth requirements continue to grow and branch offices expand, some enterprises and service providers begin to deploy wide area network (WAN) services. MPLS and SD-WAN are the two most important technologies in WAN services. This post will compare MPLS and SD-WAN to give you an idea of which technology is best for you.
MPLS (Multiprotocol Label Switching) is a protocol for efficient network traffic flow between multiple locations. MPLS operates similarly on Ethernet switches and routers, sitting between Layer 2 and Layer 3 of the network. MPLS uses labels carrying important information so that traffic can be delivered to its destination without the need for in-depth packet analysis by routers. This enables fast packet forwarding and routing within a network, eliminating the inefficiencies of traditional Internet routing. In an MPLS network, each switch forwards data by reading the packet's label, swapping it for the outgoing label in its table, and sending the packet to the next switch in the sequence. The MPLS network provides customers with a method of prioritizing traffic, thus bringing a sense of traffic predictability to the network. This allows users to leverage a single network connection for multiple applications, providing high-performance and reliable connectivity for critical application traffic.
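The label-swapping idea can be illustrated with a toy simulation. Every value below (switch names, label numbers) is invented; the point is simply that each hop needs only a label lookup, not a full routing decision.

```python
# Toy label-forwarding tables for three label-switching routers (LSRs).
# Each entry maps an incoming label to (next_hop, outgoing_label);
# "pop" marks the end of the label-switched path.
TABLES = {
    "LSR-A": {17: ("LSR-B", 22)},
    "LSR-B": {22: ("LSR-C", 31)},
    "LSR-C": {31: ("exit", "pop")},
}

def forward(switch, label):
    """Follow a packet hop by hop using only label lookups."""
    path = [switch]
    while switch != "exit":
        next_hop, out_label = TABLES[switch][label]
        switch, label = next_hop, out_label
        path.append(switch)
    return path

print(forward("LSR-A", 17))  # ['LSR-A', 'LSR-B', 'LSR-C', 'exit']
```

Because the lookup key is a short label rather than a full destination address, the per-hop work is small and predictable, which is the performance advantage described above.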
As illustrated in the figure, time-sensitive applications such as video and voice can take the highest priority while less critical applications would take the lowest priority and still function properly.
Data centers and enterprises are demanding more flexible, open, and cloud-based WAN technologies, rather than proprietary or specialized WAN technology that often involves expensive, fixed circuits, or proprietary hardware. Therefore, SD-WAN (Software-Defined Wide Area Network) is developed and widely applied for WAN connections. SD-WAN erases geographic boundaries, enabling communications between different network endpoints. Thus, it is ideally suited for connectivity between branch offices and central enterprise networks, or between data centers separated by geographic distance. Featuring zero-touch deployment, SD-WAN simplifies management and reduces recurring network costs.
In the SD-WAN architecture, a company can benefit from end-to-end encryption across the entire network, including the wireless WAN, the Internet as well as the private MPLS. All devices and endpoints are completely authenticated thanks to the scalable key-exchange functionality and software-defined security from cloud services in the SD-WAN architecture.
MPLS Pros and Cons
Advantages of MPLS:
High Performance: MPLS predetermines network paths and transports traffic only along the paths. And it uses labels to isolate packets and assign a higher priority to important network traffic, which eliminates the complexity of routing traffic and provides higher performance.
Reliability: MPLS ensures reliable delivery of packets and maintains the quality of real-time protocols.
Disadvantages of MPLS:
Time-consuming Deployment: Configuring and deploying MPLS circuits is a slow process. Organizations using MPLS have difficulty reacting quickly to sudden increases in bandwidth demands.
Cost: With MPLS, the price per bandwidth is much higher than a broadband Internet connection.
Accessibility: It is made exclusive for point-to-point connectivity, meaning that cloud applications and SaaS cannot be accessed directly using MPLS.
SD-WAN Pros and Cons
Advantages of SD-WAN:
Performance: SD-WAN increases business agility and assures application performance.
Costs: SD-WAN removes expensive routing hardware and provides developers the freedom of choice without vendor lock-in constraints, bringing substantial savings. And it can leverage lower-cost services to reduce WAN connectivity costs.
No Geographic Restrictions: SD-WAN erases geographic boundaries and can add and remove connections at any site based on business needs.
Visibility and Control: SD-WAN can control the network more proactively and enable enterprises to mix and match connections based on content priorities.
Operation: SD-WAN simplifies operations through zero-touch provisioning and cloud-based management.
Disadvantages of SD-WAN:
Security: SD-WAN lacks on-site security features. SD-WAN makes every branch connected to the Internet, leaving every site open to attacks. And a data breach on one site could affect the entire organization.
Errors: SD-WANs may experience jitter and packet loss.
SD-WAN vs MPLS: Key Differences
After looking through the pros and cons of SD-WAN and MPLS, the differences between SD-WAN and MPLS might be much clearer. To help you fully understand these two technologies, MPLS and SD-WAN are compared in three key areas: cost, reliability, and security.
Managing the costs of WAN links is a challenge for global enterprises. As mentioned before, the traditional MPLS networks have been proven to be very expensive, and the emerging alternatives SD-WAN presents compelling price as well as performance benefits. Because MPLS networks contain bandwidth-hungry multimedia content, such as videos and AR/VR, the high cost per megabit required for MPLS is unattainable. In contrast, SD-WAN provides optimized multi-point connectivity by distributed, private data traffic exchange and control points, effectively reducing costs.
The reliability of the traffic flowing for businesses is a key concern for enterprises. SD-WAN tends to deliver a business-class, secure, and simple cloud-enabled WAN connection with open and software-based technology. MPLS technology, on the other hand, provides reliable, high-performance connectivity over dedicated network circuits. And it can effectively avoid packet loss. The reliability of both technologies is especially important for maintaining the quality of real-time devices, such as VoIP telephony, video conferencing, or remote desktops.
MPLS does not provide built-in security functionality and any sort of analysis of the data. For MPLS connectivity, traffic needs to be inspected for malware or other vulnerabilities, which requires network firewalls and additional security features to be deployed at the endpoints of the connection. To be fair, many SD-WAN solutions suffer from the same problem. But some SD-WAN solutions include integrated security by unifying secure connectivity.
Can SD-WAN Replace MPLS?
Although SD-WAN and MPLS have advantages and disadvantages in different aspects, SD-WAN can offer similar performance and reliability to dedicated MPLS circuits.
SD-WAN enables organizations to save on network investment by using relatively low-cost network links for most traffic. And SD-WAN also improves organizational WAN flexibility by eliminating the limitations of MPLS circuits. SD-WAN supports bandwidth expansion according to organizations' needs, without the delays associated with MPLS configuration. It optimizes routing so that traffic can be efficiently delivered to its destination, not limited to predefined MPLS circuit paths. And SD-WAN traffic can be routed anywhere, regardless of the geographic restriction the MPLS circuit suffers. That's why SD-WAN can replace MPLS to some extent.
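Per-application path selection, one of the core ideas behind SD-WAN's flexibility, can be sketched as follows. The link metrics and policy rules are invented for illustration and are not any vendor's actual algorithm.

```python
# Invented link metrics: (latency_ms, loss_pct, cost_per_gb).
# An SD-WAN policy might send voice over the lowest-latency link and
# bulk backup over the cheapest one.
LINKS = {
    "mpls": (12, 0.0, 8.00),
    "broadband": (28, 0.5, 1.20),
    "lte": (45, 1.0, 4.50),
}

def pick_link(app):
    if app == "voice":    # latency-sensitive traffic
        return min(LINKS, key=lambda link: LINKS[link][0])
    if app == "backup":   # cost-sensitive traffic
        return min(LINKS, key=lambda link: LINKS[link][2])
    return "broadband"    # default path for ordinary traffic

print(pick_link("voice"), pick_link("backup"))  # mpls broadband
```

This is also why the two technologies complement each other: an expensive MPLS circuit can be reserved for the traffic that genuinely needs its guarantees, while cheaper links absorb everything else.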
Can SD-WAN and MPLS Work Together?
SD-WAN and MPLS have complementary advantages. The two technologies can work together to address some of their previous limitations. SD-WAN is an overlay technology defined by software, and MPLS sits underneath it to provide transport services.
You don't have to choose between SD-WAN or MPLS. In fact, you can use them together to provide a free network architecture to support your current and future business.
The Hot Standby Router Protocol (HSRP) is a Cisco proprietary first-hop redundancy protocol (FHRP) designed to allow for transparent fail-over of the first-hop IP router, and has been described in detail in RFC 2281.
HSRP provides high network availability by providing first-hop routing redundancy for IP hosts on Ethernet, Fiber Distributed Data Interface (FDDI), Bridge-Group Virtual Interface (BVI), LAN Emulation (LANE), or Token Ring networks configured with a default gateway IP address. HSRP is used in a group of routers for selecting an active router and a standby router. In a group of router interfaces, the active router is the router of choice for routing packets; the standby router is the router that takes over when the active router fails or when preset conditions are met. HSRP active and standby routers send hello messages to the multicast address 224.0.0.2 using UDP port 1985.
The virtual router is simply an IP and MAC address pair that end devices have configured as their default gateway. The active router processes all packets and frames sent to the virtual router address. The virtual router does not process physical frames and exists in software only. The active router physically forwards packets sent to the MAC address of the virtual router. The virtual router MAC address is a well-known MAC address: 0000.0c07.acxx, where xx is the HSRP group number. For example, if the group is 20, the virtual MAC address is 0000.0c07.ac14 (remember that the group number in the MAC address is expressed in hexadecimal).
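The virtual MAC derivation is easy to reproduce; this small helper builds the HSRPv1 virtual MAC address from a group number:

```python
# HSRPv1 virtual MAC: well-known prefix 0000.0c07.ac followed by the
# group number as two hex digits (valid groups are 0-255).
def hsrp_virtual_mac(group):
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group must be 0-255")
    return f"0000.0c07.ac{group:02x}"

print(hsrp_virtual_mac(20))  # 0000.0c07.ac14
print(hsrp_virtual_mac(1))   # 0000.0c07.ac01
```

For group 1 this yields 0000.0c07.ac01, which matches the show standby output later in this post.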
When the active router fails, the other HSRP routers stop seeing hello messages from the active router. So, the standby router will be the new active router and, if possible, a new standby router will be elected. Because the new active router assumes both the IP and MAC addresses of the virtual router, the end stations see no disruption in service. The end-user stations continue to send packets to the virtual router MAC address, and the new active router delivers the packets to the destination.
HSRP has 2 timers:
- Hello interval time: Interval between successive HSRP hello messages from given router. Default is 3 seconds.
- Hold interval time: Interval between the receipt of a hello message and the presumption that the sending router has failed. Default is 10 seconds.
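The hold-timer logic can be sketched in a few lines; the timestamp values below are arbitrary illustration values, not real router state:

```python
# Defaults from the timer list above.
HELLO_INTERVAL = 3   # seconds between hellos from the active router
HOLD_INTERVAL = 10   # seconds of silence before the active router is presumed down

def active_presumed_failed(last_hello_at, now):
    """True once no hello has been seen for longer than the hold interval."""
    return (now - last_hello_at) > HOLD_INTERVAL

print(active_presumed_failed(last_hello_at=100, now=106))  # False: within hold time
print(active_presumed_failed(last_hello_at=100, now=111))  # True: hold timer expired
```

Because the hold interval spans several hello intervals, a single lost hello does not trigger a failover; only sustained silence does.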
In this example, there are 3 routers connected to the local segment 192.168.0.0/24. These routers belong to HSRP group number 1 and each physical interface has a different IP address (192.168.0.11, 192.168.0.12, 192.168.0.13).
When HSRP is enabled, these routers will be represented by 1 virtual router; in this case the MAC address is 0000.0c07.ac01 (0000.0c07.acxx is the HSRP well-known MAC address and the 01 is the HSRP group number) and the virtual IP address is 192.168.0.1. Remember that the default gateway defined on the PC is 192.168.0.1 and NOT the IP of a physical interface of the routers.
What are the commands used to enable HSRP?
To enable HSRP you must:
- Define the physical ip address of the interface
- Define the HSRP virtual ip address
Ciscozine_1#sh run interface fastethernet 0/0
Building configuration...

Current configuration : 123 bytes
!
interface FastEthernet0/0
 ip address 192.168.0.11 255.255.255.0
 standby 1 ip 192.168.0.1
end

Ciscozine_1#
Ciscozine_2#sh run interface fastethernet 0/0
Building configuration...

Current configuration : 123 bytes
!
interface FastEthernet0/0
 ip address 192.168.0.12 255.255.255.0
 standby 1 ip 192.168.0.1
end

Ciscozine_2#
Ciscozine_3#sh run interface fastethernet 0/0
Building configuration...

Current configuration : 123 bytes
!
interface FastEthernet0/0
 ip address 192.168.0.13 255.255.255.0
 standby 1 ip 192.168.0.1
end

Ciscozine_3#
Remember: The standby ip interface configuration command enables HSRP and establishes 192.168.0.1 as the IP address of the virtual router. The configurations of routers include this command so that the 3 routers share the same virtual IP address. The 1 establishes Hot Standby group 1. (If you do not specify a group number, the default is group 0.) The configuration for at least one of the routers in the Hot Standby group must specify the IP address of the virtual router; specifying the IP address of the virtual router is optional for other routers in the same Hot Standby group.
Optional settings are: preempt, priority, authentication, timers, …
To display Hot Standby Router Protocol (HSRP) information, use the show standby command in privileged EXEC mode.
show standby [type number [group-number]] [active | init | listen | standby] [brief]
- type number: (Optional) Interface type and number for which output is displayed.
- group-number: (Optional) Group number on the interface for which output is displayed.
- active: (Optional) Displays HSRP groups in the active state.
- init: (Optional) Displays HSRP groups in the initial state.
- listen: (Optional) Displays HSRP groups in the listen or learn state.
- standby: (Optional) Displays HSRP groups in the standby or speak state.
- brief: (Optional) Summarizes each standby group as a single line of output.
In this istance, the output of the show standby command is:
Ciscozine_2#sh standby
FastEthernet0/0 - Group 1
  State is Standby
    6 state changes, last state change 00:11:12
  Virtual IP address is 192.168.0.1
  Active virtual MAC address is 0000.0c07.ac01
    Local virtual MAC address is 0000.0c07.ac01 (default)
  Hello time 3 sec, hold time 10 sec
    Next hello sent in 2.772 secs
  Preemption disabled
  Active router is 192.168.0.13, priority 100 (expires in 7.736 sec)
  Standby router is local
  Priority 100 (default 100)
  IP redundancy name is "hsrp-Fa0/0-1" (default)
Ciscozine_2#
If the priorities of the routers are the same, the active router (the router that forwards the packets) will be the router with the highest IP address, and the standby router will be the router with the second-highest IP address.
To debug HSRP operations, use the debug standby command.
To frame the concept of security management versus privacy management, especially when it comes to international standards, we need to introduce some concepts. First, we need to understand the boundaries of responsibility between privacy and security when there is a personal data breach: those responsible for security are tasked with protecting the confidentiality of personal data, while those responsible for privacy classify the personal data and determine the security measures required.
After the roles have been clarified, the boundaries of responsibility need to be agreed. In order to do this, the organization needs security countermeasure classification categories. One way to do this is to categorise information into existing levels of security classification, for example: top secret, secret, confidential and unclassified. Personal data can then be classified based on the level of secrecy the information requires. You may also define and document other forms of data classification schemes for personal data. This definition is needed by those working on or developing privacy measures in order to clearly define responsibilities. For organizations seeking further guidance, I would recommend privacy and security international standards such as ISO/IEC 29151, the Code of Practice for Personally Identifiable Information Protection, as a good technical practice guide.
The difference between privacy and security
Security, by definition, means that the organization has a responsibility to secure and protect all types of information. Privacy within this framework means the appropriate use of personal information, within legal and internationally accepted guidelines. So how do we define “appropriate”?
A layman’s view of “appropriate” use might be to avoid any use of personal information that affects the life of an individual in a manner that causes harm. The level of this harm varies and might include a company collecting information to push direct mail to a person after collecting his or her contact details. Direct mail is neither good nor bad in itself; it depends on whether the recipient likes to receive it or not. The “appropriate” thing for the company is then to manage the preferences of the individuals. However, there may be another issue. If the company collects information but does nothing with it, the customer may wonder why that information was collected. What will it be used for? This is not a legal issue, but rather one of customer trust, and this brings us neatly to the issue of notice and consent.
Notice and consent
Of course, consent is tremendously important, and two types of consent are relevant. The first is explicit consent: an action by the principal of the personal information is required in order to grant consent for data collection, and consent cannot be assumed. An example is ticking an “agreement” box to agree to receive a newsletter. The second is implicit consent, where no action by the individual is required: if the individual does not tick a “disagreement” box, the organization can accept that as implicit consent to collect and use their personal information.
Within any large organization, there are a variety of different types of personal information (such as addresses, social security numbers, mobile phone numbers, email addresses) that may be used in different ways. That is why it is essential to build a spreadsheet listing the types of individual information and whether the various types of consent have been granted. It is interesting to note that in the European Union, under the rules and regulations of the General Data Protection Regulation (EU GDPR), explicit consent is required – these rules and regulations do not permit implicit consent.
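The consent register described above can be sketched in a few lines of code. The record fields, example data and the simplified compliance check below are illustrative assumptions only, not legal advice or a complete reading of the GDPR:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    data_type: str      # e.g. "email address", "mobile phone number"
    purpose: str        # what the data will be used for
    consent_kind: str   # "explicit" or "implicit"
    granted: bool       # whether the individual actually granted it

def gdpr_compliant(record: ConsentRecord) -> bool:
    # Simplified rule from the text above: under the EU GDPR only
    # explicit, affirmatively granted consent counts; implicit
    # (opt-out style) consent is not permitted.
    return record.granted and record.consent_kind == "explicit"

newsletter = ConsentRecord("email address", "newsletter", "explicit", True)
prefilled = ConsentRecord("email address", "marketing", "implicit", True)
print(gdpr_compliant(newsletter), gdpr_compliant(prefilled))  # True False
```

A real register would also track the lawful basis, collection date and withdrawal of consent per data type, but even this minimal structure makes the explicit/implicit distinction auditable.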
Using privacy and security international standards
International standards from ISO/IEC JTC 1/SC 27, the committee that addresses privacy and security, go a long way towards informing an organization on guidelines for how to treat consent. These international standards also highlight the importance of privacy impact assessments (PIAs). In the real world, the PIA might sometimes assess the impact on the business of a privacy or security incident, rather than being limited to the impact on the individual whose information might be compromised. SC 27 does not fully cover the impact on the individual, but that will probably be the next step in the evolution of these standards.
In the end, privacy and security risks cannot be decided by the company directly, or alone. Risk assessment must be shared between the organization and its customers or consumers. The responsibility of the company is to ensure that it follows all applicable rules and regulations regarding notice and consent, and makes every effort to go beyond the simple requirements when it comes to transparency in declaring its intended use of information to the customer or client. Failure to do so will not only incur the wrath of regulators, with the attendant fines and other possible civil legal penalties, but will also expose the company to enormous reputational risk.
Red Team vs Blue Team Defined
In a red team/blue team exercise, the red team is made up of offensive security experts who try to attack an organization’s cybersecurity defenses. The blue team defends against and responds to the red team attack.
Modeled after military training exercises, this drill is a face-off between two teams of highly trained cybersecurity professionals: a red team that uses real-world adversary tradecraft in an attempt to compromise the environment, and a blue team that consists of incident responders who work within the security unit to identify, assess and respond to the intrusion.
Red team/blue team simulations play an important role in defending the organization against a wide range of cyberattacks from today’s sophisticated adversaries. These exercises help organizations:
- Identify points of vulnerability as it relates to people, technologies and systems
- Determine areas of improvement in defensive incident response processes across every phase of the kill chain
- Build the organization’s first-hand experience about how to detect and contain a targeted attack
- Develop response and remediation activities to return the environment to a normal operating state
Front Lines Report
Every year our services team battles a host of new adversaries. Download the Cyber Front Lines report for analysis and pragmatic steps recommended by our services experts.
What is a red team
In a red team/blue team cybersecurity simulation, the red team acts as an adversary, attempting to identify and exploit potential weaknesses within the organization’s cyber defenses using sophisticated attack techniques. These offensive teams typically consist of highly experienced security professionals or independent ethical hackers who focus on penetration testing by imitating real-world attack techniques and methods.
The red team gains initial access usually through the theft of user credentials or social engineering techniques. Once inside the network, the red team elevates its privileges and moves laterally across systems with the goal of progressing as deeply as possible into the network, exfiltrating data while avoiding detection.
What is red teaming and why does your security team need it?
Red teaming is the act of systematically and rigorously (but ethically) identifying an attack path that breaches the organization’s security defense through real-world attack techniques. In adopting this adversarial approach, the organization’s defenses are based not on the theoretical capabilities of security tools and systems, but their actual performance in the presence of real-world threats. Red teaming is a critical component in accurately assessing the company’s prevention, detection and remediation capabilities and maturity.
What is a blue team
If the red team is playing offense, then the blue team is on defense. Typically, this group consists of incident response consultants who provide guidance to the IT security team on where to make improvements to stop sophisticated types of cyberattacks and threats. The IT security team is then responsible for maintaining the internal network against various types of risk.
While many organizations consider prevention the gold standard of security, detection and remediation are equally important to overall defense capabilities. One key metric is the organization’s “breakout time” — the critical window between when an intruder compromises the first machine and when they can move laterally to other systems on the network.
CrowdStrike typically recommends a “1-10-60 rule,” which means that organizations should be able to detect an intrusion in under a minute, assess its risk level within 10 minutes and eject the adversary in less than one hour.
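The 1-10-60 rule lends itself to a simple automated check against incident-response metrics. The function name and the sample timings below are hypothetical; the thresholds come directly from the rule as stated above:

```python
def meets_1_10_60(detect_min: float, assess_min: float, eject_min: float) -> bool:
    """Check incident timings (in minutes, measured from initial compromise)
    against the 1-10-60 rule: detect in under 1 minute, assess the risk
    within 10 minutes, and eject the adversary in under 60 minutes."""
    return detect_min < 1 and assess_min <= 10 and eject_min < 60

# Example: detected in 30 seconds, assessed in 8 minutes, ejected in 45.
print(meets_1_10_60(0.5, 8, 45))   # True
# Detection took 2 minutes, so the first window was missed.
print(meets_1_10_60(2.0, 8, 45))   # False
```

A blue team could run this kind of check over its incident tickets after each exercise to track whether breakout time is actually shrinking from one engagement to the next.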
Benefits of red team/blue team exercises
Implementing a red team/blue team strategy allows organizations to actively test their existing cyber defenses and capabilities in a low-risk environment. By engaging these two groups, it is possible to continuously evolve the organization’s security strategy based on the company’s unique weaknesses and vulnerabilities, as well as the latest real-world attack techniques.
Through red team/blue team exercises it is possible for the organization to:
- Identify misconfigurations and coverage gaps in existing security products
- Strengthen network security to detect targeted attacks and improve breakout time
- Raise healthy competition among security personnel and foster cooperation among the IT and security teams
- Elevate awareness among staff as to the risk of human vulnerabilities which may compromise the organization’s security
- Build the skills and maturity of the organization’s security capabilities within a safe, low-risk training environment
Who is the purple team?
In some cases, companies organize a red team/blue team exercise with outside resources that do not fully cooperate with internal security teams. For example, digital adversaries hired to play the part of the red team may not share their attack techniques with the blue team or fully debrief them on points of weaknesses within the existing security infrastructure — leaving open the possibility that some gaps may remain once the exercise concludes.
A so-called “purple team” is the term used to describe a red team and blue team that work in unison. These teams share information and insights in order to improve the organization’s overall security.
At CrowdStrike, we believe that red team/blue team exercises hold relatively little value unless both teams fully debrief all stakeholders after each engagement and offer a detailed report on all aspects of project activity, including test techniques, access points, vulnerabilities and other specific information that will help the organization adequately close gaps and strengthen their defenses. For our purposes, “purple teaming” is synonymous with red team/blue team exercises.
Red Team vs Blue Team Skills
Red team skill set
A successful red team must be devious in nature, assuming the mindset of a sophisticated adversary to gain access to the network and advance undetected through the environment. The ideal team member for the red group is both technical and creative, capable of exploiting system weaknesses and human nature. It’s also important that the red team be familiar with threat actor tactics, techniques and procedures (TTPs) and the attack tools and frameworks today’s adversaries use.
For example, a Florida teenager recently used spear-phishing tactics as well as social engineering techniques to obtain employee credentials and access internal systems at Twitter, resulting in a high-profile breach of more than 100 celebrity accounts.
A member of the red team should have:
- A deep awareness of computer systems and protocols, as well as security techniques, tools and safeguards
- Strong software development skills in order to develop custom made tools to circumvent common security mechanisms and measures
- Experience in penetration testing, which would help exploit common vulnerabilities and avoid activities that are often monitored or easily detected
- Social engineering skills that allow the team member to manipulate others into sharing information or credentials
Blue team skill set
While the blue team is technically focused on defense, much of their job is proactive in nature. Ideally, this team identifies and neutralizes risks and threats before they inflict damage on the organization. However, the increasing sophistication of attacks and adversaries makes this an all but impossible task for even the most skilled cybersecurity professionals.
The blue team’s job is equal parts prevention, detection and remediation. Common skills for the blue team include:
- A full understanding of the organization’s security strategy across people, tools and technologies
- Analysis skills to accurately identify the most dangerous threats and prioritize responses accordingly
- Hardening techniques to reduce the attack surface, particularly as it relates to the domain name system (DNS) to prevent phishing attacks and other web-based breach techniques
- Keen awareness of the company’s existing security detection tools and systems and their alert mechanisms
How Do the Red Team and Blue Team Work Together?
Scenarios When a Red Team/Blue Team Exercise Is Needed
Red team/blue team exercises are a critical part of any robust and effective security strategy. Ideally, these exercises help the organization identify weaknesses in the people, processes and technologies within the network perimeter, as well as pinpoint security gaps such as backdoors and other access vulnerabilities that may exist within the security architecture. This information ultimately will help customers strengthen their defenses and train or exercise their security teams to better respond to threats.
Since many breaches can go undetected for months or even years, it is important to conduct red team/blue team exercises on a regular basis. Research shows that adversaries dwell, on average, 197 days within a network environment before they are detected and ejected. This raises the stakes for companies in that attackers can use this time to set up backdoors or otherwise alter the network to create new points of access that could be exploited in the future.
One important differentiator in the way that CrowdStrike approaches red team/blue team exercises is in terms of the overall strategy. We use red team activities to seed the environment with data so the blue team can gauge the risk associated with each incident and respond accordingly. As such, we don’t treat this exercise as a proverbial war game where our clients attempt to block each and every red team action, but effectively assess and prioritize those events that the data reveals to be the greatest threat.
Red Team Exercise Examples
Red teams use a variety of techniques and tools to exploit gaps within the security architecture. For example, in assuming the role of a hacker, a red team member may infect the host with malware to deactivate security controls or use social engineering techniques to steal access credentials.
Red team activities commonly follow the MITRE ATT&CK Framework, which is a globally-accessible knowledge base of adversary tactics, techniques and methods based on real-world experience and events. The Framework serves as a foundation for the development of prevention, detection and response capabilities that can be customized based on each organization’s unique needs and new developments within the threat landscape.
Examples of red team activities include:
- Penetration testing in which a red team member attempts to access the system using a variety of real-world techniques
- Social engineering tactics, which aim to manipulate employees or other network members into sharing, disclosing or creating network credentials
- Intercepting communication in order to map the network or gain more information about the environment in order to circumvent common security techniques
- Cloning an administrator’s access cards to gain entry to unrestricted areas
Blue Team Exercise Examples
Functioning as the organization’s line of defense, the blue team makes use of security tools, protocols, systems and other resources to protect the organization and identify gaps in its detection capabilities. The blue team’s environment should mirror the organization’s current security system, which may have misconfigured tools, unpatched software or other known or unknown risks.
Examples of blue team exercises include:
- Performing DNS research
- Conducting digital analysis to create a baseline of network activity and more easily spot unusual or suspicious activity
- Reviewing, configuring and monitoring security software throughout the environment
- Ensuring perimeter security methods, such as firewalls, antivirus and anti-malware software, are properly configured and up-to-date
- Employing least-privilege access, which means that the organization grants the lowest level of access possible to each user or device to help limit lateral movement across the network in the event of a breach
- Leveraging microsegmentation, a security technique that involves dividing perimeters into small zones to maintain separate access to every part of the network
How to Build an Effective Red Team and Blue Team
How CrowdStrike® Services can be the right solution for organizations:
Adversaries are constantly evolving their attack TTPs, which can lead to breaches going undetected for weeks or months. At the same time, organizations are failing to detect sophisticated attacks because of ineffective security controls and gaps in their cybersecurity defenses. Security teams need to make sure they are ready for a targeted attack, and the ability to withstand one type of attack does not mean the team has the tools and visibility to withstand a more sophisticated attack.
The CrowdStrike Adversary Emulation Exercise is designed to give your organization the experience of a sophisticated targeted attack by real-world threat actors — without the damage or costs of experiencing a real breach. The CrowdStrike Services team leverages real-world threat actor TTPs derived from intelligence collected by CrowdStrike experts in the field responding to incidents and through the CrowdStrike Falcon® platform, which identifies trillions of events and millions of indicators every week. CrowdStrike Services develops a targeted attack campaign specific to your organization and aimed at users of interest, just as an adversary would. The team takes an objective, goal-oriented approach to the attack, focusing on demonstrating access to critical information in your organization to help show the impact of a breach to your leadership without having to suffer through a real breach. This exercise will help you answer the question, “Are we prepared for a targeted attack?”
In this December 2021 speech, Bo Li, Deputy Managing Director of the International Monetary Fund (IMF), reinforced how digital technology permeates all aspects of society, increasing our dependency on interconnectivity and reliance on the networks that support it.
As a result, it’s essential to establish building blocks that address systemic risks that could compromise this ecosystem.
Li goes onto to identify 3 key issues around cyber resilience:
- Cyber resilience isn’t an isolated issue: As technology shifts from being a vehicle for efficiency to one that enables new ways of business and social interactions, so does the impact of threats that target weaknesses in technology.
- Cyber resilience isn’t a one-dimensional issue: The origin, motivation, and execution of cyberattacks are varied and ever evolving – and the various safe-guarding governance mechanisms that exist need to adapt with the same agility.
- Cyber resilience is a societal issue: As technology becomes an essential part of life, so cyber resilience becomes increasingly dependent on the behavior and choices of people and society.
These 3 issues strongly align with the cyber resilience challenges the banking and finance sector faces, both historically and in the current era of rapid digital transformation.
The banking and finance sector is critical infrastructure
Payments systems are vast and interconnected, personal and business banking is now performed from anywhere over a multitude of channels, and open banking exposes core banking systems to third-party applications – organizations must be able to deliver these services reliably while ensuring they protect customer data, adhere to regulations, and maintain trust.
Further, the deep interdependencies between the banks, the local and global economies they serve, and the potential threats to these systems in case of instability has led the finance sector to be classified as critical infrastructure in many parts of the world. This brings their importance to society to the level of services such as healthcare, electricity, water, and telecoms.
Banking and finance firms must detect and contain cyberattacks
In line with the expectation of other critical infrastructure, organizations in the financial services sector are no longer expected to simply be able to recover and restore after an incident. They must now be able to detect and contain incidences at the outset and ensure that a minimum level of acceptable service can be maintained throughout.
The Bank for International Settlements – “the central banks’ bank” – reiterates this in their Guidance on Cyber Resilience for Financial Market Infrastructures:
“The safe and efficient operation of financial market infrastructures (FMIs) is essential to maintaining and promoting financial stability and economic growth. If not properly managed, FMIs can be sources of financial shocks, such as liquidity dislocations and credit losses, or a major channel through which these shocks are transmitted across domestic and international financial markets. In this context, the level of cyber resilience, which contributes to an FMI’s operational resilience, can be a decisive factor in the overall resilience of the financial system and the broader economy.”
Many regulators are seeking to uplift the cyber resilience posture of their member organizations, including:
- The Australian Prudential Regulation Authority’s Prudential Standard CPS 234
- The OCC’s Sound Practices to Protect Operational Resilience
- The EU’s Digital Operational Resiliency Act
Complementary to these are frameworks like CBEST from the Bank of England that allow members to assess the cyber resilience of their firms’ critical business services.
How Zero Trust Segmentation helps deliver cyber resilience
To quote Bo Li: “The increasing digitalization of financial services, in combination with the presence of high-value assets and data, make the financial system a prime target. The high level of interconnectedness across financial institutions ... creates a potential vulnerability wherein a localized cyber incident could quickly spread across markets and jurisdictions.”
Zero Trust Segmentation helps address these challenges and improve cyber resilience by:
- Protecting high-value assets and data by ensuring that only authorized access is permitted, reducing the available attack surface.
- Providing consistent visibility across the firm’s hybrid infrastructure, highlighting all dependencies.
- Limiting access on known high-risk ports to prevent the rapid spread of ransomware.
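A default-deny segmentation policy of the kind described above can be sketched as a simple allowlist check. The zone names, ports and policy table below are hypothetical illustrations, not Illumio's actual policy model:

```python
# Hypothetical policy table: (source_zone, dest_zone, port) tuples that are
# explicitly authorized; everything not listed is denied by default,
# which is the core Zero Trust idea.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

# Known high-risk ports (SMB, RDP) that ransomware commonly uses to
# spread laterally; these are blocked regardless of the policy table.
HIGH_RISK_PORTS = {445, 3389}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    if port in HIGH_RISK_PORTS:
        return False
    return (src, dst, port) in ALLOWED_FLOWS

print(flow_permitted("web-tier", "app-tier", 8443))  # True
print(flow_permitted("web-tier", "db-tier", 5432))   # False: no direct path
print(flow_permitted("app-tier", "db-tier", 445))    # False: high-risk port
```

Modeling policy as an explicit allowlist also gives visibility for free: the table itself documents every dependency between zones, which is the second benefit listed above.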
Financial organizations globally, including 6 out of 10 of the world’s largest banks, rely on the Illumio Zero Trust Segmentation platform to improve their cyber resilience.
Read our new industry guide to find out how Illumio can help implement Zero Trust Segmentation in your financial services or banking organization.
And learn how our customers in the industry stay cyber resilient with Zero Trust Segmentation:
Universities in the United States have seen a new wave of phishing attacks targeting students and staff. The email messages used the theme of online dating to trick the victim into downloading a Remote Access Trojan (RAT) onto their device so that the attackers could steal sensitive information. The RAT being used is named Hupigon RAT and was previously used by Chinese state-backed threat actors as early as 2010. The RAT originally exploited zero-day vulnerabilities affecting versions 6, 7, and 8 of Internet Explorer. The current phishing campaign is believed to be the work of financially motivated criminals, not a state-sponsored threat group. The email includes pictures of two women and asks the victim to select one to connect with on a dating website. Once the link to the online dating profile is clicked, an executable used to install Hupigon is downloaded to the victim’s machine. The campaign was most active from April 14-15, 2020, and sent approximately 80,000 messages to different victims at that time. In total, the campaign sent 150,000 emails throughout 60 different countries, with almost half of the emails targeting education establishments.
Will the cold storage data center of the future include a DNA synthesizer? According to a new research paper by the University of Washington and Microsoft, it’s a strong possibility.
Today, we generate data faster than we can increase storage capacity. The volume of digital data worldwide is projected to exceed 16 zettabytes sometime next year, the paper’s authors wrote, citing a forecast by IDC Research. “Alarmingly, the exponential [data] growth rate easily exceeds our ability to store it, even when accounting for forecast improvements in storage technologies,” they said.
A big portion of the world’s data sits in archival storage, where the densest medium currently is tape, offering maximum density of about 10 GB per cubic millimeter. One research project has demonstrated an optical disk technology that’s 10 times denser than tape.
Nature’s Data Storage
But there’s another approach that promises storage density of 1 Exabyte per cubic millimeter, or eight orders of magnitude higher than tape. That approach is encoding data the same way nature encodes instructions for building every living thing on Earth: DNA.
In addition to density, DNA storage addresses another big limitation of archival storage: longevity. Tape can hold data for 10 to 30 years before data integrity starts to corrode, and spinning disks are rated for three to five years. DNA’s observed half-life is more than 500 years in harsh environments, according to the paper.
The idea to store data in the form of synthetic DNA has been around for a long time, but the huge improvements in cost and efficiency of synthesizing and sequencing genes in recent years have made its feasibility a lot more probable. Its state of the art went from a 23-character message in 1999 to a 739 kB message in 2013.
As today’s booming biotech industry delivers orders-of-magnitude cost and efficiency improvements in DNA sequencing and synthesis, it quickly raises the limits of how much data can be stored using the method. Growth in sequencing productivity eclipses Moore’s Law, the paper’s authors wrote.
Big DNA Storage Improvements Proposed
The work presented in the paper pushes the technology further in two big ways: the researchers propose a way to improve integrity of stored data (current DNA storage error rates are about 1 percent per nucleotide) and a way to access individual pieces of data in a sequence randomly (with the current approach, you have to sequence and decode an entire DNA pool to access a single byte within).
The paper proposes an architecture for a DNA storage system that includes a DNA synthesizer, a storage container, and a DNA sequencer. The synthesizer encodes data to be stored, the container holds pools of DNA that map to a volume, and the sequencer reads DNA sequences and converts them to digital data.
It addresses the error problem with redundancy, an approach that has been proposed before but without regard to the impact of redundancy on storage density. The new encoding scheme introduced in the paper offers “controllable redundancy,” where you can specify a different level of reliability and density for each type of data.
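The simplest way to see how bits become nucleotides is a direct base-4 mapping: two bits per base over the A/C/G/T alphabet. The sketch below illustrates only this core idea — the paper's actual scheme layers addressing and controllable redundancy on top, and practical encodings also avoid error-prone sequences such as long runs of the same base:

```python
BASES = "ACGT"  # each nucleotide carries two bits: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, most significant bits first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a sequence produced by bytes_to_dna back into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

print(bytes_to_dna(b"hi"))                    # CGGACGGC
print(dna_to_bytes(bytes_to_dna(b"hi")))      # b'hi'
```

Against this naive baseline, a 1 percent per-nucleotide error rate would quickly corrupt data, which is exactly why the paper's encoding spends some of its density budget on tunable redundancy.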
The problem of random access is solved by using the same technique molecular biologists use to isolate specific regions of a DNA sequence in research. Polymerase Chain Reaction is a technique used to “amplify” a piece of DNA by repeated cycles of heating and cooling. The DNA storage researchers use PCR to amplify only the desired data, which they say accelerates reads and enables specific data to be accessed without sequencing the entire DNA pool.
While DNA storage is not practical today, the rate of progress in DNA sequencing and synthesis in the biotech industry and the “impending limit of silicon technology” make it something computer architects should seriously consider today, the researchers conclude. They envision hybrid silicon and biochemical archival storage systems as the ultimate cold storage of the future.
- Machine learning systems can be vulnerable to discriminatory biases, according to Yieldify CTO Richard Sharp in an article for Entrepreneur.com.
- A number of studies have shown that unconscious bias can slip into machine learning algorithms, like personalized online advertising, if efforts aren't made to ensure such algorithms are fair.
- This could be particularly vexing as machine learning moves into areas like credit scoring, hiring or criminal sentencing.
An examination of machine learning systems found they can discriminate by propagating prevailing social biases.
"If you train a machine learning algorithm on real data from the world we live in, it will pick up on these biases," Sharp wrote. "And to make matters worse, such algorithms have the potential to perpetuate or even exacerbate these biases when deployed."
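One simple way auditors probe for this kind of propagated bias is to compare a model's positive-outcome rate across groups, a demographic-parity check. The function and the made-up predictions below are purely illustrative:

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group. A large gap between groups is a
    signal (not proof) that the model may be reproducing a social bias
    picked up from its training data."""
    rates = {}
    for g in set(groups):
        picked = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return rates

# Hypothetical binary predictions (1 = favorable outcome) and group labels.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(preds, groups))  # {'a': 0.75, 'b': 0.25}
```

A gap this wide would warrant investigation in a domain like credit scoring or hiring, though fairness has several competing formal definitions and no single metric settles the question.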
Sharp suggests tech companies educate their employees about the negative implications of the biases in the models they are building. For example, Google and Facebook both recently developed training courses on unconscious bias.
Without taking control of these issues, companies' systems can end up deeply embedded with social biases, introducing risks to both the company and the user.
A DNS Response Flood is a layer 7 attack that floods a target with DNS responses sent from multiple attacking sources.
An attacker generates Standard DNS query response packets with a random record from one of the following types:
"A", "MX", "CNAME", "ALL"
- Type “A” for IPv4 addresses
- Type “CNAME” (Canonical Names) – specifies a domain name that has to be queried in order to resolve the original DNS query
- Type “MX” (Mail eXchange) to request information about the mail exchange server for a specific DNS domain name
- Type “ALL” (ANY) – requests all record types available for the queried name
As you can see in Image 1, the attacker (220.127.116.11) generates multiple DNS responses for random records (like mx1.zwhyd.vsgmwv.com or A 18.104.22.168).
The target responds with an ICMP error message stating that its destination port (53) is unreachable.
“Image 1 – DNS Responses”
As shown in Image 2, DNS primarily uses the User Datagram Protocol (UDP) on port number 53 (while TCP is also part of the DNS protocol, it is not used in this attack vector). A DNS response contains both the query and the answers:
“Image 2 – DNS Response Packet Structure”
Image 3 shows an example of a DNS response packet with an answer that contains the IP of the FQDN record in the query:
“Image 3 – DNS Response Packet Structure”
Image 4 shows a statistical summary. For this single attacker, the number of responses per second (PPS) exceeds 25.
“Image 4- Requests Per Second”
Analysis of DNS Response attack in Wireshark – Filters:
As mentioned in the technical analysis above, this attack carries DNS over UDP, so the most basic filter that can be used is “udp”. To identify DNS packets specifically, use the “dns” filter.
For showing only DNS responses use “dns.flags == 0x8180”.
If you see a single source sending many such responses, it could be an attacker.
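The heuristic in that last sentence — many responses from one source — can be sketched as a small per-source counter. This is a hypothetical, simplified detector: it assumes the relevant fields have already been extracted from a capture (for example with tshark), and the 0x8180 flag value and the threshold of 25 follow the analysis above.

```python
from collections import Counter

DNS_RESPONSE_FLAGS = 0x8180  # standard query response, recursion desired + available


def flag_suspects(packets, threshold=25):
    """Count DNS responses per source IP and flag any source whose count
    exceeds `threshold` for the capture window (the capture above showed
    more than 25 responses per second from a single attacker).

    `packets` is an iterable of (src_ip, dst_port, dns_flags) tuples that
    were already extracted from a capture with a tool of your choice.
    """
    counts = Counter(
        src
        for src, dst_port, flags in packets
        if dst_port == 53 and flags == DNS_RESPONSE_FLAGS
    )
    return {src: n for src, n in counts.items() if n > threshold}
```

Any source returned by `flag_suspects` is worth a closer look in Wireshark with the filters above.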
Download example PCAP of DNS Response attack:
*Note: IPs have been randomized to ensure privacy.

Download
Welcome to the free SAP Repetitive Manufacturing (REM) tutorial, a part of our free SAP PP Training course. Here, we will explain what is REM in SAP, its key features, and its processes.
SAP Repetitive Manufacturing (REM) is one of the types of manufacturing processes supported by SAP. Different industries are suited to different manufacturing processes; factors such as production rates, consumer demand, and product complexity influence which process is selected.
The commonly used manufacturing processes are:
- Discrete manufacturing
- Repetitive manufacturing
- Process manufacturing
The SAP Repetitive Manufacturing (REM) method is used in industries where production is quantity-based rather than order-based. Another main aspect of repetitive manufacturing is that production is planned over periods of time, such as months or years. Consumer electronics is a typical example of an industry that uses SAP repetitive manufacturing:
Let’s take a device like a mobile phone, for example. Once the design is finalized, its production will be done in mass quantities. The production is not order-based, the focus is to produce a quantity within a specific period. In this example, the time period or planning horizon can be prior to the release of the new model. So, the manufacturing will be quantity and time-based.
Another key aspect of repetitive manufacturing in SAP is that products have a steady sequence of activities. This means that the routing is simple and there are not many routing variations. Below are some key differences and similarities between discrete manufacturing and repetitive manufacturing.
| Discrete Manufacturing | Repetitive Manufacturing |
| --- | --- |
| Order-based production | Quantity- and period-based production |
| Frequent product changes | A steady flow of similar products without many changes |
| Both make-to-order and make-to-stock production methods can be utilized | Both make-to-order and make-to-stock production methods can be utilized |
| Order-based backflushing, performed at the confirmation of individual orders | Period-based backflushing |
| Order-based cost controlling | Period-based cost controlling |
| High product complexity | Low product complexity |
SAP Repetitive Manufacturing Process
SAP facilitates both Make-To-Order (MTO) and Make-To-Stock (MTS) repetitive manufacturing methods:
- MTS: Production is done without any reference to an order. Planned independent requirements are used to create demand in the system. When sale orders are received, those will be fulfilled by the warehouse stock.
- MTO: This is known as sales order-based production. Each production order will have a reference to the sale order. Stocks are maintained with the sale order assignment.
Master Data for Repetitive Manufacturing in SAP
Several master data objects need to be maintained to perform repetitive manufacturing in SAP.
- Material master – In the material master, we need to maintain the repetitive manufacturing profile and the production version.
- Bill of material (BOM) – Material BOM needs to be created.
- Work center – Production lines have to be mapped as work centers.
- Routing – In repetitive manufacturing, rate routing is used. There can be one or more operations.
Other than these master data, we need to define the product cost collector. Next, we can learn how the master data related to repetitive manufacturing is set up in SAP.
SAP Repetitive Manufacturing Profile
The Repetitive Manufacturing (REM) profile is the key to a successful setup of REM in SAP. The REM profile controls the way how REM is carried out in the system. This can be set up by following the below SAP configuration path or using the transaction code OSPT. Existing REM profiles can be changed from transaction code OSP2.
IMG -> Logistics -> Production -> Repetitive Manufacturing -> Control -> Create repetitive manufacturing profile using assistant.
The REM profile contains the below information:
- REM production types: Production type selection is the initial step of setting up the profile. We need to select whether MTO or MTS production method is used.
- Reporting points: This is defined to record consumption details such as work in progress and stock management during inventory management. This is somewhat like a milestone operation used in discrete manufacturing.
- Automatic goods movement: Determines whether goods movement or backflush needs to happen during reporting point confirmation.
- Activities posting: Determines whether the activities need to be posted to a product cost collector at the time of backflush.
- Firming plan orders: This option will make sure that the plan orders are not changed by the MRP run.
- Automatic stock determination: By defining the automatic stock determination procedure, the system will suggest the available stocks for consumption.
- Batch determination procedure: If the material is batch-managed, we can define a batch determination procedure and assign it to the REM profile. Then the system will suggest batches based on the defined criteria.
- Movement types for postings: We can define the movement types to be used for different postings. The system will suggest the default movement types. For example, 261 is the movement type for goods issue. If there are many customized movement types to be used, we can maintain those in the REM profile.
The REM profile is assigned in the material master MRP 4 tab.
An SAP production version is a combination of a BOM and a routing. In REM, rate routing is used. The production version is maintained in the MRP 4 tab of the material master. Under the planning data section of the production version, we can select rate routing and then specify the routing group.
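To make the BOM-plus-routing pairing concrete, here is a minimal data-model sketch in plain Python. It is purely conceptual, not SAP/ABAP code, and every material, routing, and work-center name in it is invented:

```python
from dataclasses import dataclass


@dataclass
class RateRouting:
    """A rate routing: ordered operations, each with an hourly production rate."""
    group: str
    operations: list  # tuples of (operation_number, work_center, units_per_hour)


@dataclass
class ProductionVersion:
    """Conceptual sketch: a production version ties one alternative BOM
    to one rate routing for a given material."""
    material: str
    bom_alternative: int
    routing: RateRouting

    def line_rate(self) -> float:
        # The sustainable rate of the line is bounded by its slowest operation.
        return min(rate for _, _, rate in self.routing.operations)
```

For example, a version pairing BOM alternative 1 with a two-operation rate routing would report the slower operation's rate as the line rate:

```python
pv = ProductionVersion(
    "PHONE-01", 1,
    RateRouting("RR01", [("0010", "LINE1", 120), ("0020", "LINE1", 90)]),
)
print(pv.line_rate())  # 90
```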
Bill of Material
REM requires a Bill of Materials to be maintained. We can have several alternative BOMs and they need to be assigned to the correct production versions.
Rate routing is used specifically in REM. It enables a production rate to be maintained for each operation. Rate routing has the same structure as standard routing and can be created from the transaction code CA21.
Product Cost Collector
The SAP order settlement can be done in two ways:
- Order related
- Product related
Order-related settlement is used when the product range is flexible and costs are managed in individual production lots. In product-related settlement, costs are settled based on the product cost collector.
A product cost collector is used in SAP REM to capture the actual costs. During production confirmations, the cost collector gathers the costs related to the goods movements performed, and settlement happens periodically.
SAP Repetitive Manufacturing Process Flow
The above diagram consists of the basic steps followed during a REM process in SAP. Production requirements will be captured based on the MTS or MTO scenario. Next, the production planning and line loading will be done. Line loading or allocating the work centers will be based on the capacity. Then the production lists, which have the materials, quantities, and dates, will be handed over to the shop floor. Based on the requirement, methods like Kanban will be used to handle material staging. The next stage is the production and confirmation of the final order. The REM process will end when the cost captured through the cost collector is settled.
This concludes the tutorial on repetitive manufacturing. In summary, we have studied the key features of SAP Repetitive Manufacturing (REM), master data in REM, and the REM process.
Did you like this tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It’d be a big help for us, and hopefully, it’s something we can address for you in the improvement of our free SAP PP tutorials.
Brazil passed its comprehensive general data protection law, the Lei Geral de Proteção de Dados (LGPD), on 14 August 2018. Modeled on its European cousin, the General Data Protection Regulation (GDPR), the new legislation aims to replace and supplement existing legal norms by regulating the use of personal data in both the public and private sectors.
The law was set to come into force on 15 August 2020, after President Michel Temer extended its initial 18-month deadline by an additional six months. Doubts concerning the LGPD's future enforcement surfaced, however, when the same president vetoed several provisions of the bill before its passing, most notably those needed to create Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD).
All uncertainty has now been cast aside: on 8 July 2019, Brazil’s new president, Jair Bolsonaro, promulgated Law No. 13.853/2019 which amends some provisions of the LGPD and provides for the creation of the ANPD. With its data protection authority now a reality, Brazil is moving full speed ahead towards the enforcement of the LGPD.
Many companies that have gone through the rush for GDPR compliance consider the European regulation to be the most exhaustive of its kind in the world today, but there are notable differences between it and the LGPD which means that GDPR compliance does not ensure LGPD compliance, although it’s definitely a step in the right direction. Let’s look at some of the key differences.
While both the GDPR and the LGPD protect any information relating to an identified or identifiable natural person, unlike the GDPR, the LGPD does not give a detailed definition of what kind of information it refers to, making its scope very broad.
Anonymized data falls outside the scope of both laws as long as reasonable steps have been taken to ensure that it cannot be re-identified. The LGPD however makes an exception: data is considered personal when used for the behavior profiling of a particular natural person, if that person is identified.
Pseudonymized data meanwhile falls under the scope of the GDPR since it’s considered information on an identifiable natural person, but the LGPD does not mention it except in the context of research undergone by public health agencies.
Both the GDPR and the LGPD have an extraterritorial reach: they apply to all companies offering goods or services to data subjects in the EU or Brazil, regardless of where they are located.
There is one notable difference between them: the GDPR explicitly includes organizations that are not established in the EU, but that monitor the behavior of individuals located in it. The LGPD has no such provision. The LGPD will also not apply to data flows that originate outside of Brazil and are merely transmitted, but not further processed in the country.
Legal bases for data processing
One of the major differences between the two laws is the legal bases for data processing. The GDPR lists six, while the LGPD goes further and includes ten. To the GDPR’s original six: explicit consent, contractual performance, public task, vital interest, legal obligation and legitimate interest, the LGPD adds a further four: studies by a research body, exercise of rights in legal proceedings, health protection and credit protection.
The most interesting addition to this list is the credit protection, a provision exclusive to Brazil, which was no doubt included due to current discussions of reform of one of the laws that regulates credit scoring in Brazil, the Positive Credit History Law.
Data Protection Officers
Under the GDPR, data controllers and processors whose core activities consist either of processing operations which require regular and systematic monitoring of data subjects on a large scale, or processing on a large scale of special categories of data, are required to appoint a data protection officer (DPO).
The LGPD on the other hand only requires data controllers to appoint a DPO. However, it does not limit the circumstances under which a DPO must be appointed which means that all companies, regardless of their size, type or the volume of the data they collect will need a DPO. That being said, while this is how things stand at the moment, the ANPD is allowed to adjust this provision and, now that its creation has been ensured, is expected to issue complementary rules to limit the applicability of this particular requirement.
Data Subjects’ Access Requests
An individual’s right to data access is guaranteed under both the GDPR and the LGPD. Under it, data subjects can request access to the data a company has collected about them and can request further actions concerning it: its portability, deletion or correction. The GDPR allows organizations 30 days to answer data subjects’ access requests, while the LGPD only gives them 15 days.
There is also a difference in the cost of the requests: the LGPD makes them mandatorily free of charge, while the GDPR makes gratuity optional.
Mandatory Data Breach Notifications
While both laws have made data breach notifications mandatory, their requirements differ slightly. While the GDPR imposes a strict 72 hours in which companies are required to notify Data Protection Authorities (DPAs) of data breaches, organizations falling under the incidence of the LGPD must do so within an undefined “reasonable” time. This timeframe however is subject to adjustment from the ANPD as well. The LGPD requires companies to also notify data subjects of data breaches, something that is not a requirement under the GDPR.
The GDPR’s notorious fines allow DPAs across Europe to issue fines of up to 4% of a company’s global annual turnover or €20,000,000 (roughly $22,000,000), whichever is higher. Under the LGPD, organizations face similar, if slightly less grave, penalties: up to 2% of their total revenue in Brazil in the previous year or up to 50,000,000 Brazilian Reals (approximately $13,000,000), whichever is higher. The LGPD also lists possible daily penalties to enforce compliance.
Government agencies fall outside the scope of LGPD fines, while the GDPR leaves it up to DPAs to decide on this matter.
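The two penalty ceilings can be compared with a short sketch. It follows the article's "whichever is higher" phrasing and is purely illustrative: the revenue figures below are invented, and none of this is legal advice.

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: 4% of global annual turnover
    or EUR 20,000,000, whichever is higher."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)


def lgpd_max_fine(brazil_annual_revenue_brl: float) -> float:
    """Upper bound of an LGPD fine, as described above: 2% of the
    previous year's revenue in Brazil or BRL 50,000,000,
    whichever is higher."""
    return max(0.02 * brazil_annual_revenue_brl, 50_000_000)


# A hypothetical company with EUR 1bn global turnover and BRL 500m Brazilian revenue:
print(gdpr_max_fine(1_000_000_000))  # 40000000.0 (4% exceeds the EUR 20m floor)
print(lgpd_max_fine(500_000_000))    # 50000000 (the BRL 50m floor applies)
```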
While there are a great number of similarities between the LGPD and the GDPR, there are points such as the legal bases and mandatory data breach notifications on which the LGPD goes further than the European legislation.
There are also many provisions left broad in the Brazilian law that are subject to adjustment from the ANPD and that the new authority is likely to tackle in the months leading up to the LGPD’s enforcement. It remains to be seen if the complementary rules it will issue will bring the LGPD closer to or further away from the GDPR.
Frequently Asked Questions
The LGPD (Lei Geral de Proteção de Dados) is Brazil’s new data protection law that establishes how the personal data of Brazilian users should be collected, handled, stored, and shared by organizations. The LGPD is similar to the EU’s General Data Protection Regulation (GDPR) and it applies to organizations that offer their services to people in Brazil.
Brazil’s LGPD applies to all individuals and legal entities, both public and private that carry out personal data processing activities that take place or are related to individuals located in Brazil, aim to supply goods or services in the country or involve personal data collected in Brazil. Like the GDPR, the LGPD has an extraterritorial reach and all companies that serve the Brazilian market are subject to the data protection law.
The GDPR applies to both EU and non-EU companies that offer goods or services to customers in the EU, process the personal data of EU citizens' or monitor the behaviour of individuals in the EU. The regulation only applies to organizations engaged in professional or commercial activities.
The EU’s General Data Protection Regulation (GDPR) sets out seven key principles:
- Lawfulness, fairness, and transparency
- Purpose limitation
- Data minimization
- Accuracy
- Storage limitation
- Integrity and confidentiality
- Accountability
With the cybersecurity industry facing a shortfall of 1.8 million professionals by 2022, increased efforts are underway to find and train more infosec pros – especially women who, according to a Global Information Security Workforce Study, comprise only 11% of the cybersecurity workforce.
And although a number of challenges exist in attracting women and young girls to cybersecurity careers, there is considerable overlap between the attributes they seek in a career and what the profession can offer, according to a recent survey by Kaspersky Lab and interviews with female cybersecurity pros.
In its global survey of approximately 2,000 females aged 16 to 21, Kaspersky's report, "Beyond 11% - A Study Into Why Women Are Not Entering Cybersecurity," found:
- 72% want a career they can be passionate about
- 83% do not believe a cybersecurity career would be dull
- 23% want a career that can make a difference to society
- 44% believe cybersecurity is helpful to society
- 52% want a career that will enable them to earn a good salary
Median annual salary is $100,000 for cybersecurity staff members, according to a Dark Reading 2016 Security Salary Survey.
"Being passionate is important for any job," says Ambareen Siraj, founder of the national Women in Cybersecurity (WiCyS) organization and an associate computer science professor at Tennessee Tech University. "Cybersecurity is a very dynamic field and you are always learning. If you want to be in a field that is always refreshed and you have a big thirst for learning, then you should consider cybersecurity."
As for the 17% of survey respondents who believe a cybersecurity career would be boring, it comes down to a lack of understanding of the various roles in cybersecurity that can range from technical to training to developing policies, says Mari Galloway, director of finance and communications for the Women's Society of Cyberjutsu.
Benefit to Society
A career in cybersecurity can make a difference in society, Galloway says.
"Take healthcare. So much technology is used to keep people alive. All it takes is one bad hacker to exploit a vulnerability in a hospital system and bring the whole operation down, potentially killing patients," Galloway explains. "It's the cyber professionals' job to ensure things like this don't happen."
Noushin Shabab, senior security researcher with Kaspersky's Global Research & Analysis Team, says she was surprised by the low percentage of women and young girls who noted they wanted to make a difference in society with their career and believed that cybersecurity helped society.
"Despite the [23%] statistic, I feel deep down, a woman wants to make their mark in society," Shabab says. "Hopefully with the hard work and efforts that women around the world are taking in today's world, more women will feel empowered to make a difference in their respective societies. If women believe they can make an impact (big or small) this is already a big start to change how they feel about their careers."
Salary and Job Security
Salaries are a big factor in women's career choices but not the only deciding factor, Siraj says. Cybersecurity not only provides a good salary but, in many cases, infosec professionals are able to work from home and can relocate to a new job with relative ease, since there is virtually no unemployment in the industry, she adds.
Despite these similar attributes that can be found in cybersecurity careers, it remains a challenge to attract women and young girls to the field, these cybersecurity professionals say.
"All that women hear about in the media is about the bad guys in cybersecurity. They don't hear about the researchers who made a difference and helped society," Siraj says. "In the movies and TV shows, cybersecurity professionals are portrayed as guys sitting in a dark room alone, surrounded by computers, and as highly intelligent nerds. That is not how most women want to view themselves."
Shabab noted WannaCry, ExPetr and other large-scale cyberattacks may attract more women to the IT security field, rather than chase them away. These attacks proved cybersecurity is essential for every individual, home user, and enterprise – perhaps fueling a desire to pursue a cybersecurity career and protect what matters most to them, she adds.
A range of efforts are underway to dispel of cybersecurity career stereotypes and educate young girls and women about the profession, these women note. Cyberjutsu Girls Academy, Girl Scouts, Black Girls Code, WiCyS, and others are providing information and role models, they add.
"What will bring more women in are seeing women at various levels making decisions, [girls] getting hands-on experience in STEM, cyber at a young age, providing equal opportunities for women to grow, and laying out a roadmap of potential career paths for young women to visualize where they can go," Galloway says.
Though it may not be readily apparent, today’s major cities and metropolitan regions are highly connected and data-driven. Upon closer inspection, a myriad of monitoring devices and sensors can be seen providing a constant stream of operational data to municipal systems.
And while city-wide monitoring may not be a new phenomenon, the mass proliferation of Internet of Things (IoT) devices and systems in recent years has resulted in the smart city designation: a city that uses IoT, sensors, and data extensively to improve infrastructure, energy use, utilities, public services, and more.
The global IoT in smart cities market size is projected to hit $260 billion by 2025, at a compound annual growth rate (CAGR) of 18.1% during the forecast period, according to research from Markets and Markets.
This market growth is driven by a combination of factors, most notably an increasing number of government programs and public-private partnership (PPP) initiatives for developing IoT smart city advanced services to enhance the daily lives of citizens. The rising adoption of connected solutions and smart technologies is also expected to drive market growth in this category.
IoT deployments in smart cities are primarily aimed at improving sustainability in the face of urbanization challenges, supporting the implementation of smart intercity transportation networks, optimizing water management efforts, and improving the lighting and heating efficiency in buildings and public structures.
Here, we explore some of the ways that smart cities use innovative digital technologies on top of traditional networks and services to improve the lives of residents and visitors alike:
5 Examples of IoT in Smart Cities
1. Smart Parking Lots
Many smart cities have implemented intelligent parking solutions capable of monitoring for available public parking spaces.
By using underground sensors to detect whether a parking space is occupied, cities can save drivers the hassle of hunting for parking spaces, while reducing emissions and traffic. Users can typically access parking notifications and guidance via a smartphone app and/or website; more commonly, the technology is used for displays in multi-level parking structures.
The global smart parking systems market size was $4.42 billion in 2020 and is projected to increase at a CAGR of 21.5% between 2021 and 2028, according to Grand View Research.
Smart parking system vendors are currently experiencing a sharp decrease in demand due to the ongoing pandemic, with extended lockdown measures and steep drops in traffic to blame.
2. Smart Waste Management
One of the more active areas of smart city innovation involves IoT-based waste management.
These solutions are designed to optimize waste collection processes, reduce the operational costs, and increase the efficiency of waste management as well as mitigate environmental issues related to waste disposal efforts.
For example, waste containers equipped with level sensors automatically notify a centralized waste management platform when waste levels are exceeded; this in turn dispatches a truck driver via smartphone to service the containers.
The smart waste management market was valued at $1.77 billion in 2020 and is anticipated to hit $6.52 billion by 2026, at a CAGR of 25.68% during the forecast period, according to Mordor Research.
3. Smart Traffic Control Systems
To reduce the waste in time and money caused by traffic delays, many city governments are leveraging IoT to automate and optimize the city’s traffic control systems.
These solutions typically consist of a network of sensors installed at intersections for measuring traffic volume and adjusting stop-and-go times accordingly.
For example, Pittsburgh, Pennsylvania is deploying smart traffic signals at one-third of its 610 intersections. Since starting the project, the city has experienced a 41% decrease in intersection wait times and a 21% reduction in vehicle emissions. Cities like Dallas are also implementing IoT-enabled traffic management systems to improve road congestion management.
The global intelligent traffic management system market size in 2020 was valued at $9.12 billion and is expected to increase at a CAGR of 11.9% from 2021 to 2028, according to Grand View Research.
The ongoing pandemic and its impact on drivers and commuters has severely impacted global demand for traffic management systems.
See more: Top Industrial IoT (IIoT) Trends
4. Smart Street Lightning
IoT-based connected lighting is another highly active smart city domain that helps municipalities increase energy efficiency and reduce energy and maintenance costs.
Smart lights automatically adjust their brightness levels based on street activity as well as transmit maintenance data to anticipate outages and enable faster response times.
Chicago projects that its connected street light program will result in an annual savings of $10 million in energy costs. Miami has some of the most connected street lights in the world — an achievement that has saved the city 44% in energy costs annually, compared to traditional street lights. Paris recently retrofitted its aging streetlight infrastructure with 280,000 connected IPv6-based LED streetlights. Using a Wi-SUN Alliance-supported IEEE 802.15.4 wireless RF mesh architecture, the connected streetlights were implemented as network-as-a-service deployments, saving the city 70% on annual streetlight energy costs.
The global smart lighting market size was $10.79 billion in 2020 and is projected to reach $45.47 billion in 2028, at a CAGR of 19.7% over the forecast period, according to Emergen Research.
Aside from smart street lighting scenarios, increased use of smart lighting in security use cases and for reducing overall power consumption are some key factors behind the increase in the global smart lighting market.
5. Smart Utility Meters
Utility companies serving municipalities are installing IoT-based solutions for automating and optimizing city-wide energy use.
For example, smart meters attached to city buildings are connected to a smart energy grid and enable the utility company to track energy consumption, improve energy flow management, and more. This also enables them to carry out administration operations remotely, such as disconnecting service, implementing new pricing, installing new load management programs, and troubleshooting local power infrastructure.
Additionally, they allow the utility company to pinpoint exact outage locations and restore operations faster and more efficiently.
The global smart meter market size was valued at $21.79 billion in 2020 and is expected to hit $54.34 billion by 2030, at a CAGR of 10.10% from 2021 to 2030, according to Allied Market Research.
Cities are in constant evolution and flux. Smart cities are designed to be responsive and agile to the continuously shifting needs of their inhabitants. By adopting the latest IoT-based smart city technologies, city planners and administrators can access the critical data and necessary insights to improve the lives of their citizens and tackle their most pressing issues.
Centuries of scholarship and financial competition have been shaped by the fact that knowledge was scarce—that is, in the economic sense of being something that had to be paid for.
Knowledge is power, power yields wealth, wealth enables access to knowledge: It's been a positive feedback loop, whether the knowledge in question was a map to the Indies or the source code for Windows. Yes, there's been a tradition of academic knowledge being shared, but that's been academic unless you could pay a research staff to filter through the flood.
The feedback loop connecting wealth and knowledge is being broken, though, as public online databases become the norm—even in fields where success was once defined by privileged access to primary sources. Last month, for example, the Sloan Digital Sky Survey released the results of its first year of astronomical data collection, giving any Internet user free access to the world's largest collection of images and spectra (with four more years of collection planned). The National Science Foundation is funding a National Science Digital Library, a constellation of portals comprising collections in fields such as engineering, science and mathematics, with planned availability next year. By 2010, MIT's OpenCourseWare project will freely share lecture notes, course outlines, syllabuses, reading lists and assignments for as many as 2,000 courses. Even today, amateur investigators in almost any field enjoy better facilities for free research and analysis than full-time professionals could buy in previous decades.
It used to be that your wealth or your contacts determined what you could know. Our network of knowledge changes that. Everybody knows.
From now on, finding the sharpest needles in the communal haystack of data will determine who succeeds.
One of the most powerful pieces of technology many of us own is our cell phone. Once only a device for placing a call from anywhere, phones today are capable of almost anything a desktop computer can do. This can be empowering, but it can also make you vulnerable.
It can be hard to know how to secure your phone or how to look for signs that someone may be using it against you. When someone with malicious intent gets hold of your device, here are some tips for how to tell if your phone is hacked.
Quick Tip: You can close out of this browser window quickly by hitting Alt+F4 on a PC or Command+Option+Esc on a Mac.
Your Mobile Phone Basics
Before we can get into the specifics of how your phone can be hijacked, you need to know some basics of how it works and what it does. Your phone is capable of quite a bit that you may not be aware of and therefore would never think to check.
You have likely seen requests for permissions on your phone before. When you download and install a new app from the Appstore or Google Play, the app will ask you if it can use certain parts of your phone. These include access to things like your contacts, your microphone or camera, and your location.
There are limitations you can place on apps if you do not want to allow those permissions. You can also elect not to use an app if its required permissions are too intrusive for your comfort.
It is important to be aware of what permissions which apps currently have and make sure that you were the one who allowed those permissions. To view which apps have which permissions, follow these instructions for an Apple device and these instructions for an Android device. From there, you can also revoke any permissions that you did not intend to allow.
It’s important to know what apps you have installed on your phone; this helps with both updating permissions and catching apps you didn’t install (as long as they aren’t hidden—more about that next). While you can see most of your apps on your home screen or in the folders you would normally use, there is a better way to view them.
Your phone will have a list of all installed apps and their permissions in your settings, which you may have already accessed if you’re following along. This will give you a full picture of your installed apps all in one place, and if you see any you don’t remember installing, no longer use, or don’t want anymore, uninstall them.
Note: there may be a lot of installed apps in this list that you do not recognize. Although it can be a warning sign, this does not necessarily mean that your device has been compromised. All new phones come with a certain set of pre-loaded apps. Some run in the background to make your phone run smoother, and some are games or other apps that sponsors paid to have installed on new devices. When in doubt, search for the application in either Google (or your search engine of choice) or in your app store.
Here is how to access your apps list via settings on an Android phone (scroll about 75% down the page to the “Managing Your Applications from the Apps Settings Screen” section). For iPhone users, this information will be helpful.
Now you know how to access your apps on your phone. However, it is possible for installed apps to be hidden from that view. If someone had access to your phone, they could easily install a hidden app—much the same as you would install a legitimate one.
These hidden apps may not show up in your normal app list at all, or they may have names that do not match their function. Often, these apps are named innocuous things like “Calculator.”
The purposes of these hidden apps are numerous. Depending on the app and permissions, it may be used to track your location. It also could listen in on your conversations, view your text messages, or spy on your browser activity. They are sometimes called spy apps.
Privacy and Preventing Unwanted Access and Use
All smartphones are capable of some basic security features. Taking the time to familiarize yourself with and enable these features can help you keep your phone private. It may feel daunting at first, but once you get the hang of these fundamentals, you will hopefully have much greater peace of mind.
Note: If you want a deeper dive than what we cover here, Apple has a comprehensive privacy controls document here. There is not a comparable official document for Android because the devices vary, but ComputerWorld put together this useful guide.
Passcodes and Patterns
One of the simplest and easiest ways to secure your device is to add a passcode or pattern to unlock it. This can be a number, usually 4-6 digits, or a pattern you draw.
These features themselves also often have different levels within them. For example, if you select a pattern to unlock, you can set your device to hide the pattern as you’re drawing it. This prevents anyone from spying over your shoulder and easily learning your pattern.
Once you get the basics of how to set up a passcode or pattern, play around with the related settings to access the more complex features.
A step up in security from a standard passcode or pattern, many phones can use biometrics to unlock them. This includes both fingerprint scanning and facial recognition. There are pros and cons to both passcodes/patterns and biometrics.
One reason you may prefer biometrics is that you can give someone else access to your device if you choose to, but you don’t have to share a code that would let anyone in.
For example, if you wanted to let your child play games on your phone, you could program in their fingerprint or face. But, they wouldn’t be able to give anyone else access unless that person was physically present with the child.
You can access the biometric unlock settings from the same place you would set up a passcode or pattern–those instructions are linked in the section above.
Secure Folders and Other Hiding Places
A lesser-known feature for phones, the secure folder and other ways to hide data and apps can be useful. If you have photos or files that you do not want anyone else to see, this feature may help. A secure folder allows you to put files and apps you don’t want prying eyes (or anyone else with access to your phone) to see.
Note: Samsung’s secure folder has more options and is more versatile than Apple’s; with Samsung, you can put photos, videos, apps, and other files in your secure folder. With Apple, you’re a little more limited.
Private Browsing on Mobile
As we explained in the first part of this series, private browsing is possible on your phone. This means that your web activity will not be tracked in your history, nor will it be tied to your account (unless you sign into your account while in private browsing mode). All of the major mobile browsers, including Chrome and Safari, allow for this option.
Note: while your browser itself will not retain any information about the sites you visit, if you are connected to WiFi, a tech-savvy user will still be able to see which sites you visit. To avoid this, switch off WiFi during your private browsing session and use your cell network (data coverage) instead.
There are unfortunately many ways that a person with access to your phone could use it against you. The good news is that there are ways to detect malicious use and ways to guard yourself against further misuse of your mobile technology. There are also tons of free resources online that can help you learn about the capabilities of your phone. When in doubt, contact your cell phone carrier (Verizon, T-Mobile, AT&T, etc.) and set up a meeting with them to learn more about your device.
This post is part of a series. The first post covered the basics of digital abuse. If you haven’t yet, we highly recommend you check it out.
In future posts, we will share tips for financial data security, home and physical security, and other information for empowering you to keep yourself secure. Let us know if there is something specific you want to know, and we’ll talk about that, too.
If you want to join a group of supportive folks chatting, check out our CEO Evan Francen’s daily inSANITY check-in meetings.
It's a few weeks into 2020, and after the celebrations it is time to set some goals for the new year! Setting some goals for your tech habits this year can help improve the security of your life.
BACK UP TO THE CLOUD:
The cloud can be considered a digital lifesaver nowadays. Your data is critical, and a horrible catastrophe could render the data on your PC or smartphone completely inaccessible. Sometimes there may be a backup, but it is from several months ago and missing everything you've done lately. Here are some solutions you can put in place to avoid these scenarios: create an Amazon Drive, Dropbox, Microsoft OneDrive, or Apple iCloud account, and sync your important data files to the cloud automatically.
GET SMART ABOUT PASSWORDS:
Using a bad, easy-to-guess password or reusing any password can ruin your life. So, generating a strong, unique password for every account is very important. All passwords should be at least 10 characters long, with uppercase, lowercase, and unique characters. Using a password manager, like LastPass, lets you securely store all your passwords, heavily encrypted, in one location.
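A password meeting these criteria can be generated with Python's standard `secrets` module, which is designed for cryptographic use; a minimal sketch:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password containing upper, lower, digit, and symbol characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

A password manager then does the remembering, so each account can get its own random string.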
TURN ON 2FA EVERYWHERE:
Adding multi-factor authentication or two-factor authentication (2FA) to every important online account is crucial. Implementing 2FA helps put a roadblock in an attacker’s way to steal your credentials. This is especially important for email credentials, any kind of banking or payment service, and all your social media accounts. Both Google and Microsoft make simple, elegant authenticator apps for smartphones. You should absolutely try the free authentication apps to protect your data.
TAKE YOUR UPDATES:
Some people believe that the best version of their OS was the one released three years ago, or five, or even ten, and that everything that has happened since has been an unmitigated disaster. In fact, every major software platform updates itself continuously. Problems with updates are relatively rare and generally solved within days or, very rarely, a week or two. It is easy enough to defer updates for up to a month while you wait for others to identify any issues.
INVEST IN AN ANTIVIRUS:
A basic antivirus on your computer is better than nothing. Antivirus software helps protect you from known malware. There are plenty of free antivirus products available for you to use and install. Follow the concept of Buy One and Get One Free: invest in one antivirus product and install one free version, because if one fails to detect malware, the other may be able to detect it instead.
Uranus surrounded by hydrogen sulphide clouds
Oxford University researchers have discovered that the planet Uranus is surrounded by hydrogen sulphide clouds that smell like rotten eggs or someone passing gas.
Study lead author Patrick Irwin from Oxford University said that anyone unlucky enough to descend through Uranus's clouds would face suffocation and exposure in the roughly -200°C atmosphere, made mostly of hydrogen, helium, and methane.
Irwin and his colleagues studied Uranus's air using the Near-Infrared Integral Field Spectrometer (NIFS), an instrument on the 8-meter Gemini North telescope in Hawaii. NIFS scrutinized sunlight reflected from the atmosphere just above the Uranus cloud tops and spotted the signature of hydrogen sulfide.
Discovery of hydrogen sulfide
The discovery of hydrogen sulfide in Uranus's cloud deck sets it sharply apart from the inner gas giant planets, Jupiter and Saturn. The greater part of Jupiter's and Saturn's upper clouds is composed of ammonia ice, but it seems this is not the case for Uranus.
These distinctions in atmospheric composition shed light on questions about the planets' formation and history. The results also set a lower limit on the amount of hydrogen sulfide around Uranus.
Dr. Glenn Orton of NASA's Jet Propulsion Laboratory said that hydrogen sulfide gas had been suspected of affecting the millimetre and radio spectrum of Uranus for some time, but the team was unable to attribute the absorption needed to identify it positively. Now, that part of the puzzle is falling into place as well.
The new findings demonstrate that the climate may be disagreeable for people. This remote world is fertile ground for examining the early history of our solar system and maybe understanding the physical conditions on other vast, frosty worlds orbiting the stars beyond our Sun.
Big Data – Productivity, Innovation And Competitiveness
NoSQL databases, MapReduce & Hadoop
Big data refers to datasets that are so large, diverse, and fast-changing that they require advanced and unique storage, management, analysis, and visualization technologies. According to McKinsey, Big Data is "the next frontier for innovation, competition and productivity". The right use of Big Data can increase productivity, innovation, and competitiveness for organizations. Inhi Suh, IBM vice president of big data, stated that businesses should place a greater emphasis on analytics projects. In fact, big data analytics is an important step in extracting knowledge from huge amounts of data, and it is a competitive advantage for most companies.
According to Gupta and Jyoti (2014), "Big data analytics is the process of analysing big data to find hidden patterns, unknown correlations and other useful information that can be extracted to make better decisions". Agrawal et al. (2011) described the multiple phases of big data analysis, which are Data Acquisition and Recording; Information Extraction and Cleaning; Data Integration, Aggregation, and Representation; Data Modeling and Analysis; and Interpretation. All these phases are crucial, and high accuracy in each of these steps will lead to effective big data analytics. In this way, the promised benefits of big data will be achieved.
A wide variety of analytical techniques and technologies can be used to extract useful information from large collections of data. Such information helps companies gain valuable insights for predicting customer behaviour, running effective marketing campaigns, increasing revenue, and so on. Maltby (2011) reviewed the literature on big data analytics and introduced several techniques, such as machine learning, data mining, text analytics, crowdsourcing, cluster analysis, time series analysis, network analysis, predictive modelling, association rules, and regression, that can be used to extract information from a data set and transform it into an understandable structure for further use. In fact, the choice of data analytic techniques depends on the research objectives/questions, the nature of the data, and the available technologies.
In addition, there is a wide variety of software products and technologies to facilitate big data analytics. EDWs, visualization products, NoSQL databases, MapReduce & Hadoop, and cloud computing are examples of the more common technologies used in big data analytics. Not all of these techniques and technologies can be used for every project or organization. The needs and potential of each organization should be evaluated in order to choose the appropriate tools for big data analytics.
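To make the MapReduce idea concrete, here is a toy word count in plain Python that mirrors the map, shuffle, and reduce pattern Hadoop executes at cluster scale (an illustrative sketch, not Hadoop's actual API):

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit (word, 1) pairs from each document
    for doc in docs:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data needs big tools", "hadoop is a big data tool"]
print(reduce_phase(map_phase(docs)))  # {'big': 3, 'data': 2, ...}
```

In a real Hadoop or Spark job, the map and reduce steps run in parallel across many machines, but the logical structure is the same.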
Studies indicate that data analysis is considerably more challenging than simply locating, identifying, understanding, and citing data. Many researchers believe that most of the challenges and concerns with data are related to volume and velocity. However, a recent survey of more than 100 data scientists, conducted by the creator of an open source computational database management system, indicates that the variety of data sources (not just data volume and velocity) is the main challenge in analysing data. Furthermore, the results of this study indicated that Hadoop cannot be a viable solution for some cases that require complex analytics. It would seem that data analysis is a clear bottleneck in many applications. In line with this idea, Agrawal and his colleagues (2011) reported common challenges in big data analysis: heterogeneity and incompleteness of data, scale, timeliness, privacy, error-handling, lack of structure, and visualization. These challenges should be addressed for effective data analysis.
By Mojgan Afshari
Mojgan Afshari is a senior lecturer in the Department of Educational Management, Planning and Policy at the University of Malaya. She earned a Bachelor of Science in Industrial Applied Chemistry from Tehran, Iran. Then, she completed her Master’s degree in Educational Administration. After living in Malaysia for a few years, she pursued her PhD in Educational Administration with a focus on ICT use in education from the University Putra Malaysia. She currently teaches courses in managing change and creativity and statistics in education at the graduate level.
What is a malicious URL?
A malicious URL is one that was created with the intent to distribute malware such as ransomware. They are often contained within spam, phishing, and spearphishing emails. Oftentimes they are disguised by URL shorteners such as Bit.ly or by modified hyperlinks.
How do I identify a malicious link?
- Hover over the URL – the destination will be displayed; if it is long and you don't recognize the domain, don't click
- Never click shortened URLs in email – clicking links using Bit.ly and other shortening services are risky since you cannot hover over shortened URLs to see where they go. They can easily be hiding a malicious website.
- Look at the email overall – were you expecting it? Do you recognize the sender?
- Does the email seem to focus on the link – if the email is simply a greeting and a link, there is a high probability it is malicious
- If it is a password change link or similar, did you request it? Call the trusted source directly to verify it is legitimate
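The hover-and-inspect habit can also be automated. A minimal sketch that flags shortened or mismatched domains (the shortener list is illustrative, not exhaustive):

```python
from urllib.parse import urlparse

# A few well-known shortener domains (illustrative, not exhaustive)
SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co", "ow.ly"}

def flag_suspicious(url, expected_domain=None):
    """Return a list of reasons a URL deserves extra scrutiny before clicking."""
    host = urlparse(url).netloc.lower().split(":")[0]  # strip any port
    reasons = []
    if host in SHORTENERS:
        reasons.append("uses a URL shortener, true destination hidden")
    if expected_domain and host != expected_domain and not host.endswith("." + expected_domain):
        reasons.append(f"domain {host!r} does not match expected {expected_domain!r}")
    return reasons

print(flag_suspicious("https://bit.ly/3xYz", expected_domain="example.com"))
```

Real email security products apply far richer checks (reputation feeds, sandboxed link detonation), but the domain comparison above is the same first step.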
Why are malicious links becoming more common?
Many people click without thinking – they receive a link and click it, no other considerations taken beforehand.
They bypass most prevention systems – people offsite checking email on their phone or laptop are likely not protected by URL filtering and other services within the network.
They are easy to disguise – since most people don’t hover over links and see where they lead, it is effective for the bad actors to simply rename the links.
How do I stop malicious links from infecting my organization?
Advanced email security – today's cutting-edge email security systems dynamically scan URLs and determine whether they are safe to open. This will dramatically reduce the chance of a successful attack through the clicking of a URL.
Security Awareness Training – training users is key, as they are the ones that click the links; they need to be put through real-world training as well as virtual classroom training so they can quickly recognize threats
Next-gen firewall with up-to-date subscriptions – URL filtering can block most bad links, as long as they are already known to be malicious
If you thought we’d seen the last of the Wannacry ransomware, think again. Recently, a new threat has been discovered that targets Linux users.
It should be noted up front that "SambaCry" is not a variant strain of the aforementioned ransomware, but rather a security flaw in Linux that mirrors the one Wannacry used to exploit Windows-based systems. The vulnerability, officially tracked as CVE-2017-7494, was dubbed SambaCry because of those similarities.
Normally, Linux users avoid the kinds of security issues that plague Windows-based machines, but this is a bit of a different case, and here’s why:
There’s a Linux service called Samba Server Service which provides SMB/CIFS capabilities in Linux and Unix-based systems. While it’s true that Linux can use any number of file sharing protocols, Samba is often used in environments featuring a mix of Linux and Windows PCs, because Windows PCs have a hard time dealing with Network File System Shares coming from machines running other OS’s.
When a Linux server is running Samba, some folders (called CIFS Shares) will appear as a network folder to Windows users.
The security flaw allowed a remote user to send executable code to the server hosting the share, including code which could encrypt a file system and hold it for ransom.
As you might expect, the Linux crowd treated this as a top priority and has already moved to patch the flaw.
The long and the short of it is simply that if you’re running a Linux server and using Samba, you’re probably vulnerable unless you’ve downloaded and applied the latest security patch. If you haven’t, you should do so immediately.
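Whether a given Samba build predates the fix can be checked with a simple version comparison; the fixed releases for CVE-2017-7494 were 4.4.14, 4.5.10, and 4.6.4 per the upstream advisory. A sketch, assuming you supply the version string yourself (for example, from the output of `smbd --version`):

```python
def parse(v):
    return tuple(int(x) for x in v.split("."))

# Fixed releases per the Samba security advisory for CVE-2017-7494
FIXED = [(4, 4, 14), (4, 5, 10), (4, 6, 4)]

def is_vulnerable(version):
    """True if this Samba version predates the fix on its release branch."""
    v = parse(version)
    if v < (3, 5, 0):           # flaw was introduced in Samba 3.5.0
        return False
    for fixed in FIXED:
        if v[:2] == fixed[:2]:  # same release branch: compare patch level
            return v < fixed
    return v < FIXED[-1]        # older branches never received the fix

print(is_vulnerable("4.6.3"))  # True
print(is_vulnerable("4.6.4"))  # False
```

A version check is only a first pass; applying the vendor patch, or disabling writable SMB shares in the meantime, is what actually closes the hole.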
While Linux users have been fortunate to have suffered relatively fewer critical security flaws, this is a painful reminder that as good as the OS is, it’s not bullet proof.
Traditional water heating equipment uses fuel as an energy source to heat a rod-like element in the water tank. The element then warms the water.
If you’ve ever reheated a hot beverage by placing it in the microwave, you’ll have an idea as to the alternatives. Microwave ovens heat liquids by moving molecules so that the molecules hit each other. This action creates heat — a bit like rubbing your hands on a cold day.
ISI’s Heatworks Model 1 water heater, a project looking for funding on Kickstarter at the moment, pitches itself somewhere in between these two existing technologies.
ISI’s product uses electronics to directly energize water molecules instead of using a heating element rod, the company says. In other words, the appliance is designed to use water’s natural resistance to heat itself.
Graphite electrodes — rather than a traditional convective heating element — are installed in an efficient, tankless water heater. Existing tankless water heaters are generally more efficient than water heaters with tanks because only the required water is heated; tanks heat more than is needed.
The acoustically quiet graphite electrodes in ISI’s tank produce two gallons of hot water per minute, and tanks can be installed in series.
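As a sanity check on what two gallons per minute of hot water implies, the required power follows from the heat equation P = m_dot * c * delta_T. A sketch with an assumed 40 °C temperature rise (the figures are illustrative, not ISI's published specs):

```python
GALLON_L = 3.785        # liters per US gallon
WATER_DENSITY = 1.0     # kg per liter (approx.)
SPECIFIC_HEAT = 4186.0  # J/(kg*K) for water

def required_power_kw(gpm, delta_t_c):
    """Power in kW needed to heat a water flow by delta_t_c, assuming 100% efficiency."""
    mass_flow = gpm * GALLON_L * WATER_DENSITY / 60.0  # kg/s
    return mass_flow * SPECIFIC_HEAT * delta_t_c / 1000.0

print(f"{required_power_kw(2, 40):.1f} kW")  # ~21 kW for a 40 degree C rise
```

That is a substantial electrical load for a household circuit, which is one practical constraint any tankless electric heater, this one included, has to work within.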
The tank measures 12 1/2 inches long by 6 inches in diameter and weighs about 16 lbs.
A WiFi module is planned that will allow for remote measurement and control of power and temperature.
Tagline: “Your next water heater.”
ISI Technology currently has more than 300 backers for its water heater project pledging more than US$80,000 of a $125,000 goal. The funding period ends Feb. 16, 2014.
A pledge of $225 gets you a Heatworks Model 1. Note that a $395 retail price is proposed for the final product. Estimated delivery is in May 2014.
Theoretically, this equipment should provide efficiencies — not least because minerals in water bind themselves to the heating rods in classic heaters, thus reducing rod life. If there’s no rod, the heater should last longer.
The arguments in favor of this product are compelling, and if you’ve ever had to call a plumber to replace a failing domestic water heater for no apparent reason — the rods have in fact gotten gunked-up with plating deposits, or have burnt out — you’ll know what we mean.
This water heater promises to introduce previously unknown longevity into this arena. Plus, the traditional water heater hasn’t changed much, in technological terms, since the days of pot-on-fire.
What we haven’t seen before is a replacement for the mechanical flow switch — which ISI promises to deliver through microprocessors — or new, state-of-the-art electrodes that should deliver instant, super-accurate temperature adjustments. Then, too, there’s smartphone interactivity, which this product also promises.
The creator needs to be a bit clearer about exactly how its product “directly energizes and heats the water molecule,” as opposed to simply heating water ultra-super-efficiently through use of graphite and microprocessors with split-second accuracy. Or is this just marketing gobbledygook?
No annual maintenance and a 3-year warranty is a lofty promise.
While we don’t have any reason to doubt ISI’s claims, prototyping and testing, we’d like to see some more concrete numbers before ripping out our existing water heating kit and replacing it with this gear — seductive though it is.
Graphite electrode technology is commonly used in arc furnace steel manufacturing. It's an efficient, responsive technology that can provide high levels of heat along with good electrical conductivity. We look forward to hearing how this rapidly financing Kickstarter project plays out in real-world use.
Encryption is the process of encoding messages or information in such a way that only authorized parties can read it. It can be used to transform data that you send across the internet into a format which is only readable when in possession of a decryption key, which provides the code to decipher the encryption.
Think of sending a letter to someone in a secret language that needs a special dictionary to translate it.
The secret language = the encryption
The dictionary = the decryption key
Only when someone has both can they then read that message—providing of course that the secret language is sophisticated enough to not be broken without the key. Make sense?
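The secret-language analogy can be made concrete with a toy substitution cipher. It is far weaker than real encryption such as AES, but it shows the two roles clearly: the scrambled output is the "secret language" and the letter mapping is the "dictionary":

```python
import random

def make_key(seed=42):
    """The 'dictionary': a random mapping from each letter to another letter."""
    letters = list("abcdefghijklmnopqrstuvwxyz")
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encrypt(text, key):
    # Characters outside the alphabet (spaces, digits) pass through unchanged
    return "".join(key.get(c, c) for c in text.lower())

def decrypt(text, key):
    inverse = {v: k for k, v in key.items()}
    return "".join(inverse.get(c, c) for c in text)

key = make_key()
secret = encrypt("meet me at noon", key)
print(secret, "->", decrypt(secret, key))
```

A substitution cipher like this falls to simple frequency analysis; modern ciphers such as AES avoid that by mixing every bit of the input across many rounds.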
Dashlane uses AES-256 encryption. Short for Advanced Encryption Standard, it was the first publicly accessible and open cipher approved by the National Security Agency (NSA) to protect information at a “Top Secret” level. It is now widely accepted as the strongest encryption there is—and used by governments, militaries, banks, and other organizations across the world to protect sensitive data.
Remember we mentioned that the “secret language” needs to be complicated so it is tough to crack? Well, AES is just that. It’s based on a system of encoding called the Rijndael cipher, developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen. It divides your data into blocks of 128 bits (the smallest unit of computer data) each, and then uses the encryption key to scramble them beyond all recognition using 14 different rounds of encryption.
You need the specific key to decrypt the data. The number of possible keys this system allows is 2 to the 256th power—that’s a number that is 78 digits long. Imagine the computing power necessary to reveal the correct key and decrypt the data were it to be intercepted.
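Both claims, the 78-digit key count and the hopelessness of brute force, are easy to verify with a few lines of integer arithmetic; the guessing rate below is an assumed, deliberately generous figure:

```python
keys = 2 ** 256
print(len(str(keys)))  # 78 digits

# Even at an (absurdly generous) 10^18 guesses per second:
guesses_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365
years = keys // (guesses_per_second * seconds_per_year)
print(f"~10^{len(str(years)) - 1} years to exhaust the key space")
```

At that rate, exhausting the key space would take on the order of 10^51 years, which is why attacks on AES target implementations and key handling rather than the cipher itself.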
Has AES ever been cracked?
No. A Microsoft research paper published in 2011 suggested that it was theoretically possible to recover an AES key (the “dictionary” that translates the “secret language”) using a technique called a biclique attack. But even breaking a 128-bit key (far less complex than Dashlane’s 256-bit system) would take billions of years with current computing power—and require storing about 38 trillion terabytes of data, which is more than all the data on all the computers on the planet.
Dashlane encrypts all your data locally (on your device) before sending it to our servers. Your key is never transmitted on the internet, so in the unlikely event that your data is somehow intercepted, the encryption means that no one will be able to decipher it.
Visit our online safety hub for the latest breach report and a complete guide to staying secure on the internet.
What is Two Factor Authentication?
The most common form of multi-factor authentication is two-factor authentication (2FA). We will get into the benefits of two-factor authentication below, but first, here is what it is. As its name suggests, two-factor authentication combines two different methods to confirm a user’s identity.
In order of security levels, 2FA will ask users for validation by asking for proof of:
- Something they know – a PIN, an address, or answers to secret questions
- Something they have – a card, an email account, a key fob, a phone, an authenticator app, or a USB security key
- Something they are – a fingerprint, iris scan, or voice
A common example: to use a credit card online or over the phone, you may have to provide the card (#2, something they have) and enter a PIN (debit) or a billing zip code (credit) (#1, something they know). Adding a code sent to your email or mobile phone (#2, something they have) to authorize the purchase adds another level of security.
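The six-digit codes shown by authenticator apps — a common "something they have" factor — follow the open HOTP and TOTP standards (RFC 4226 and RFC 6238). A minimal, standard-library-only sketch, shown for illustration rather than production use:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30s window."""
    return hotp(secret_b32, int(time.time()) // period)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
secret = base64.b32encode(b"12345678901234567890").decode()
print(hotp(secret, 0))  # -> 755224
```

Because the code depends on a shared secret and the current time, an attacker who steals only your password still cannot produce a valid code.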
Good passwords aren’t good enough
Most websites, subscriptions, and/or apps only require a username and a password. While this is extremely convenient, it creates a data security risk. In a prior article, we covered the importance of good passwords in depth and why small business owners MUST know their logins, but good passwords and good data security practices aren’t always enough.
- You may give your password to someone, and they share it.
- You may reuse your passwords, so when one site is hacked, criminals try using those same credentials on bank account and investment sites.
Once someone is in your account, you’ve lost control. It’s like letting a burglar in the house. You can lock the doors after they leave, but it’s too late – they already have all your sensitive data.
Benefits of two factor authentication security levels
The more data security factors you have in place, the more protection you have. But the quality of the factors themselves is also important. Criminals can easily find out your mother’s maiden name. Getting your fingerprint is much harder.
The most common two-factor authentication step right now is a code sent to your mobile phone via text or to your email. You’ll also receive an alert if someone TRIES to get into your account. While this process takes another 15 seconds to validate your credentials, it should be your default setting for:
- Anywhere you have superuser or admin credentials
- Bank, financial and investment sites
- Sites that store sensitive client information
- Email logins. Once criminals get into your inbox, they often find a treasure trove
The benefits of two-factor authentication far outweigh the minor inconvenience of using a second device.
Managing two factor authentication
Your IT Policies and Procedures should require employees to use two-factor authentication for all websites and/or apps that contain sensitive or financial data. If you manage a website for your clients, a software development company (like us) can help you offer this added layer of security for your users.
Another idea is to send links to files instead of attaching files directly to emails. That way the addressee must be credentialed on the system before they can view the file. This will also help your computer files stay organized!
What is two factor authentication?
Two-factor authentication is an extra layer of security to prevent someone from logging in, even if they have the password. You will have to verify your identity with a 6 digit code sent to your phone every time you log in. This makes sure that only YOU can log in to your account.
How do you set up 2FA?
Most sites include 2 factor authentication somewhere in their settings. It’s going to vary for each site, but check under settings and security to enable 2FA.
Can two factor authentication be hacked?
Hackers usually don’t have access to both your login information and your phone messages, so two-factor authentication is extremely effective at preventing account takeovers.
We create data almost every hour of every day. Be it through mobile use, social networks or ecommerce sites, every digital move we make generates new data. And brands are greedy for it. This acquisition and use of consumer data has been thrust under the spotlight recently, cast as a nefarious practice rooted solidly in the dark side of internet use.
Our very laws have changed in response to problematic data use. As GDPR and the NIS Directive settle into effect, we are gaining more transparency and more control over the way our personal data is used. Now, more than ever, we have the power to completely restrict the collection, storage and use of our data.
But does the fact that we can stop sharing our personal data mean that we should? Howard Williams, marketing director at Parker Software, investigates the light and dark sides of sharing our data.
Transparency and control
The new legalities around data hope to reduce, if not abolish, the secrecy behind how our data is used. We must now explicitly give our consent about how, when, why and what data about us is stored and used. We can stop sharing our data with the businesses and websites we interact with.
The question is how far to exercise this new right. Just as software, computers, and the internet are not, in and of themselves, exclusively part of the digital dark side, your shared data isn’t either. Data is a tool like any other, and it can be used beneficially. Legislation is aiming to ensure that our data is only used ethically – not eliminate its use altogether. But the efforts might be too little, too late.
Although we now have greater data transparency, that doesn’t take away the distrust that’s already started to fester. When we don’t know how, when, or why our data is being collected and used, it’s easy to feel as though our privacy has been violated.
In fact, only 1 in 5 consumers have confidence in the way that their data is being handled. Given the choice, it only makes sense that we want to share less.
The dark side of sharing our data
The increase in cybercrime, the continuous drip of data misuse scandals, and the companies behind schemes to peddle our private data have hardly warmed us to the idea of willingly sharing it. There’s no doubt that sharing data has a dark side.
Data is a powerful tool that’s easy to misuse. Our data lets brands push invasive adverts on us. It enables companies to discriminate against us. It can expose our weaknesses and vulnerabilities for businesses to shamelessly target with offers and marketing strategies.
As an example, you need only look at the Facebook Cambridge Analytica scandal. This saw the misuse of personal data allegedly influencing the 2016 US election and ‘cheating’ the Brexit vote. Sharing our data can have heavy consequences when it ends up in the wrong hands.
Plus, there is a personal risk that comes with being free with our data. As cybercrime rates soar, sharing data can potentially lead to devastating repercussions, such as identity theft and financial loss. Even with new safeguards and security procedures in place, if we can decrease the risk of falling victim to cybercriminals, why wouldn’t we?
The light side of sharing our data
Companies need to have our trust before we share our data. The new data regulations and legislations should serve as a way to support this trust building – not as a reason to withdraw as much of our data as possible. Our data, when used right, can do a lot of good for us, and the rest of the public.
Firstly, it benefits research. The information made available by sharing our data can be invaluable to researchers and scientists. Our data is a valuable resource – it can give people the power to help us and others. Patient data has helped advancements in cancer research, and when Facebook made some of the data it collected available to researchers, it led to new realms of understanding.
Sharing our data has helped identify depression in new mothers, and inspired the creation of AI that can recognise mental health signifiers for depression and anxiety online. In short, sharing our data can help generate positive outcomes for the wider public.

Secondly, and perhaps most obviously, sharing our data enables more effective service. With so much of our spending and time moving online, customer service has needed to follow suit. Now businesses can offer products and services we like, attuned to past purchases and our online activity. They can help us find the best deals to suit our budgets, and they can recognise when we need support, so we don’t have to reach out first. Our data can help businesses give us a great, intuitive experience that saves us time and effort.
Striking a balance
Thanks to new legislation, we can identify when our data is being used for ethically ambiguous reasons. So, is there enough harm in sharing our data to outweigh the benefits? The control we now have over our data not only means we can stop sharing it far more easily, but, paradoxically, that there could be less need to.
When consumers have the right to withdraw their data at any time – and demand that it be deleted – there’s less risk involved with sharing. Companies want to be able to use your data to improve their services and keep your custom; they aren’t going to deliberately rock the boat when you can see what they’re doing.
Plus, with the increased transparency that GDPR and the NIS Directive require of businesses regarding personal data, consumers should have a better understanding of when and where they are comfortable sharing information. Transparency means that consumers can know when their data is safe, what their privacy risks are, and how to tailor their data sharing accordingly. The dark side of data sharing is becoming less ominous.
It’s your data
You have more control than ever before, and thanks to the new data legislation in effect, there is less need to be fearful of sharing your data. The consequences of data misuse are huge for companies, and not remotely in their best interests.
From a more outward looking perspective, the benefits of safely sharing the data you’re comfortable sharing can spread to the wider public, as well as your own service experiences. Ultimately, the decision to continue, or stop, sharing our data comes down to every individual. It’s your data, it’s your decision. With the light and dark side of data sharing highlighted here, maybe your decision will be a little bit easier.
Howard Williams, Marketing Director at Parker Software
Image Credit: The Digital Artist / Pixabay
When something new comes out, many people tend to take a step back for a while before trying to understand it. Some refuse to accept it altogether, choosing instead to focus on what they have gotten used to. This is why people need to be challenged to open their minds and allow their thinking process to change.
We live in the digital age, which falls in the “new” concept category. Not a lot of people understand or are comfortable with the word “digital”. For them, it is like a subject matter that is off limits to anyone who has no IT degree. This kind of attitude is what makes digital disruption a major issue for a lot of organizations.
What is Digital Disruption?
When Uber started becoming the preferred choice of people looking for a cab to take them to their destination, it disrupted traditional taxi businesses. Here was a company that allowed commuters to simply tap a few buttons on their smartphones in order to take a ride home. Uber basically changed the concept of the taxi industry. That is what digital disruption is all about.
Simply put, digital disruption is a threat to certain goals, business or otherwise, enabled by digital technology like a smartphone or a mobile application. It is often confused with disruptive technology, but the two are not the same. Digital disruption affects only a particular sector or area of business. Disruptive technology, on the other hand, is revolutionary tech that affects an entire nation and causes society to change the way it thinks, acts or behaves.
The best example of disruptive technology is the personal computer, which caused a lot of people to change the way they worked, studied and even the way they played. Disruptive technology is not something that you can experience today and then do away with tomorrow.
Digital disruption, however, can be countered by instituting significant changes in the business process or in the way things are managed. This is why many of today’s enterprises are embracing digital transformation.
There are several factors involved when a company decides to switch to digitization. Foremost among these is the economic aspect. Because of the many financial tragedies that the world has experienced, companies are more careful now in their investments. Every single financial step that they make is evaluated several times.
Factor number two is the way people have diverted their attention to one-of-a-kind experiences while taking for granted products that are mass-produced.
The third factor, of course, is technology, which is constantly evolving and improving, not just in one industry, but also in the entire business world.
These are all essential factors that business leaders need to address in the best way possible. And these are the factors that push them to cater to digital transformation. However, there are major challenges that stop the advantages of digitization from taking shape.
In research on digital transformation facilitated by Arthur D. Little, it was revealed that only around 17% of enterprises actually have a set of strategies for digital transformation. Additionally, 50% of the companies included in the study have only very limited digital knowledge.
Some organizations have already gone digital and have gotten so good in such technology. However, there are quite a few that are not yet digitally mature and, therefore, hesitate to make something good out of their digital initiatives.
These may be simple things to the ordinary person, but for someone who understands the value of digital transformation, these are challenges that need to be addressed in order to push businesses towards the digital path they should take.
The Other Challenges
Here are other challenges that might make the digitization process quite complicated:
- Challenge number one is the ability to understand the customer. As technology changes, so do the way people think, act and do pretty much everything. Nowadays, instead of brands expecting customers to share their needs and wants, it is the customers who make the demand, as they are confident that brands are better able to understand them now. Therefore, companies need to have people who are willing to research and learn significant information about customers. Moreover, digital programs should be provided an input – focusing on what will satisfy the customers.
- Challenge number two is the concept of a 24/7 business. Gone are the days when enterprises would open only on weekdays. Today, majority of organizations operate 24 hours a day, seven days a week. In other words, business is always on. In relation to this, once a brand is digitized, customers will expect more from it. They will be expecting quality products and services anytime and anywhere they want them.
- Challenge number three involves the proper adoption and implementation of the digital process. Before anything else, companies need to draw up strategies and create plans. This will help pinpoint the most essential purposes for digital transformation. A good example of a digital management model is one used to study customer preferences so that a resolution will be reached, thereby offering them satisfaction like no other.
- Challenge number four is all about the way companies use data. There has to be a certain process for IT and management leaders to talk about data and then to carefully analyze it. It is important, however, to first identify which ones are of greater value, especially to customers.
- The final challenge is one of the biggest barriers to digital transformation: limited knowledge of the whole digitization concept. Sure, businesses know that digital is better and can deliver faster results. What they do not understand is how valuable it is to companies that want to leverage success by ensuring efficiency, productivity, customer satisfaction and steady profit.
Many company leaders – and employees – also need to be given a 101 on digital transformation for them to be fully aware of its value.
Other important challenges that should be considered are technology, company culture and leadership initiative. Companies need to adapt to the regular developments in technology. Company leaders, meanwhile, need to follow and embrace the best business practices to encourage effective and innovative leadership. Finally, leaders should have the initiative and the passion to lead, because only then will an enterprise be fully able to embrace digital transformation.
In order to effect digital transformation, organizations need only to have the right digital tools. They should also be able to apply techniques into practice. Because, in reality, what really challenges companies from completely embracing digital transformation is not technology or anything of its class. Rather, it is the willingness and capacity of leaders to embrace change.
If you want to better understand digital transformation and all the factors that come with it, check out FourCornerstone and ask for a FREE consultation now!
Photo courtesy of marycat879.
One of the biggest threats that humanity currently faces is that of climate change. With the number of climate-change-related issues resulting in more and more deaths each year, the challenge before governments and energy solution providers is to bring out a sustainable mode of renewable energy.
With key industry players constantly looking for means to cater to the burgeoning demand for clean, cheap and reliable energy, emerging technologies like artificial intelligence and machine learning have become the solution to this industry woe.
In this article, we look at how renewable energy providers are currently using AI and ML to improve their functioning.
One of the main challenges which have been attributed to renewable energy sources like wind and solar energy has been the intermittence in connection leading to unsteady power connections. Be it lack of sunlight due to a cloudy day or drop in the wind speed, this could substantially affect energy generation.
To overcome these challenges, companies are tapping to AI to develop models and software to predict change in weather patterns.
Google’s DeepMind recently announced that it is working in this field. According to the company, training its neural network on widely available weather forecasts, combined with turbine data, boosted the value of its wind energy by 20 per cent.
By doing so, the system could predict wind power output 36 hours ahead of the actual generation. Further, the team trained the system to make optimal hourly delivery commitments to the power grid a day in advance, based on those predictions.
“Although we continue to refine our algorithm, our use of machine learning across our wind farms has produced positive results. To date, machine learning has boosted the value of our wind energy by roughly 20 per cent, compared to the baseline scenario of no time-based commitments to the grid,” the researchers said in a blog post.
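DeepMind has not published this system’s code, so as a rough illustration of the idea — mapping a weather forecast to expected output — here is a toy model with invented numbers. It fits power ≈ a·speed³ + b by least squares, since turbine output scales roughly with the cube of wind speed below rated capacity:

```python
# Hypothetical hourly training data: forecast wind speed (m/s) vs. the
# power (MW) the farm actually produced in that hour. Real systems train
# neural networks on many weather features; this is a one-feature sketch.
speeds = [3.0, 5.0, 7.0, 9.0, 11.0]
power = [0.4, 1.9, 5.2, 11.0, 20.1]

xs = [s ** 3 for s in speeds]                 # cubic feature
mx = sum(xs) / len(xs)
my = sum(power) / len(power)
# Closed-form simple linear regression on the cubic feature
a = sum((x - mx) * (y - my) for x, y in zip(xs, power)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def predict_mw(forecast_speed: float) -> float:
    """Day-ahead output estimate for a forecast wind speed."""
    return a * forecast_speed ** 3 + b

print(round(predict_mw(8.0), 1))  # estimate used for a grid commitment
```

A real forecasting pipeline would add many more inputs (pressure, temperature, turbine state) and a proper validation split, but the shape of the problem — regress future output on forecast weather — is the same.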
Monitoring The Health Of The AI Systems
The future of renewable energy will be shaped by autonomous and robotics technologies. These emerging technologies are increasingly being used to automate operations and to boost the efficiencies of devices like solar panels and wind turbines.
DNV GL – Energy, one of the leading players in the field, is now looking at leveraging AI to improve its product offerings. According to the company, autonomous drones with real-time artificial intelligence can be used to carry out effective and efficient inspections of wind turbines and solar panels, while robotics can play a vital role in remote inspection and prove even more beneficial in maintenance and troubleshooting.[…]
CAPTCHA is an acronym for 'Completely Automated Public Turing test to tell Computers and Humans Apart'. The technology was developed more than two decades ago for interactive websites to fight bot-driven attacks. Over the years, it has steadily lost ground to the opponent, as it failed to keep pace with the evolution in bot technology. Instead of obstructing bots, it adds friction and adversely impacts good user throughput.
Although CAPTCHAs went through several iterations – such as reCAPTCHA and its multiple versions – they fall well short of the level of protection that today’s digital businesses need against an adversary that is technically much superior. Today, bots have reached advancement levels so high they can mimic human behavior with fairly high accuracy. It has become easy for fraudsters to bypass reCAPTCHA Enterprise – and its multiple versions – at scale using bots or low-cost human click farms.
Need for an Effective reCAPTCHA Alternative
Digital businesses can no longer rely on this obsolete technology for bot detection because in addition to financial losses they risk upending user experience, damage to brand equity, and customer churn. They need alternate solutions that are developed by deep investments in technology and can adapt to the advancing capabilities of bots and their attack tactics for long-term protection.
Here we list the top five considerations when looking for reCAPTCHA alternatives to get the best protection without disrupting user experience.
- Data-backed decisioning: AI-driven decisioning not only helps eliminate guesswork but also adapts to the evolving attacks, thereby determining the best response strategy for an attack signature. Since machine learning-based models can dynamically adapt to the changing fraud tactics and decide the best response strategy based on the risk level, it eliminates the need for manual adjustments of risk scores. Furthermore, real-time insights and data exchange can help power intelligent and transparent decisions, with the added benefit of flexibility of controlling the enforcement pressure.
- User-centric protection: For digital businesses today, user experience is front and center and there is no room for a trade-off between fraud prevention and user-centricity. Therefore, a no-block approach where good users are never blocked is the way forward. This will have an immediate impact on false-positive rates, without compromising security.
- Compliant and accessible: In addition to complying with the GDPR that mandates businesses to protect the privacy of consumer data, digital businesses must also comply with the laws of the land where they operate. Solutions that are accessible to a wide range of demographics across locations of operation, while remaining compliant to multiple regulations, are the ones that digital businesses should opt for.
- Ease of use: Multiple tech stacks that operate in silos only add to technical debt and information overload. This can cause delay in decisioning, providing fraudsters with the much-needed time to attack and escape. Opt for a solution that can fit in with any existing infrastructure. Today, data APIs allow seamless integration with on-premises or third-party solutions that can be set up in a matter of hours.
- Customer support: A security vendor that backs up the solution with 24/7 available SOC experts can empower fraud teams by offloading their burden and working as a true partner.
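To make the “data-backed decisioning” and “no-block” ideas above concrete, here is a toy enforcement policy. The signal names, weights, and thresholds are invented for illustration and do not reflect any vendor’s actual API or scoring model:

```python
def enforcement_action(signals: dict) -> str:
    """Map risk signals for one request to a graduated response.

    Instead of a hard block, higher risk earns more friction, so a
    mistaken score never locks out a legitimate user outright.
    """
    score = 0
    if signals.get("headless_browser"):          # automation fingerprint
        score += 60
    if signals.get("ip_reputation") == "bad":    # known abusive network
        score += 30
    if signals.get("velocity_per_min", 0) > 20:  # rapid-fire attempts
        score += 20

    if score >= 70:
        return "interactive_challenge"  # targeted friction, not a block
    if score >= 30:
        return "passive_check"          # invisible verification
    return "allow"                      # good users sail through

print(enforcement_action({"headless_browser": True, "ip_reputation": "bad"}))
# -> interactive_challenge
```

In production, the static weights would be replaced by a machine-learning model retrained as attack patterns shift, which is the adaptive decisioning described above.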
Fighting bots does not always need to be a losing battle. As a true partner, Arkose Labs enables digital businesses to face – and defeat – complex bots, as well as human-driven attacks, thereby protecting them long-term.
Bankrupt the Business Model of Fraud with Targeted Friction
Our zero-tolerance to fraud approach enables businesses to deter fraud – that goes beyond just remediation. The use of targeted friction causes bots and scripts to fail instantly, regardless of how advanced their capabilities may be. Real-time risk assessment of every incoming user informs the enforcement mechanism to present appropriate 3D puzzles, which engages malicious human actors in a manner that wastes their time, effort, and resources. This erodes the returns from an attack to such an extent that the attack loses its economic worth and the attacker is forced to abandon it.
In addition to meeting the five aforementioned crucial considerations of an efficient reCAPTCHA alternative, Arkose Labs provides its partners with the benefit of managed services and a guaranteed mitigation SLA. Arkose Labs is the only security vendor that backs its solution up with an industry-first $1M warranty.
Learn more about complex bot-attacks across industries and how to detect and remediate them here.
Every nation faces threats, whether cyber or traditional. A core responsibility of governments is identifying potential risks, making prompt and informed decisions, and enacting the right response, preferably before there’s any damage to citizens or national assets.
In recent years, our increasingly digitized lives have blurred the boundaries between the physical and the virtual. To keep nations safe, local and national intelligence teams must collect, sort, analyze, and act on incredibly large amounts of structured and unstructured data. But there’s an ongoing shortage of skilled personnel such as data analysts, especially in the Western world. Another challenge is the lack of automated access to the deep and dark web – causing analysts to try and get access through manually created dummy accounts, requiring significant time and HUMINT resources.
In this highly challenging setting, OSINT has become an invaluable tool for successful national security policies, supporting and helping solve different public sector use cases, including:
Counter-terrorism efforts – terrorist groups come in all shapes and sizes, from both international and domestic origins. Whether they’re fueled by a fundamentalist religious ideology, conspiracy theories, or extreme politics, these groups pose serious security threats. OSINT tools can help identify propaganda, member recruitment, online and offline financing routes, and even threat planning. This data not only helps prevent future terrorist events but can also shed light on the modus operandi of such groups.
Dealing with disinformation – unintentional misinformation and deliberately engineered disinformation can both have an impact on national security. As users are bombarded with so many posts, links, news, and opinions, it can become highly difficult to separate what’s real from what’s mis- or disinforming. Monitoring the online world with OSINT tools can help mitigate the spread of false propaganda and protect democracy.
Cybersecurity efforts – it can be a lone-wolf threat actor, an organized crime group, or even a nation-state that’s behind a cyberattack. Regardless of the source, such attacks can be devastating for the country, politically, financially, or both. In the technology race between these rogue groups and governments, the latter must be more proactive and faster than ever before in order to win. They must also be able to reach all corners of the surface, alt-tech, deep, and dark web platforms to be able to identify early indicators. OSINT can help achieve all these feats.
Transportation networks security – from airports to seaports to roads, citizens commute and travel every day from many different Point A’s to many different Point B’s. If a transportation network is compromised, serious damage can occur and lead to loss of property, or worse, human lives. OSINT tools can deliver early indicators as to the intended location of an attack, help identify the gaps in airport and seaport security systems, and provide increased digital and physical security.
Addressing international crises – to help alleviate the impact of natural disasters, pandemics, and other health crises, and manmade debacles, intelligence, security, and other teams from different countries may need to extensively collaborate. OSINT tools can help improve this collaboration by providing ground-truth information about the what’s and how’s of the unfolding situation, leading to a quicker, better organized, more informed response.
Cobwebs specialized OSINT solution is an AI-powered, automated, user-friendly platform that continually searches surface, deep and dark web to generate mission-critical information and actionable insights.
What is Social Engineering?
Social engineering is the fraudulent practice of tricking social media users into revealing sensitive personal data or sending money to an unintended recipient.
Social engineering attacks use emotion and familiarity to trick users into doing something they otherwise wouldn’t do. For example, an attacker may call you pretending to be your boss and ask you to do something urgent. Because people tend to trust their bosses more than other people, you’ll probably do what they say without thinking too much about it.
Social Engineering Techniques
When malware creators use social engineering techniques, they can lure an unwary user into launching an infected file or opening a link to an infected website. Many email worms and other common types of malware are spread via social engineering schemes.
Social engineering is the act of exploiting human weaknesses to gain access to personal information and protected systems. It is often easier to trick a person into revealing a password than to crack it technically, so cybercriminals use social engineering to deceive people into handing over personal information or money.
Cybercriminals like to take advantage of the fact that humans are the weak links in the security chain. We can be fooled by people who are not what they seem. We should always check credentials before letting someone into our homes or businesses - and of course, this means being very aware and careful regarding which emails, texts, and other forms of communication we open and respond to.
How Does Social Engineering Work?
Social engineers use a variety of techniques to perform attacks. First, they research and perform reconnaissance on their targets. For example, if the target is an enterprise organization (such as a financial institution), they might gather intelligence about the organizational structure, internal practices, common lingo used by employees, and potential business partners. Next, they try to gain an initial foothold, often through the people who control access to the system, such as a security officer or receptionist. From there, they learn how the company operates and exploit any weaknesses they find.
How to Spot Social Engineering Attacks
Social engineering attacks often come from people trying to get at your personal information, so stay aware of what you do online and offline, and don't give out personal information without considering the consequences. A suspicious email address may be an attempt by hackers to get you to open a malicious attachment or download malware, so be careful with attachments that appear to come from friends or coworkers; when in doubt, ask the sender whether they actually sent the email. Human error is the weak link in security.
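One practical check you can automate is comparing the display name of a sender with the actual address behind it, since phishers often pair a trusted brand name with an unrelated domain. The sketch below is illustrative only; the brand-to-domain mapping is a made-up example, not a real allowlist.

```python
from email.utils import parseaddr

# Made-up example mapping of brand names to their legitimate domains.
BRAND_DOMAINS = {"paypal": "paypal.com", "microsoft": "microsoft.com"}

def mismatched_sender(from_header: str) -> bool:
    """Flag a From: header whose display name claims a known brand
    but whose address sits on an unrelated domain."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    for brand, real_domain in BRAND_DOMAINS.items():
        if brand in name.lower() and not domain.endswith(real_domain):
            return True  # name claims a brand the address doesn't belong to
    return False

print(mismatched_sender("PayPal Support <help@secure-pay.xyz>"))  # → True
print(mismatched_sender("PayPal <service@paypal.com>"))           # → False
```

A real mail filter would also verify SPF/DKIM results; this only catches the crudest display-name spoofing.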
Types of Social Engineering Attacks and Scams
Phishing: An attacker sends emails pretending to be from legitimate companies or institutions. Users respond with sensitive data, allowing the attacker to steal private information; the attacker may even pose as a charity. Spelling and grammar mistakes, suspicious attachments, poor layout, and inconsistent formatting are all red flags that indicate potential phishing.
Vishing: A social engineering attack that leverages voice communication. VoIP technology makes it easy to spoof caller ID, exploiting people's misplaced trust in the safety of phone services, and to broadcast audio content to unsuspecting victims.
Smishing: A form of social engineering that exploits SMS text messages. Text messages can contain links to websites, email addresses, or phone numbers that, when clicked, may automatically open a browser, email client, or other application. Users may be tricked into clicking these links and falling victim to malicious activity.
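Because smishing links often hide behind URL shorteners, a simple filter can extract the links in a message and flag any whose host is a known shortener. This is an illustrative sketch; the shortener list is just an example and would need to be maintained in practice.

```python
import re
from urllib.parse import urlparse

# Example list of common URL-shortener domains; extend as needed.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

URL_RE = re.compile(r"https?://\S+")

def suspicious_links(sms_text: str) -> list[str]:
    """Return links in an SMS whose destination is hidden by a shortener."""
    flagged = []
    for url in URL_RE.findall(sms_text):
        host = urlparse(url).hostname or ""
        if host.lower() in SHORTENERS:
            flagged.append(url)
    return flagged

msg = "Your package is held! Confirm at https://bit.ly/3xAmpl3 now."
print(suspicious_links(msg))  # → ['https://bit.ly/3xAmpl3']
```

A shortened URL is not malicious by itself, but it deserves expansion and inspection before anyone taps it.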
Common Phishing Attack Examples
There are many giveaways that an email is a phishing attempt.
Suspicious sender's address: The sender's address may imitate a legitimate business to fool someone into thinking it is real. Cybercriminals often use an email address that closely resembles one from a reputable, popular company by altering or omitting a few characters. Even if the message carries the logo of a legitimate business, check the return address for misspellings.
Generic greetings and signatures: Both a generic greeting such as “Dear Valued Customer” or “Sir/Ma'am” and a lack of contact information in the signature block are strong indicators of a phishing email. We have all received humorous emails, complete with bad grammar, from distant parts of the world indicating that you have been left a huge amount of money from a member of nobility - only requiring a small payment on your behalf to "unlock" and release the funds to you. A trusted organization will normally address you by the name you provide for transactions on that particular website and provide their contact information.
Spoofed hyperlinks and websites: Hover your cursor over any links in the body of the email and you may discover that the link text does not match the actual destination. Malicious websites may look identical to legitimate ones, but the URL will use a variation in spelling or a different domain (e.g., .com vs. .net), which is easy to overlook. Cybercriminals may also use a URL-shortening service (such as Bit.ly) to hide the true destination of a malicious link.
Spelling and layout: Phishing emails often contain poor grammar and sentence structure, misspellings, and inconsistent formatting. Reputable institutions have dedicated professionals who produce, verify, and proofread customer correspondence before it is sent.
Suspicious attachments: An unsolicited email requesting a user download and open an attachment is a common delivery technique for sending malware. Cybercriminals often use a false sense of urgency ("You Have Been Selected!" "Act Now to Save 50%") or importance ("Urgent Response Required") to help persuade a user to download or open an attachment without giving it a good examination first.
Educate your employees on how to avoid social engineering scams
Since humans are the target for social engineering scams, employees need to be educated on how to defend themselves from these attacks. The best form of prevention against social engineering attacks is employee training. Teaching your employees how to recognize the previously listed social engineering tactics and avoid them is of the utmost importance.
Even trained people remain highly susceptible to manipulative tactics, so using trusted antivirus software to flag suspicious messages and websites is vital as well.
Don't open any emails promising you prizes or notifications of winning.
Scrutinize any email attachment before opening.
Don't give out personal or business information over the phone unless you placed the call yourself using a valid, previously verified company phone number.
Use Multi-Factor Authentication (MFA)
Be careful about downloading apps from unknown sources, and treat unsolicited (spam) emails as potentially dangerous.
Contact IT if you're unsure about anything.
Intel 471's range of intelligence products can help security teams defend against threats such as social engineering and mitigate risks from the underground.
Intel 471’s Adversary Intelligence provides security teams with visibility into the cybercrime underground, including insight into actor tactics, techniques, and procedures (TTPs), motivations, and operations.
Users also can monitor for compromised credentials proactively via Intel 471's Credential Intelligence service, track weaponized malware via our Malware Intelligence and determine patch prioritization of vulnerabilities via our Vulnerability Dashboard.
Several vulnerabilities were identified in the Apache Guacamole remote access system. Many companies use Apache Guacamole to let administrators and employees access Windows and Linux devices remotely, and the system became well known during the COVID-19 crisis because it allowed people to connect to their company's systems and work from home. Apache Guacamole is integrated into various network access and security solutions, such as Fortress, Quali, and Fortigate, and it is a notable tool on the market with more than 10 million Docker downloads.
As a clientless service, Apache Guacamole doesn't require remote employees to install any application on their devices; they access their company machine through a web browser. The system administrator installs the software on a server, and Guacamole acts as a bridge that relays communications between the browser and the remote machine, using SSH or RDP depending on the system configuration.
Check Point Research examined Apache Guacamole and discovered several reverse RDP vulnerabilities in version 1.1.0 and earlier. A similar vulnerability was also found in FreeRDP, the open-source RDP implementation Guacamole relies on. Attackers can exploit the vulnerabilities remotely to achieve code execution, allowing them to hijack servers and obtain sensitive data by eavesdropping on remote sessions. The researchers noted that, with all employees working remotely, exploiting these vulnerabilities could hand an attacker complete control of the entire organizational network.
Check Point Research described two ways to exploit the vulnerabilities. An attacker who has already compromised a desktop PC inside the network could attack the Guacamole gateway the moment a remote employee logs in, gaining control of the gateway and the remote sessions it handles. A malicious insider could likewise exploit the vulnerabilities to gain access to other employees' computers on the network.
The vulnerabilities could allow Heartbleed-style information disclosure, giving the attacker read and write access to the vulnerable server. Check Point Research chained the vulnerabilities, escalated privileges to administrator, and achieved remote code execution. The researchers reported the chained vulnerabilities, CVE-2020-9497 and CVE-2020-9498, to the Apache Software Foundation, and patches were made available on June 28, 2020.
The researchers additionally identified the vulnerability CVE-2018-8786 in FreeRDP, which could be exploited to take control of the gateway. Guacamole builds that bundle a FreeRDP release from before January 2020 (prior to version 2.0.0-rc4) are affected.
All organizations running Apache Guacamole should make sure the latest version is installed on their servers.
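As a quick illustrative check, administrators can compare an installed Guacamole version against the first release that shipped the June 2020 fixes. The threshold below assumes version 1.2.0 was the first patched release, consistent with the patch date given above; verify against the official Apache Guacamole release notes for your deployment.

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn '1.1.0' into (1, 1, 0) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

# Assumption: 1.2.0 is the first release containing the fixes for
# CVE-2020-9497 and CVE-2020-9498 (check the official release notes).
FIRST_PATCHED = (1, 2, 0)

def needs_upgrade(installed: str) -> bool:
    """True if the installed version predates the patched release."""
    return parse_version(installed) < FIRST_PATCHED

for v in ("1.0.0", "1.1.0", "1.2.0"):
    print(v, "vulnerable" if needs_upgrade(v) else "patched")
```

In practice the installed version can be read from the Guacamole web interface or the deployed package metadata and fed to a check like this.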