Glenfiddich has found a new use for the waste from its Scotch whisky production, rather than simply selling off the spent grains left over from the malting process as cattle feed.
Reuters reported that Glenfiddich is converting its delivery trucks to run on low-emission biogas made from waste products of the whisky distilling process.
The Scotland-based distillery’s parent company, William Grant & Sons, first developed the technology and Glenfiddich is implementing it at a distillery in Dufftown.
The process converts production waste into an Ultra-Low Carbon Fuel gas: in what is called anaerobic digestion, bacteria break down organic matter in an oxygen-free vessel to produce biogas, which sharply limits harmful emissions.
Glenfiddich says the biogas is used in three Iveco trucks that traditionally run on liquefied natural gas. The trucks transport Glenfiddich spirit from the Dufftown distillery to bottling and packaging, covering four William Grant & Sons sites.
According to Glenfiddich, biogas decreases carbon dioxide emissions by over 95% in comparison to diesel.
It also said biogas cuts harmful particulates and greenhouse gases by up to 99%, and that each truck will displace approximately 250 tons of carbon dioxide per year.
Stuart Watts, distillery director at William Grant & Sons, said the technology used in Glenfiddich's trucks could eventually be extended and could fuel other companies' trucks in the future.
The Scottish whisky industry plans to reach its net-zero carbon targets by 2040.
You might have heard about Python, and now you want to learn it and build your career around it. Python is a high-level, interpreted programming language, and one of the most widely used interpreted languages.
For the past five years, Python has been among the fastest-growing programming languages, for several reasons: it is versatile and requires comparatively little code, which has raised its popularity in the market. As its popularity has increased, so has the demand for skilled Python developers.
Many small start-ups have grown into large global enterprises on the back of Python's popularity. Software developers, software engineers, data scientists, and people in many other roles use Python regularly in their work, much of which comes down to solving problems in Python.
Here you will find Python-specific questions on topics such as strings, threads, loops, and sockets. There are not many general questions, because those answers are covered in another post; this article focuses only on Python interview questions.
This page guides you on how to crack a Python programming interview with confidence. Reading it should also reassure you that choosing Python for your development work is the right decision.
What is Python?
Python is a high-level programming language that can be used to build almost any type of application with the right tools and libraries. It supports objects, threads, modules, exception handling, automatic memory management, and more, which helps in building applications that solve real-world problems.
Since it is a general-purpose programming language, it is straightforward and easy to learn. It emphasizes readability, which reduces the cost of maintenance. The language supports scripting, is open source, and encourages third-party packaging and code reuse.
Its high-level data structures, combined with dynamic typing and dynamic binding, attract a huge community of developers for rapid application development and deployment.
What is a dynamically typed language?
Before we discuss dynamically typed languages, we need to understand typing. Typing refers to type-checking in a programming language. Python is a strongly typed language, so "1" + 2 raises a type error because implicit type coercion is not allowed; a weakly typed language would coerce the values and give the output "12".
Type-checking can happen at two stages: static, where types are checked before execution, and dynamic, where types are checked during execution.
Python is an interpreted language that executes each statement line by line, performing type-checking as the code runs. It is therefore a dynamically typed language.
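A minimal sketch of the behavior described above (the exact wording of the error message varies between Python versions):

```python
x = 1            # the name x currently refers to an int
x = "one"        # the same name can later refer to a str: types live on objects, not names

try:
    result = "1" + 2          # strong typing: no implicit coercion between str and int
except TypeError as exc:
    print(exc)                # e.g. can only concatenate str (not "int") to str

print(str(1) + "2")           # explicit conversion is fine and prints "12"
```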
What is an interpreted language?
An interpreted language is one whose statements are executed line by line at runtime rather than being compiled to machine code ahead of time; Python programs run directly from the source code.
What is PEP 8, and why is it so important?
First of all, PEP stands for Python Enhancement Proposal. A PEP is an official design document that provides information to the Python community or describes a new feature or process for Python. PEP 8 is especially important because it documents the style guidelines for Python code. All Python open-source communities expect these guidelines to be followed strictly and sincerely.
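A small illustration of the kind of conventions PEP 8 covers (naming, whitespace, and layout); the function names are invented for the example:

```python
# Not PEP 8 compliant: camelCase name, no spaces around operators,
# and a compressed one-line definition.
def addNumbers(a,b):return a+b

# PEP 8 compliant: snake_case name, spaces around operators,
# four-space indentation, and a docstring.
def add_numbers(a, b):
    """Return the sum of a and b."""
    return a + b
```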
How is memory managed in Python?
Memory management in Python is handled by the Python memory manager. Memory is allocated from a private heap space dedicated to Python, in which every Python object is stored. This heap is private and not directly accessible to the programmer, although Python provides API functions for working with it. Python also has built-in garbage collection, which recycles unused memory so that it can be returned to the private heap space.
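A minimal sketch of observing these mechanisms from Python itself:

```python
import gc
import sys

data = [1, 2, 3]
alias = data                      # a second reference to the same list object

# Reference counting is the primary mechanism; getrefcount() also counts
# the temporary reference created by the call itself.
print(sys.getrefcount(data))

del alias                         # dropping a reference lowers the count
print(sys.getrefcount(data))

# The cyclic garbage collector reclaims objects caught in reference cycles.
unreachable = gc.collect()
print(f"collected {unreachable} unreachable objects")
```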
What are Python namespaces, and why are they used?
A namespace ensures that object names are unique and can be used without conflict. Python implements namespaces as dictionaries with a 'name as key' corresponding to an 'object as value'. This allows multiple namespaces to use the same name, each mapping it to a separate object. A few examples of namespaces are given below:
- Local namespace: includes the names defined inside a function; it is created temporarily when the function is called and cleared when the function returns.
- Global namespace: includes the names from the various packages and modules imported into the current project; it is created when the package is imported and lasts until the script finishes executing.
- Built-in namespace: includes the built-in functions and built-in exception names from the core of Python.
The lifecycle of a namespace depends on the scope of the objects mapped in it; when that scope ends, the namespace's lifecycle ends as well. Note that objects in an inner namespace cannot be accessed from an outer namespace.
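A minimal sketch showing the three kinds of namespaces in action:

```python
import builtins

x = "global"                     # lives in the module's global namespace

def func():
    x = "local"                  # lives in the function's local namespace
    print(locals())              # {'x': 'local'}

func()
print(globals()["x"])            # 'global'
print(builtins.len("abc"))       # len comes from the built-in namespace
```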
What is scope in Python?
Everything in Python works inside a scope. A scope is a block of code in which a Python object remains relevant; it uniquely identifies the objects used inside the program, and within its scope an object can be used without any prefix. A few examples of the scopes created in Python are listed below (see the sketch after this list):
- Local scope: refers to the local objects available in the current function.
- Global scope: refers to the objects that are available throughout the execution of the code since their creation.
- Module-level scope: refers to the global objects of the current module that are accessible in the program.
- Outermost scope: refers to the built-in names callable in the program; this scope is searched last when resolving a name reference.
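A minimal sketch of how Python resolves names through these scopes (the LEGB order: local, enclosing, global, built-in):

```python
value = "global"                    # global (module-level) scope

def outer():
    value = "enclosing"             # enclosing scope for inner()

    def inner():
        value = "local"             # local scope wins first
        print(value)                # -> local

    inner()
    print(value)                    # -> enclosing

outer()
print(value)                        # -> global
print(len("abc"))                   # len is resolved from the built-in scope
```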
What are pickling and unpickling?
The pickle module accepts a Python object, converts it into a byte-stream representation, and dumps it into a file using the dump function; this process is called pickling. The reverse process, in which the original Python object is retrieved from the stored representation, is called unpickling.
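A minimal sketch of pickling and unpickling, using a hypothetical data.pkl file:

```python
import pickle

settings = {"theme": "dark", "retries": 3}

# Pickling: serialize the object to a byte stream and write it to a file.
with open("data.pkl", "wb") as fh:
    pickle.dump(settings, fh)

# Unpickling: read the byte stream back and reconstruct the original object.
with open("data.pkl", "rb") as fh:
    restored = pickle.load(fh)

print(restored == settings)   # True
```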
What are five benefits of using Python?
You get five significant benefits by using Python. They are listed below:
- It comprises a huge standard library covering most internet platforms and formats, such as email and HTML.
- Python does not require explicit memory management, because the interpreter itself allocates memory for new variables and frees it automatically.
- Its use of indentation rather than braces gives it easy readability.
- Python is very easy to learn for beginners.
- It has built-in data types that save programmers the time and effort of declaring variables.
Python is a very important programming language that is becoming essential for companies and developers alike. The list above covers some of the top Python interview questions. It does not contain every possible scenario, but it can be an excellent starting point for doing well in your interview.
August 11, the 223rd day of the year in the Gregorian calendar (224th in leap years), marks an important date in world history. Several notable incidents and events took place on this date over the centuries, including battles, uprisings, assassinations, expeditions, natural disasters, and events of political, technological, and scientific significance. Some important events that occurred on this day include the Watts Rebellion beginning, the Russia food shortage, the Philippines B-29 bomber raids, the first federal prisoners landing on Alcatraz, the Germans beginning to evacuate Sicily, Reagan joking about bombing Russia, the last U.S. ground combat unit departing South Vietnam, and the Weimar Constitution being adopted in Germany.
Let's discuss a few of the major historical events that took place on August 11.
1919: Weimar Constitution adopted in Germany
On August 11, 1919, Friedrich Ebert, a member of the Social Democratic Party and the provisional president of the German Reichstag (government), signs a new constitution, called the Weimar Constitution, into law, officially creating Germany's first parliamentary democracy.
Even before Germany acknowledged its defeat at the hands of the Allied powers on the battlefields of World War I, discontent and disorder ruled on the home front, as exhausted and hunger-plagued German citizens expressed their frustration and anger with large-scale strikes among factory workers and mutinies within the armed forces.
1965: Watts Rebellion begins
In the predominantly Black Watts neighborhood of Los Angeles, racial tension reaches a breaking point after two white policemen scuffle with a Black motorist suspected of drunken driving. A crowd of spectators gathered near the corner of Avalon Boulevard and 116th Street to watch the arrest and soon grew angry at what they believed to be yet another incident of racially motivated abuse by the police.
An uprising soon began, spurred on by residents of Watts who were embittered after years of economic and political isolation. The rioters eventually ranged over a 50-square-mile area of South Central Los Angeles, looting stores and torching buildings as snipers fired at police and firefighters. Finally, with the help of thousands of National Guardsmen, the violence was quelled on August 16.
1972: Last U.S. ground combat unit departs South Vietnam
The last U.S. ground combat unit in South Vietnam, the 3rd Battalion, Twenty-First Infantry, departs for the United States. The unit had been guarding the U.S. airbase at Da Nang. This left only 43,500 advisors, airmen, and support troops in-country. That number did not include the sailors of the 7th Fleet on station in the South China Sea or the air force personnel in Thailand and Guam.
What is web design? What is web development? The two terms are often used interchangeably, but they are different. One person can be a designer who also builds websites, while another can be a programmer who specializes in user interface (UI) work. This article will explore the differences between web design and web development.
What is Web Design?
Web design is the process of conceptualizing a visual interface for a website. It involves creative thinking about how the website will look and feel to the user. It also includes deciding on keywords, image, color, and the site’s overall content.
What is Web Development?
Web development is the process of turning a design into a working website: writing the code, building the functionality, and connecting the site to the systems it depends on.
Differences between Web Design and Web Development
A good web designer should have strong visual skills. They should be able to think creatively regarding how the website will look. They should be able to decide on the color scheme for the site, compose keywords and write content for it appropriately. Qualified web designers are usually creative individuals keen on the internet and computers. Web designers can come from various backgrounds, including graphic design, photography, or psychology.
A web designer uses vector graphic editor software to layout the site’s features. These programs enable easy navigation between layers and objects. The most common vector software are Inkscape, Corel Draw, and Adobe Illustrator. A web developer uses HTML and CSS editors such as Sublime Text, Dreamweaver, Espresso, or Atom.
Conceptualization of Website
SEO and Site Security
Web designers are required to know SEO, Google Analytics, and Google Webmaster Tools. Web development requires knowledge about site security and ethical hacking to prevent DDoS attacks.
Nature of Projects
Web designers work on website projects either as part of a team or as individuals. They work for various companies worldwide, either in their own office or remotely. They have a team of designers that work together to make the project succeed.
Web developers generally work on different projects as independent contractors, and they may be based anywhere in the world. Web developers establish the needs of a website and then decide which languages and tools to use, creating artifacts such as flowcharts, wireframes, and mock-ups based on that information. The web developer also decides how to build the website within a CMS (content management system).
A web designer will oversee and manage the project to ensure that the website meets the client’s needs, while the developer will directly focus on coding. The web design process can be iterative; adding new features can fundamentally alter aspects of a previously completed site. These alterations are possible due to changes in design and strategy after all visual elements have been completed.
What to consider when choosing the best web design and development company
When choosing a web design and development company, there are many factors to consider. They must be able to build the website you need and do it on time and within budget. The best web design and development companies will have the following qualities:
• Highly experienced. They have at least 10 years of experience, and the person you will be working with should have a blog on the website where they discuss their work.
• Their portfolio should showcase their work, which is original and creative.
• They should also be responsive to questions via email or chat, and they must respond quickly during business hours in your time zone. This means they are available to contact if a problem occurs with your website.
• They should be able to guide you through the process of building a website and should be able to explain technical terms if you do not understand them.
• They must be able to solve problems with your website, and they can fix bugs that occur due to updates from Google or WordPress.
• Their service package should include SEO, site security, and CMS education for the development of your site.
• The website design and development company should give you a free estimate for the project and explain how their pricing works before the project begins.
Web design is the art of creating the visual elements of a website, while web development (or web programming) is the process of turning that design into an actual, working website.
A combined effort of web design, graphic design, and web development is required to create a successful website. An effective user experience can only be created by considering both web design and development. With an effective website developed by experts to high-quality standards, a business can become more competitive in the global market instead of remaining entirely offline.
Website Design and Development in Austin, TX
If you are interested in working with a reliable and experienced website design company that can assist your business with website design and web development in San Marcos and Austin, drop tekRESCUE a line! We have built all kinds of websites for a wide variety of clients and businesses. Check out our portfolio here!
What is Data Science?
Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, like data mining. Put simply, it’s a way to make sense of all your data.
Why use Data Science?
Whatever your goals within your organization, chances are it will require doing more with your data. So along with a powerful in-memory database and clear data strategy, having data scientists to help you get the most value from your analytics will be of huge benefit to you.
Latest Data Science Insights
There is always something new to learn in software development. Our technical expert has put together a reading list for all enthusiastic developers.
Being able to predict future trends and needs with precision is key to success. So how can you solve the complex analytic challenges that are coming along with it? And how can you save money on your analytics tools?
Big data science is revolutionizing the way businesses create value from data. The ability to do high-performance in-database programming is the fundamental building block of big data science applications.
Interested in learning more?
Whether you’re looking for more information about our fast, in-memory database, or to discover our latest insights, case studies, video content and blogs and to help guide you into the future of data.
There are two ANSI (American National Standards Institute) SQL standard character types supported in Exasol: CHAR and VARCHAR.
The CHAR(n) data type has a fixed and pre-defined length n. When you insert a shorter value than the pre-defined length, then spacing characters (padding) are used to fill the space. The length is limited to 2000 characters and you can use ASCII or UTF-8 (Unicode) to define the character set.
VARCHAR(n) can contain any string of the length n or smaller. These strings are stored in their respective length. The maximum allowed length is 2,000,000 characters and you can use ASCII or UTF-8 (Unicode) to define the character set.
If no character set is defined for CHAR(n) or VARCHAR(n), UTF-8 is used by default.
| Exasol Type (ANSI type) | Character Set | Note |
|---|---|---|
| CHAR(n) | ASCII, UTF8 | 1 ≤ n ≤ 2,000 (default = 8) |
| VARCHAR(n) | ASCII, UTF8 | 1 ≤ n ≤ 2,000,000 (default = 128) |
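A minimal Python sketch emulating the storage semantics described above (this is only an illustration of fixed-length padding versus variable-length storage, not how Exasol is implemented):

```python
def char_store(value: str, n: int) -> str:
    """Emulate CHAR(n): fixed length, shorter values are padded with spaces."""
    if len(value) > n:
        raise ValueError(f"value longer than CHAR({n})")
    return value.ljust(n)

def varchar_store(value: str, n: int) -> str:
    """Emulate VARCHAR(n): stored at its actual length, up to n characters."""
    if len(value) > n:
        raise ValueError(f"value longer than VARCHAR({n})")
    return value

print(repr(char_store("abc", 8)))     # 'abc     '  (padded to length 8)
print(repr(varchar_store("abc", 8)))  # 'abc'       (kept at length 3)
```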
In this blog, we will cover the basics of AWS Virtual Private Cloud (VPC), NAT Gateway, NAT Instances and explain the working of a High Availability version of NAT instance deployment.
What is Virtual Private Cloud or VPC?
A VPC is a virtual network on AWS that is similar to an on-premises network and provides the same level of control, security, and usability, while abstracting away the complexities of setting up an on-premises network.
The network configuration can be set up to match our requirements: we can define IP address spaces, routing tables, and subnets, and therefore control whether our servers are internet-facing or stay isolated and secure within our VPC. This gives us complete control over both ingress (incoming) and egress (outgoing) traffic. All other AWS resources, such as EC2 instances, databases, and storage buckets, are deployed within VPCs to secure them and to control their interaction with the internet and with our own deployed services.
What are the features of AWS VPC?
The features and structural components of AWS VPC are:
Subnets: These are used to segregate the VPC and span the VPC into multiple Availability zones.
Routing Tables: It is used to manage and control the Egress traffic.
Internet Gateway (IGW): Entry point to the internet from within the VPC.
Availability Zone Management: Manage and create multiple Availability Zones from the VPC.
NAT Gateway: It is used to enable the resources within a private subnet to get access to the internet.
Network Access Control Lists (NACL): It is a stateless component that controls and manages access to each subnet within the VPC.
What is NAT Gateway and how does it work?
A NAT gateway is used to enable instances within a private network to connect to the internet. It secures those instances by preventing the internet from initiating connections to them; it therefore allows only egress traffic and blocks all ingress traffic.
Image Source: AWS Docs
Consider a scenario in which we have a VPC containing a Public Subnet and a Private Subnet. The Public Subnet, as its name suggests, has access to the internet via the Internet Gateway (IGW) and contains the applications and servers that need to be internet-facing, such as web servers, web applications, and public-facing API servers. The Private Subnet contains internal services, EC2 instances, and other resources that need to be secured and are used internally in conjunction with other resources, such as databases and data pipeline servers. Each subnet has a Route Table that holds the destination-target mappings between these subnets.
The NAT Gateway is set up inside the Public Subnet. To give the Private Subnet access to this NAT Gateway, the Private Subnet's Route Table, which already contains the local route, is updated with a route that points to the NAT Gateway (0.0.0.0/0 -> nat-gateway-id).
The NAT Gateway has an Elastic IP address assigned to it. The Public Subnet already has access to the Internet Gateway (IGW), so the NAT Gateway is also connected to the IGW through a route (0.0.0.0/0 -> igw-id). Any request for internet access that originates from a resource or EC2 instance inside the Private Subnet is routed to the NAT Gateway in the Public Subnet, and the NAT Gateway in turn makes the request to the internet via the IGW, creating a secure layer of abstraction between our private resources and the internet.
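A minimal boto3 sketch of wiring this up from code; the subnet, Elastic IP allocation, route table IDs, and region below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create the NAT Gateway in the public subnet, backed by a pre-allocated Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",        # public subnet
    AllocationId="eipalloc-0123456789abcdef0",  # Elastic IP allocation
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT Gateway is available before routing traffic to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Point the private subnet's default route at the NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",       # private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```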
Note: If we have multiple subnets in different Availability Zones, AWS will not automatically set up a NAT Gateway in every Availability Zone; we would need to set up a separate NAT Gateway for each Availability Zone.
What are NAT Instances?
Apart from using the AWS NAT Gateway, we can create our own NAT AMI and run it on an EC2 instance in a Public Subnet of the VPC. This can likewise be used to let the Private Subnet initiate egress traffic to the internet while keeping it secure from any ingress traffic from the internet.
Image Source: AWS Documentation
The working of Egress request from Private Subnet to the internet is very similar to the one used in NAT Gateway.
Private Subnet (EC2 Instance) -> Public Subnet (NAT Instance) -> Internet Gateway(IGW)
We can create our own AMI by customizing an existing amazon AMI to run as an NAT instance.
Reference Link: Creating Amazon EBS-backed AMI’s
Reference Link: Create NAT Instance
Note: These approaches to creating NAT instances are useful and cost-effective compared to using a dedicated NAT Gateway; however, they are not nearly as scalable, resilient, or fault-tolerant as a NAT Gateway, mainly because scripts are used to manage failover between instances. The maintenance workload of this approach is higher than with a NAT Gateway, but it is very cost-effective and worth using in many cases.
Reference Link: Comparison reference between NAT Instances and NAT Gateway
How to deal with the scalability and availability issue on NAT Instances?
This is the situation in which GlobalDots and the Terraform community come to the rescue. Here at GlobalDots, we created a module that provisions High Availability NAT instances by launching Auto Scaling groups with NAT instances in the specified Public Subnets to allow outbound (egress) internet traffic from the Private Subnets. Each instance in this setup runs AWSnycast for route publishing.
How does this module work?
The module takes the approach of removing a NAT instance from the route table if it becomes unavailable. When a NAT instance is terminated, the Auto Scaling group spins up a new one and attaches the proper ENI to it.
How is this module used?
The inputs, outputs, and usage of this module are explained in the GitHub repository.
Repository Link: Globaldots/terrafrom-aws-nat-instances-ha
Licensing of this module?
This module is licensed under the Apache 2 license and is based on tf_aws_nat from the Terraform community.
Anna P., a student in Learning Tree's course 8420, Querying Data with Transact SQL, asks "What good is an SQL Server user with no login?". No good at all, if you think of users just as people. Clearly, people need to login before they can use SQL Server. However, users can also be thought of as entities that have permissions to perform specific tasks on SQL Server. (Note how database folks love to use the word "entities". Sounds much more intellectual than "things".) This different way of thinking about users, as things with permissions, can be used in a very important way in the management of SQL security.
As you already know, the EXECUTE AS statement can be used to change the execution permission context for T-SQL batch code, stored procedures, and functions (except in-line user-defined functions). EXECUTE AS can be used in other ways as well, including server-level code, but here we are considering the most common application: batch and stored procedure code executing at the database level.
There are four options. The first, EXECUTE AS CALLER, is the default and specifies that the stored procedure code executes at the same level of privilege as whoever invoked the stored procedure. SELF specifies execution at the level of the individual who created the stored procedure. OWNER may be applied if the developer creating the procedure needs the procedure to run as the owner, most likely "dbo". Our interest right now is the last option, EXECUTE AS 'username'.
It's important to note that 'username" cannot be a role or group or built-in account or anything like that. It must be the name of a database user. Using the name of an actual human database user would be a poor idea for several reasons. People get sick, they quit, they get fired. Sometimes they get promoted. We do not want to embed into our code a strong dependency on an unpredictable entity. Another reason not to use the name of an actual human user is the unlikelihood that a person has the exact set of permissions you want to grant the stored procedure. This is where the user without login comes in.
USER WITHOUT LOGIN
As its name implies, a user without login cannot log into SQL Server. However, the "user" exists as a database object and may therefore be granted or denied permissions as may any other user. These permissions, of course, will be exactly those required by your stored procedure code. No more, no less.
The same Management Studio dialog used to create "normal" users can be used to create a user without login.
You could type the T-SQL in the time it would take to find the SSMS graphical interface:
CREATE USER TheWhateverApp WITHOUT LOGIN;
Once the user has been created, you can assign permissions to that user like any other. Users without logins can also be assigned to database roles, if that makes things easier.
Users without logins are very valuable when used in conjunction with the EXECUTE AS statement. The combination permits administrator to fine-tune the permissions granted to T-SQL code in batches, stored procedures, and many user-defined functions.
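A minimal sketch of the pattern as it might be driven from application code; the connection string, table, and procedure names are made up for the example, and the T-SQL is simply sent through pyodbc:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=SalesDb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Create the loginless user and grant it exactly the permissions the code needs.
cur.execute("CREATE USER TheWhateverApp WITHOUT LOGIN;")
cur.execute("GRANT SELECT ON dbo.Orders TO TheWhateverApp;")

# The stored procedure then runs under that user's permissions, not the caller's.
cur.execute("""
CREATE PROCEDURE dbo.GetOrderCount
WITH EXECUTE AS 'TheWhateverApp'
AS
    SELECT COUNT(*) FROM dbo.Orders;
""")
```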
16 Sep Preparing for the Future of Blockchain in Data Centers
There have been many stages in history where technological advancements have completely and utterly changed how we live our lives and the opportunities we have access to as ordinary people. In the modern age, we have already built the infrastructure to move great distances at high speeds. The new frontier is in the digital space, and the new ideas here will be the impetus to move us forward. One such advancement came with the innovation of bitcoin and other cryptocurrencies as well as the infrastructure and software layers they run on such as blockchain.
Blockchain technology is in use everywhere today, often without being noticed. Some of the many applications for blockchain include smart contracts, decentralized finance (DeFi), decentralized applications (dApps), and central bank digital currencies (CBDCs), but there are many more, and the possibilities keep growing and expanding.
How Does the Blockchain Work?
A truly revolutionary modern technology, blockchain creates a decentralized record of each transaction, or block, on the ledger (the chain), which is sent to multiple locations with computers to create a verification system. Because the blockchain works this way, two things happen as a result. Not only does blockchain increase access to transparent data on the internet, but it also serves as a form of cybersecurity. The decentralized nature of a blockchain means that a hacker or bad actor would find it extremely difficult to manipulate the data unless they controlled more than 51% of the entire network, something that is increasingly difficult to do. A "false" change in one log will make it invalid when checked against the millions of other copies stored elsewhere around the world.
Creating blocks on the chain can be resource-intensive, but in the world today, cybersecurity and network integrity are paramount. This aspect is why many large companies are funding work on blockchain applications outside the cryptocurrency market. Equally, as resource-intensive "proof of work" blockchains move towards "proof of stake" systems, as Ethereum is doing, the technology is becoming carbon neutral to carbon negative. Key to the success of both systems, though, is secure, reliable, and consistent connectivity, power, and related services, as offered today by the data center industry.
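A toy sketch of the tamper-evidence idea described above: each block stores the hash of the previous block, so changing any earlier record invalidates everything after it. The transaction data is invented for the example.

```python
import hashlib
import json

def block_hash(contents):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(is_valid(chain))            # True

chain[0]["data"]["amount"] = 500  # tamper with an earlier block
print(is_valid(chain))            # False: the chain no longer verifies
```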
Providing Blockchain Infrastructure
The idea is that this investment in blockchain technology will create systems in the future in which blockchains could be used for cloud services as the user base and usage per user increase. This will change the digital infrastructure of data centers which are now used as hosting services for cloud-based systems.
Hosting Cryptocurrency Mining
This change in digital infrastructure will come from the increased popularity of cryptocurrencies and blockchain-based applications. Subsequently, the demand for very high-availability computing capability to reliably and cost effectively “mine” cryptocurrency will increase accordingly. As the popularity of blockchain continues to increase, the need to prepare for the coming storm of blockchains is imperative for data centers and colocation facilities. These locations are already hosting companies that, as this technology evolves, will want to begin implementing it and will need to be ready for the increasing demands for reliable power and high-performance equipment to process the creation and writing of blocks accurately, quickly and efficiently.
The resource intensiveness of a “proof of work” blockchain is because the mining of each block requires a mathematical puzzle that needs to be solved by the computer. This computation requires sufficient power to complete a complex puzzle and demands special attention to the equipment in a data center. To accomplish tasks like bitcoin mining, as we see today, there is specialized high-performance hardware required that can solve the computational algorithms intrinsic in the protocol. For “proof of stake” systems the demands lie in reliability, always on connectivity, ultra-fast and low latency processing and ensuring all systems are available 24x7x365.
For proof of work systems, not only does the hardware need to be state-of-the-art, but there are cooling systems that need to be put in place to deal with the heat that this intense computation will produce. For this, specialized centers and those preparing for the revolution in data usage have implemented immersion cooling systems, a liquid cooling technique for high-density hardware. For proof of stake networks, while they are low power, they still need extremely high availability and consistency in order to operate correctly so the task is no less daunting.
This preparation is essential to be ahead of the technological curve. This is truly the new frontier of information technology. The best way to make sure that you are ready for what may come is to work with a data center provider implementing these advanced technologies today.
One solution is to work with Digital Fortress for reliable, state-of-the-art, and secure hosting of blockchain infrastructure and cryptocurrency mining servers. Companies can benefit from the low energy cost and ultra-high performance that Digital Fortress data centers provide.
Americans’ trust in government is a quarter of what it was in the 1960s.
In a June 9 column in the Washington Post, longtime federal Insider Joe Davidson put it plainly: Uncle Sam isn’t a trustworthy dude. The column appeared just about a week before the country marked the 50th anniversary of the Watergate break-in, a scandal that forced the resignation of a president, that is taught in schools and universities—even in Florida and Texas, at least for now—that affected our politics and our governmental system and that shaped citizen attitudes towards government ever since. Watergate represented a demarcation in the nation’s history—between a time in which citizens trusted their government and a period in which trust was broken. And it has yet to be restored. Davidson's column focused on a report from the Pew Research Center released that week in early June.
That report features a chart that tracks the decline of trust between citizens and government. The graphic begins in 1958, near the end of the Eisenhower administration—Eisenhower's vice president was Richard Nixon. Then, 73% of Americans and majorities from both parties said they trusted the government to do what is right "just about always" or "most of the time." Trust peaked at 77% in 1964, shortly after Lyndon Johnson ascended to the presidency, after the assassination of John Kennedy.
And then Johnson declined to run in 1968, the Fall of Saigon ended the Vietnam War in 1975, both Martin Luther King and Robert Kennedy were assassinated in 1968, Watergate occured in 1972 and Nixon was impeached and then resigned in 1974.
By the end of that year, after Nixon had flown off to exile in San Clemente, California, just 36% of Americans said they trusted their government. The recent report released by Pew showed that public trust has fallen to a "disturbing" and "near historic low" of just 20%.
Alarms have sounded from multiple good government groups and leaders concerned about this decline. Trust in government is higher not only when government works better, but also when people have a better understanding of what government is doing, according to Teresa Gerton, President of the National Academy of Public Administration. President Biden's Management Agenda addresses the trust issue in its focus on improving citizen services as well as government performance.
Max Stier, CEO of the Partnership for Public Service, believes the negative slide in trust can be turned around if the government communicates better and promotes its successes and "the great work of career civil servants," according to Davidson’s column.
Former OMB official and now Grant-Thornton executive, Robert Shea, pointed to evidence-based policymaking as a key factor in rebuilding trust, when speaking with the Technology Policy Institute.
For Rajiv Desai of 3Di, writing in Route Fifty, the key ingredients are transparency, efficiency and accountability, or TEA. His argument resonated with me because it contained an acronym, which we all know is at the heart of the work of government. So who is right? Or are they all right? If only some are, what should be done to rebuild and restore? Let's explore the topic a bit more.
On July 5th, another venerable public polling organization, Gallup, reported that just 27% of Americans expressed confidence in their institutions—the lowest level of trust since the questions were first asked over 50 years ago. And that lack of confidence was widespread across U.S. institutions—Congress, the presidency, the Supreme Court, the military, business, police, media, churches, schools and more—14 institutions in all. The average confidence level—27%, as noted above—has declined from 46% in 1989. Americans also report having more animosity towards one another than they used to.
In 2019 political scientists Nathan Kalmoe and Lilliana Mason found—based on the Cooperative Congressional Election Study—that nearly half of registered voters think that the opposing party is not just bad but "downright evil"; nearly a quarter concur that, if that party's members are "going to behave badly, they should be treated like animals." So the issue of "trust" is not reserved for just government. Levels of trust in this country—in our institutions, in our politics and in one another—are all in decline. And while opinions of individual institutions do vary among groups, the overall distrust of institutions is universal, with little variation by gender, age, race, education or even party.
Explanations are hard to come by. One factor may be economic stagnation. Social scientists tell us that 90% of Americans born in 1940 could expect to make more than their parents; for those born in the 1980's, the rate has dropped to only 50%. Across the developed world, the poorer and less educated you are, the less trusting you tend to be. Another factor identified by Professor Benjamin Ho of Vassar College is an increase in ethnic diversity. In the U.S., he suggests, the prospect of a nonwhite majority in a country that once enslaved Black people may be intensifying tribalism. Tribalism can promote trust internally and mistrust externally—high trust within certain groups or clans and very low trust among them. And technology has made it easier for media outlets to cater to niche audiences. Doesn't it make sense in fact to place more trust in news and news sources that confirm what you already "know"?
The partisan rancor that is so common in our country today actually makes it much harder to measure trust. Survey questions that have been part of surveys for decades (e.g., an approval rating of the president) seem much less useful nowadays, when public sentiment hinges almost entirely on partisanship. One has to wonder how trustworthy our indicators of trust are. Finally, in another take on the impact of technology, author Rachel Botsman argues that advances in IT have created a new paradigm—that of "distributed trust". In Who Can You Trust?, she suggests that the old hierarchical model in which trust was transmitted from institution to individual—think the media: CBS Evening News with Walter Cronkite—has been replaced by a lateral model in which trust flows from individual to individual.
So what can we learn from the varied research on declining trust in government and how it might be restored? My conclusion is that while there are many good reasons to make government work better and to focus on improving citizen service—and we should continue to strive to do so—there isn't much evidence that we should count on any of these initiatives to change the decline noted by Pew and Gallup.
In fact, you can trust me on this.
Alan P. Balutis is a former distinguished fellow and senior director for North American Public Sector with Cisco Systems’ Business Solutions Group.
The fundamentals of Memory Forensics
What is it?
Memory Forensics is a procedure that takes place in real time: it captures a memory dump, then sorts and analyzes the information on the system. It is a method of digital analysis used to collect the volatile components of evidence in real time.
The limits of traditional investigation
Hackers continually develop new ways of accessing IT systems. More and more sophisticated malware and the injection of code directly into the RAM (floating code) make the task of the cyber investigators increasingly arduous.
In parallel to these tricks used by the authors of computer fraud, the technology continues to evolve (just think of the increase in the capacity of hard disks). This makes “traditional” Digital Forensics, which requires bit-by-bit copying of data from a hard disk and memory, less effective.
As a consequence, the analysis in real time of the RAM of systems makes it possible to obtain crucial information during a cyber investigation.
Pushing the limits of analysis
This technique has many advantages for cyber investigators, who gain access to data directly from the RAM of a system, that is to say, to what is happening in real time. Among the positive points, three advantages stand out.
1- It gives access to a part of the computer where suspicious software activities can be identified more effectively than on the hard disk drive, by making it possible:
- To study a system’s configuration while it is running;
- To identify contradictions present in the system (entropy principle) between what is happening in the memory and on the hard drive;
- To disclose the obfuscation methods and tools used by packers, binary obfuscators and rootkits designed for this purpose.
2- It can analyze and track recent activities on a system by making it possible:
- To identity all the activities in progress in their context;
- To build a profile of the user or attacker, according to the activities.
3- It collects evidence that cannot be found otherwise or which could disappear during a reboot, for example:
- Malware which resides in the memory only (code injection);
- Communications via chatting software,
- Internet browsing activities.
Carrying out a Memory Forensics investigation will require in-depth knowledge of the most recent trends in the field. Here are the six main stages that an investigation should cover:
- Identifying the rogue processes posing as legitimate processes by heuristic methods
- Detecting anomalies in the handling of process objects (DLLs, registry entries, threads, etc.);
- Examining the network artifacts and the communication ports used by the processes of the system in memory to determine the suspect elements;
- Searching for evidence of code injection and methods of obfuscation;
- Searching for signs of the presence of a rootkit by the hooking detection method;
- Making a copy of the process in memory and the drivers of the suspect system.
Of course, investigations carried out in real time require very extensive technological expertise, especially in view of preserving the integrity of the evidence collected. Our team is trained to run Memory Forensics type investigations. If you get stuck trying to obtain results with traditional investigation methods, contact us.
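As a flavor of the kind of low-level triage involved, here is a minimal sketch that pulls printable strings out of a raw memory image and checks them against simple indicators; the file name and indicator list are made up for the example, and real investigations rely on dedicated frameworks rather than a script like this.

```python
import re

INDICATORS = [b"mimikatz", b"powershell -enc", b"\\Temp\\svchost.exe"]  # example IOCs only

def printable_strings(path, min_len=6):
    """Yield matches of printable ASCII runs from a raw memory image (like `strings`)."""
    pattern = re.compile(rb"[ -~]{%d,}" % min_len)
    with open(path, "rb") as fh:
        data = fh.read()          # fine for a sketch; real dumps would be read in chunks
    yield from pattern.finditer(data)

for match in printable_strings("memdump.raw"):
    s = match.group()
    if any(ind.lower() in s.lower() for ind in INDICATORS):
        print(hex(match.start()), s.decode("ascii", "replace"))
```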
A Firewall is a network security device that monitors and filters incoming and outgoing network traffic based on an organization’s previously established security policies. At its most basic, a firewall is essentially the barrier that sits between a private internal network and the public Internet. A firewall’s main purpose is to allow non-threatening traffic in and to keep dangerous traffic out.
Firewalls have existed since the late 1980’s and started out as packet filters, which were networks set up to examine packets, or bytes, transferred between computers. Though packet filtering firewalls are still in use today, firewalls have come a long way as technology has developed throughout the decades.
Back in 1993, Check Point CEO Gil Shwed introduced the first stateful inspection firewall, FireWall-1. Fast forward twenty-seven years, and a firewall is still an organization’s first line of defense against cyber attacks. Today’s firewalls, including Next Generation Firewalls and Network Firewalls support a wide variety of functions and capabilities with built-in features, including:
- Packet filtering: a small amount of data is analyzed and distributed according to the filter's standards.
- Proxy service: a network security system that protects while filtering messages at the application layer.
- Stateful inspection: dynamic packet filtering that monitors active connections to determine which network packets to allow through the firewall.
- Next Generation Firewall (NGFW): a deep packet inspection firewall with application-level inspection.
A firewall is a necessary part of any security architecture; it takes the guesswork out of host-level protections and entrusts them to your network security device. Firewalls, and especially Next Generation Firewalls, focus on blocking malware and application-layer attacks. Along with an integrated intrusion prevention system (IPS), these Next Generation Firewalls can react quickly and seamlessly to detect and combat attacks across the whole network. Firewalls can act on previously set policies to better defend your network and carry out quick assessments to detect invasive or suspicious activity, such as malware, and shut it down. By leveraging a firewall for your security infrastructure, you're setting up your network with specific policies to allow or block incoming and outgoing traffic.
Network-layer or packet filters inspect packets at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set, where the source and destination in the rule set are based upon Internet Protocol (IP) addresses and ports. Firewalls that do network-layer inspection perform better than similar devices that do application-layer inspection. The downside is that unwanted applications or malware can pass over allowed ports, e.g. outbound Internet traffic over the web protocols HTTP and HTTPS, ports 80 and 443 respectively.
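A toy sketch of the rule matching a network-layer packet filter performs; the rules and addresses are invented for the example:

```python
from ipaddress import ip_address, ip_network

# Ordered rule set: first match wins, default deny.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 443, "proto": "tcp"},
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 80, "proto": "tcp"},
    {"action": "deny", "src": "0.0.0.0/0", "dst_port": None, "proto": None},
]

def filter_packet(src_ip, dst_port, proto):
    for rule in RULES:
        if ip_address(src_ip) not in ip_network(rule["src"]):
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != dst_port:
            continue
        if rule["proto"] is not None and rule["proto"] != proto:
            continue
        return rule["action"]
    return "deny"                      # nothing matched: drop the packet

print(filter_packet("10.1.2.3", 443, "tcp"))    # allow
print(filter_packet("203.0.113.9", 22, "tcp"))  # deny
```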
Firewalls also perform basic network level functions such as Network Address Translation (NAT) and Virtual Private Network (VPN). Network Address Translation hides or translates internal client or server IP addresses that may be in a “private address range”, as defined in RFC 1918 to a public IP address. Hiding the addresses of protected devices preserves the limited number of IPv4 addresses and is a defense against network reconnaissance since the IP address is hidden from the Internet.
Similarly, a virtual private network (VPN) extends a private network across a public network within a tunnel that is often encrypted where the contents of the packets are protected while traversing the Internet. This enables users to safely send and receive data across shared or public networks.
Next Generation Firewalls inspect packets at the application level of the TCP/IP stack and are able to identify applications such as Skype, or Facebook and enforce security policy based upon the type of application.
Today, UTM (Unified Threat Management) devices and Next Generation Firewalls also include threat prevention technologies such as intrusion prevention system (IPS) or Antivirus to detect and prevent malware and threats. These devices may also include sandboxing technologies to detect threats in files.
As the cyber security landscape continues to evolve and attacks become more sophisticated, Next Generation Firewalls will continue to be an essential component of any organization’s security solution, whether you’re in the data center, network, or cloud. To learn more about the essential capabilities your Next Generation Firewall needs to have, download the Next Generation Firewall (NGFW) Buyer’s Guide today.
Touchscreens and multitouch technology make up a significant majority of Apple's research into future user interface improvements, and the iPhone introduced some of those UI paradigm shifts into our increasingly mobile computing. Since almost all interaction with the iPhone—and presumably the hopefully imminent Apple tablet—involves a touchscreen, Apple hopes to improve on touchscreen technology by using each individual LCD pixel as a touch sensor.
Apple has filed a patent application, published today, for a "display with dual-function capacitive elements." By mixing display and sensing functions into each individual pixel, it would make touchscreens thinner, lighter, and brighter than they currently are today.
The way current touchscreens found on most smartphones work is by overlaying a touch-sensitive panel on top of a traditional LCD panel. The touch-sensitive panel is essentially a grid array of capacitors, most commonly made from the transparent conductor indium tin oxide (ITO). When your fingertip comes in contact with the small magnetic fields present in the capacitors, it causes the voltage along those capacitors to fluctuate. A processor translates these fluctuations into touch positions.
The need for additional layers covering the LCD screen means it is thicker, and despite the fact that ITO is transparent, the touch layer does block some light coming from the LCD display underneath. Apple's solution involves using each individual pixel as a capacitive sensor, eliminating the need for an additional layer for a separate touch sensor.
Part of the magic of Apple's patent relies on forming an IPS LCD display using a low temperature polycrystalline silicon instead of the more common amorphous silicon. Materials engineering nerds may want to look at the patent for a more detailed explanation, but suffice it to say that the poly-Si allows for a much faster switching frequency for driving the individual pixels. (For those unaware, the individual pixels in an LCD panel switch on and off at a rate much faster than we can perceive—it's this same switching that can cause eye fatigue from staring at your screen all day.)
Apple's idea takes advantage of the faster switching of poly-Si to drive the pixels one instant, and use the capacitive properties of the individual pixels as touch sensors the next. The switching happens fast enough to give a clear, bright display, as well as responsive touch sensing. The elimination of the separate touch-sensing layer also makes for a thinner, lighter, brighter, and simpler touchscreen unit.
Apple proposes its solution for mobile devices, making references to iPhones, iPods, and even MacBooks, but don't be surprised if such an innovation also makes its way into an Apple tablet.
It would be impossible nowadays to separate our everyday lives from technology. We travel well-worn, comfortable paths online and engage in digital activities that work for us. But could those seemingly harmless habits be putting out the welcome to cyber criminals out to steal our data?
It’s a given that our “digital-first mindset” comes with inherent risks. With the work and learn from home shift looking more permanent and cybercrime on the rise, it’s imperative to adopt new mindsets and put new skills in motion. The first step with any change? Admitting your family may have a few bad habits to fix. Here are just a few to consider.
7 Risky Digital Behaviors
1. You share toooo much online. Too Much Information, yes, TMI. Oversharing personal information online is easy access for bad actors online. Those out to do harm online have made it their life’s work to piece together your personal details so they can steal your identity—or worse. Safe Family Tips: Encourage your family not to post private information such as their full name, family member names, city, address, school name, extracurricular activities, and pet names. Also, get in the habit of a) setting social media profiles to private, b) regularly scrubbing personal information on social profiles—this includes profile info, comments, and even captions that reveal too much c) regularly editing your friends lists to people you know and trust.
2. You’ve gotten lazy about passwords. It’s tough to keep up with everything these days. We get it. However, passwords are essential. They protect your digital life—much like locks on doors protect your physical life. Safe Family Tips: Layer up your protection. Use multi-factor authentication to safeguard user authenticity and add a layer of security to protect personal data and all family devices. Consider adding comprehensive software that includes a password manager as well as virus and malware protection. This level of protection can add both power and peace of mind to your family’s online security strategy.
3. You casually use public Wi-Fi. It’s easy to do. If you are working away from home or on a family trip, you may need to purchase something, meet a deadline, or send sensitive documents quickly. Public Wi-Fi is easy and fast, but it’s also loaded with security gaps that cybercriminals camp out on. Safe Family Tip: If you must conduct transactions on a public Wi-Fi connection, consider McAfee Total Protection. It includes antivirus and safe browsing software, plus a secure VPN.
4. You have too many unvetted apps. We love apps, but can we trust them? Unfortunately, when it comes to security and privacy, apps are notoriously risky and getting tougher to trust as app technology evolves. So, what can you do? Safe Family Tips: A few things you can do include a) Double-checking app permissions. Before granting access to an app, ask yourself: Does this app need what it’s asking me to share? Apps should not ask for access to your data, b) researching the app and checking its security level and if there have been breaches, c) reading user reviews, d) routinely deleting dormant and unused apps from your phone. This is important to do on your phone and your laptop, e) monitor your credit report for questionable activity that may be connected to a malicious app or any number of online scams.
5. You’ve gotten too comfortable online. If you think that a data breach, financial theft, or catfish scam can’t happen to you or your family, it’s a sign you may be too comfortable online. Growing strong digital habits is an ongoing discipline. If you started strong but have loosened your focus, it’s easy to get back to it. Safe Family Tips: Some of the most vulnerable areas to your privacy can be your kids’ social media. They may be oversharing, downloading malicious apps, and engaging with questionable people online that could pose a risk to your family. Consider regularly monitoring your child’s online activity (without hovering or spying). Physically pick up their devices to vet new apps and check they’ve maintained all privacy settings.
6. You lack a unified family security strategy. Consider it: If each family member owns three devices, your family has countless security gaps. Closing those gaps requires a unified plan. Safe Family Tips: a) Sit down and talk about baseline security practices every family member should follow, b) inventory your technology, including IoT devices, smartphones, game systems, tablets, and toys, c) make “keeping the bad guys out” fun for kids and a challenge for teens. Sit and change passwords together, review privacy settings, reduce friend lists. Come up with a reward system that tallies and recognizes each positive security step.
7. You ignore updates. Those updates you’re putting off? They may be annoying, but most of them are security-related, so it’s wise to install them as they come out. Safe Family Tip: Many people make it a habit to change their passwords every time they install a new update. We couldn’t agree more.
Technology continues to evolve and open extraordinary opportunities to families every day. However, it’s also opening equally extraordinary opportunities for bad actors banking on consumers’ casual security habits. Let’s stop them in their tracks. If you nodded to any of the above habits, you aren’t alone. Today is a new day, and putting better digital habits in motion begins right here, right now.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
|
<urn:uuid:b02e17c6-d117-4912-b27b-4a055814c8fc>
|
CC-MAIN-2022-40
|
https://www.mcafee.com/blogs/family-safety/7-common-digital-behaviors-that-put-your-familys-privacy-at-risk/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00224.warc.gz
|
en
| 0.931652 | 1,195 | 2.5625 | 3 |
Customers with an MPLS connection between sites can use this article as a guide for allowing communication over the LAN when the MPLS connection is not intended for accessing the Internet. Alternatively, if the MPLS connection is the primary WAN link for the location and needs to be implemented with VPN failover, refer to the guide on configuring site-to-site VPN over MPLS.
In the example below, two sites exist. Each site has an independent connection to the Internet and an MPLS circuit between the two sites. Site A has a local subnet of 10.0.2.0/24, and Site B has a local subnet of 10.0.1.0/24.
In order for both sites to communicate with each other, a static route must be configured on each MX for the subnet of the remote site, pointing to the local MPLS router (connected to the MPLS CIRCUIT) as the next hop. The MPLS router, generally owned by the ISP, will then pass the traffic to the remote site. If a client at Site A wants to talk to a client at Site B, the traffic will be forwarded over the MPLS link.
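For readers who like to see the lookup logic spelled out, here is a small illustrative Python sketch of the static-route decision described above. The next-hop address used for the local MPLS router is a made-up assumption for the example, not a value taken from this article.

import ipaddress

# Static routes as configured on the Site B MX (local subnet 10.0.1.0/24).
# "next_hop" would be the LAN IP of the local MPLS router (hypothetical here).
static_routes = [
    {"name": "Site A via MPLS",
     "subnet": ipaddress.ip_network("10.0.2.0/24"),
     "next_hop": "10.0.1.254"},
]

def next_hop_for(destination):
    """Return the next hop for a destination IP, or None to use the default route."""
    address = ipaddress.ip_address(destination)
    for route in static_routes:
        if address in route["subnet"]:
            return route["next_hop"]
    return None

print(next_hop_for("10.0.2.25"))  # 10.0.1.254 -> forwarded over the MPLS link
print(next_hop_for("8.8.8.8"))    # None -> follows the default route to the Internet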
The screenshot below shows the Routing section of the Security & SD-WAN > Configure > Addressing & VLANs page in Dashboard for Site B. The first entry describes the local network at Site B. The second entry describes the static route to reach Site A over the MPLS link.
|
<urn:uuid:dd67c298-36af-47c4-9e76-eeebf3e125b0>
|
CC-MAIN-2022-40
|
https://documentation.meraki.com/MX/Networks_and_Routing/Integrating_an_MPLS_Connection_on_the_MX_LAN
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00224.warc.gz
|
en
| 0.912601 | 308 | 2.921875 | 3 |
Singapore’s Bluetooth-based contact tracing app TraceTogether was the first of its kind, intended to log potential exposure events without violating the privacy of participants. As with all of the efforts of this nature, voluntary adoption by the public was key to success. TraceTogether struggled in this area due in no small part to technical issues that hampered the usability of phones. The government has gone back to the drawing board and come up with a new answer: a contact tracing wearable that remains offline as it logs close contacts, only making that data available when a medical professional makes a coronavirus diagnosis and requests access to the device.
The government has indicated that it will provide these portable wearable devices to the public for free, though there has been no word as to whether they will be mandatory in certain public spaces. That possibility has raised privacy concerns, but if done right the contact tracing wearable concept has potential to create a model that solves the problems that Bluetooth app systems are currently struggling with around the world.
Singapore’s contact tracing wearable plan
The new contact tracing wearables would use Bluetooth, but would not be connected to the internet or any sort of private government network while users are out and about in public. The devices would log contacts with known diagnosed parties locally, with this data only available to the government if you test positive for Covid-19. The data would only be handled by a medical professional capable of making the diagnosis.
While the device is not connected to the internet, it would be "always on" in terms of logging Bluetooth contacts. This has raised concerns from privacy advocates, coalescing around a Change.org petition that has collected nearly 50,000 signatures as of this writing.
Though still available, the TraceTogether app appears to have unofficially been given up on. The app is estimated to have been used by only about 1.5 million of Singapore’s population of 5.7 million since it became available in April; given that most epidemiologists estimate that 60% to 80% of a country needs to adopt an app for it to be more effective than manual contact tracing, it is fair to say that the app has been a failure.
Public resistance to it was due in part to privacy concerns (even though it was anonymized), but likely even more due to an unwieldy design that was demanding on phones. The app needed to be active at all times with no other apps running in the background, and would stop working if the phone went into a sleep mode or the lock screen came up. In addition to hampering general use of the phone, this mandated constant use of the screen and the Bluetooth hardware which in turn meant that batteries depleted more rapidly. This was a particular problem during Covid-19 social distancing shutdowns when a number of public power outlets were no longer available for an extended period.
Though the contact tracing wearable has not assuaged the privacy concerns, it would solve the usability and convenience issue. It is still unclear what the contact tracing wearable will look like, but it is being referred to as the TraceTogether Token and has been described by the government as a “dongle.” Concept art displayed in a Straits Times video report depicts it as an actual token, similar in design to a casino chip. It appears that it will be self-powered and will not be required to connect to phones or devices to function.
The government has contracted with electronics manufacturer PCI to manufacture an initial run of 300,000 of these contact tracing wearables, which are slated to be rolled out sometime in late June. The first batch will be prioritized to citizens who do not have phones capable of running the TraceTogether app. Vivian Balakrishnan, minister in charge of the Smart Nation initiative and minister for foreign affairs, has said that the distribution process will be similar to the way in which the government issued cloth masks in recent months (primarily via community centers and public vending machines). Dr. Balakrishnan also publicly stated that there would be no GPS chip or mobile connectivity in the tokens.
The data collected will only be available to the Ministry of Health and will be secured with a variety of privacy and safety measures, such as tokenization to replace individual identity information with random values and digital watermarks on collected files to track down any leaks that might occur. Contact logs of those who have tested positive will also only be retained for 25 days.
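To make those data-handling rules a little more concrete, here is a hypothetical Python sketch of a local contact log that stores random tokens instead of identities and discards entries older than 25 days. The structure and field names are assumptions for illustration only; they are not taken from the actual TraceTogether Token design.

import secrets
import time

RETENTION_SECONDS = 25 * 24 * 60 * 60   # contact logs kept for 25 days

contact_log = []    # entries look like {"token": "...", "seen_at": 1623456789.0}
token_table = {}    # maps a device identifier to a random, meaningless token

def record_contact(device_id):
    """Log a nearby device under a random token rather than its real identity."""
    token = token_table.setdefault(device_id, secrets.token_hex(8))
    contact_log.append({"token": token, "seen_at": time.time()})

def prune_old_entries(now=None):
    """Drop anything older than the 25-day retention window."""
    cutoff = (now if now is not None else time.time()) - RETENTION_SECONDS
    contact_log[:] = [e for e in contact_log if e["seen_at"] >= cutoff]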
Are public worries realistic?
One of the main sources of public outcry that the petition cites is the ability of the government to “switch on” some sort of tracking device after the contact tracing wearables have been distributed. However, unless the government is not telling the truth about their capability, this shouldn’t be possible. The only communication hardware on the electronic tags is Bluetooth, which transmits radio signals over a very short range and is not picked up by cellular towers.
A more realistic concern is that private businesses might require patrons to display a token (or contact tracing device) in order to enter. However, the government’s similar SafeEntry program is already in operation and present at about 16,000 locations in the country.
Probably the most realistic concern is that these small tokens will be much easier to lose than a phone, though it does not seem that personal information could be extracted from one if this happened. The TraceTogether app will continue to be supported, and citizens are encouraged to continue using it instead if they have any such reservations about the new contact tracing wearables.
|
<urn:uuid:6c9cf629-4214-47bc-bd48-339f365e6156>
|
CC-MAIN-2022-40
|
https://www.cpomagazine.com/data-privacy/in-response-to-technical-and-adoption-issues-with-tracetogether-app-singapore-makes-a-second-effort-with-an-always-offline-contact-tracing-wearable/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00224.warc.gz
|
en
| 0.964094 | 1,125 | 2.625 | 3 |
5 Deceptively Simple Ways to Prevent Brute-Force Attacks
Brute-force attacks are something the NCSD (National Cyber Security Division) works hard to educate the public about, in part because they can be a leading cause of stress for technology solution providers. They are one of the most common forms of attack for hackers looking to get their hands on passwords, credentials, and other sensitive data.
Automated software allows would-be data thieves to make consecutive guesses and, through trial and error, crack passwords and other encrypted data. We're not talking guesses by the dozen, or even by the hundreds; software like this can produce millions of guesses in seconds.
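To put rough numbers on that, here is a quick back-of-envelope Python sketch. The guess rate of one million attempts per second is an assumption for illustration; real attack speeds vary enormously with hardware and how the passwords are stored.

GUESSES_PER_SECOND = 1_000_000
ALPHABET = 26 + 26 + 10          # lowercase + uppercase + digits = 62 characters

for length in (6, 8, 10, 12):
    keyspace = ALPHABET ** length
    years = keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"{length:>2} characters: {keyspace:.2e} combinations "
          f"(~{years:,.1f} years to try them all)")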
Fortunately, there are 5 simple—and incredibly effective—steps you and your team can take to combat this kind of cyberthreat. Here are the five steps, care of ConnectWise.
|
<urn:uuid:446166f5-6851-4e2d-b1c3-2be3143dcb50>
|
CC-MAIN-2022-40
|
https://www.channele2e.com/influencers/5-deceptively-simple-ways-prevent-brute-force-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00224.warc.gz
|
en
| 0.925648 | 178 | 2.671875 | 3 |
When it comes to your technology resources, you’re security conscious and understand the need for a password. Even though you want your password to be secure, it can be hard to keep track of all the password requirements these days. One of the simplest things you can do to help ensure password security is to make sure that you’re using at least eight characters.
Different systems may or may not allow you to use symbols (like *, %, #, etc), but most modern security will allow you to enter eight (or more) characters. By doing so, you greatly reduce the chances that password cracking programs (aka, “brute force” attacks) will be able to “guess” your password.
Each additional character exponentially increases the complexity that must be overcome for a brute force attack to succeed. There is a technical explanation for why this is, but understanding it is beyond the interest and/or comprehension of most users. Since we all have enough to try to remember when it comes to computers, I’ll spare any further explanation on that in this article.
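For the curious, a short sketch makes the scale obvious. It assumes a 95-character alphabet (roughly the printable characters on a US keyboard); the exact figure depends on which characters a given system actually accepts.

SYMBOLS = 95   # assumed printable-character alphabet

previous = None
for length in range(6, 13):
    combos = SYMBOLS ** length
    note = f"  ({combos // previous}x more than {length - 1} characters)" if previous else ""
    print(f"{length} characters: {combos:.2e} possibilities{note}")
    previous = combos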
For those who are interested in understanding the ‘mechanics’ behind the concept, Wikipedia has some good information about it (http://en.wikipedia.org/wiki/Password_strength). For the rest of us, just make sure your password has eight or more characters and leave the in-depth understanding to IT professionals like those helping you every day at BVA!
|
<urn:uuid:0d73aa56-4e67-4c3e-b9ab-6e991f47a63c>
|
CC-MAIN-2022-40
|
https://www.bvainc.com/2014/07/18/passwords-intricate-better/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00424.warc.gz
|
en
| 0.927908 | 303 | 3.359375 | 3 |
According to the U.S. General Services Administration (GSA), less than half the power used by a typical data center powers its IT equipment. The other half is used to support infrastructure that includes cooling systems, uninterruptible power supply inefficiencies, power distribution losses, and lighting.
The complex technology housed in data centers and the continuous uptime demands on the facilities make reliability of heating, ventilation, and air conditioning (HVAC) systems critical. Choosing energy efficient and sustainable cooling technologies that address rising energy costs and heat load densities is a priority for many data centers.
There are many HVAC solutions available that provide greater energy efficiency and sustainability, while also offering a high level of reliability. Keeping in mind a few key factors and best practices when designing and selecting a chilled water cooling solution will help ensure reliable and efficient performance.
KEY ISSUES FOR DATA CENTERS
Several key issues and trends impact mission critical cooling system selection and design. As server technology advances, more powerful equipment is being used in data centers. This often results in more heat being produced per square foot, causing the heat load density in server rooms to increase. Over the last 10 years electrical and mechanical system design and operating procedures have evolved, which has allowed for the HVAC industry to provide higher system reliability.
As data center equipment evolves and is able to operate at higher temperatures, some data centers are increasing building temperatures in response to changing industry guidelines. As part of ongoing data center efficiency improvements, the GSA recommended raising data center temperatures from an average of 72°F to 80°F. Based on industry best practices, the GSA estimated it could save 4% to 5% in energy costs for every 1° increase in server inlet temperature, according to the association’s 2011 Data Center Consolidation Plan.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has also raised the upper end of its recommended operating temperature range for data center servers, from 77°F to 80.6°F.
Finding ways to improve efficiency also helps data centers achieve a lower power usage effectiveness (PUE, also referred to as power usage efficiency). PUE is an industry metric that measures how effectively a data center uses input power. It is determined by dividing the total amount of facility energy by the amount of energy used to run the IT equipment within it. The larger the number, the less efficient the energy utilization — with overall efficiency improving as the number decreases toward 1.0.
For example, according to the GSA, improving PUE from 2.0 to 1.6 for a data center with a 2.5 MW IT load yields a 20% energy savings — or more than $800,000 in annual savings at $0.08/kilowatt hour. Considering these potential savings, data centers want equipment and solutions that will result in the lowest possible PUE.
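The arithmetic behind that GSA example can be sketched in a few lines of Python. The figures below land in the same ballpark as the numbers quoted above; the GSA's exact savings figure likely reflects additional assumptions about utilization and rates.

IT_LOAD_KW = 2500            # 2.5 MW IT load
COST_PER_KWH = 0.08          # $0.08 per kilowatt hour
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(pue):
    facility_kw = IT_LOAD_KW * pue       # PUE = total facility power / IT power
    return facility_kw * HOURS_PER_YEAR * COST_PER_KWH

before, after = annual_energy_cost(2.0), annual_energy_cost(1.6)
print(f"PUE 2.0: ${before:,.0f} per year")
print(f"PUE 1.6: ${after:,.0f} per year")
print(f"Savings: ${before - after:,.0f} per year ({(before - after) / before:.0%})")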
WAYS TO IMPROVE EFFICIENCY
There are various approaches to data center cooling. Keeping key critical factors in mind when selecting and designing cooling systems will help improve overall data center efficiency.
One option to consider is to include an airside economizer in the air handler design. In various climate regions there are times of the year when a system can use outdoor conditions to cool the process using the standard cooling components to distribute its cooling effect. The most prevalent technique is an air economizer, which reduces or eliminates mechanical cooling for much of the year in many climates.
In climates or building applications where an air economizer is not practical, a waterside economizer can reduce compressor run hours and energy use. With a waterside economizer, the supply air of a cooling system is cooled indirectly with water that is itself cooled by heat or mass transfer to the environment without the use of mechanical cooling.
In addition, some equipment is designed to react quickly and adapt to system changes or to operate at broader temperature ranges. These factors are important when considering chiller technology for data center applications.
IMPORTANT CONSIDERATIONS IN CHILLER SYSTEM DESIGN
When selecting and designing a chiller system, reliability and energy efficiency are the two of the most important factors. This is especially true in tier three and four data centers where 100% reliability is critical.
A simple chiller design contributes to greater reliability and less risk of unplanned downtime. Chiller plant efficiency is improved by using lower flow rates (which equate to a higher delta T) and chilled water setpoints that range between 60°F and 65°F.
Simplicity in chiller design also helps save time and money on maintenance. Direct-drive compressor technology is an example of simplified design that can provide these benefits. With this technology, the motor is directly coupled to the centrifugal compressor, resulting in only one moving part. Direct-drive technology also offers higher efficiency because it is optimized for variable speed operation.
Rapid restart technology is another key advancement that contributes to chiller performance. After a power interruption, a chiller designed with rapid restart can quickly regain full operational capacity. This allows mission critical applications to continue with minimal interruption.
Design and modeling tools are available to help determine which technology and solutions are best suited for data center applications. Energy modeling software and CAD software help predict outcomes and reliability — aiding in the chiller selection process.
BENEFITS OF AIR COOLED OPTIONS
In climates where conditions are favorable, the least expensive way to cool a data center is with outside air that doesn’t require mechanical cooling. However, this is not always feasible in warmer climates where air pollution or other factors are a concern or where outside air is not readily available.
In these situations, compressor technology may be the preferred solution. For data center applications that use compressor technology, choosing an air cooled chiller provides benefits for energy efficiency while still offering the high reliability necessary for the industry.
On an annual basis, air cooled chillers can be equally or even more efficient than a water cooled system depending on the geographic climate. In most climates air cooled chillers are more efficient at night and in the non-summer months because the outside air cools the condenser. On days when the temperatures are hotter, water cooled systems are more efficient because the compressor energy is reduced. Because air cooled chillers do not have a condenser water pump, the difference in total compressor and heat-of-rejection energy compared to a water cooled system makes air cooled chillers more efficient when the ambient temperatures are below 68°F.
Additionally, air cooled systems do not require potable water, water treatment, or sewage costs, since no water evaporation is being used for cooling. Water cooled systems may require a 72-hr supply of water stored onsite for operation and regular water treatment. The increased number of moving parts and control points all introduce a reliability factor into the system.
The resulting total life cycle cost benefits may favor air cooled systems in some applications.
Efficiency improvements can also result from incorporating advanced system controls. Web-enabled, scalable building automation systems (BAS) help optimize energy efficiency and deliver reliable and sustainable system performance over the life of the data center.
Installing properly sequenced controls is not the end of optimization. Regular and ongoing monitoring of equipment is the key to keeping a cooling system maintained and performing as designed.
A few manufacturers have proven unit control algorithms built into the chillers to ensure operation under distressed conditions. These unit operations are designed to adapt during a stressful condition to provide as much chilling capacity as possible while also producing external alarm notification. This adaptive control technique is preferred over total unit shutdown or failure that could occur when controls do not use adaptive technology.
In addition, consider a manufacturer or service provider that offers secure, 24/7 system monitoring, tech support, and response capabilities to help keep mission critical systems optimized.
Selecting the most appropriate chiller system — and considering technologies like rapid restart, adaptive controls, and waterside economizers — helps optimize data center operation and improve efficiency. In addition, ongoing system monitoring by product experts can identify operating trends that help uncover additional energy optimization opportunities or characteristics of impending issues to ensure continued efficient operation with maximum uptime.
|
<urn:uuid:86a3f761-55ea-4ad1-a01c-d200f82be776>
|
CC-MAIN-2022-40
|
https://www.missioncriticalmagazine.com/articles/88128-data-center-cooling-keep-your-cool
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00424.warc.gz
|
en
| 0.920663 | 1,677 | 2.921875 | 3 |
Fully Qualified Domain Name (FQDN)
Learning Center | Glossary | Fully Qualified Domain Name (FQDN)
What is FQDN?
An FQDN is the most basic unit of the hierarchical, word-centric labeling system used to map memorable pieces of language on top of the Internet's primary resource identifier and addressing system. For example, it is much easier to remember an identifier such as Aviatrix.com or Salesforce.com rather than 188.8.131.52; the Domain Name System (DNS) is the service that maps these names to addresses.
Not many folks talk about domain names in terms of whether or not they are "Fully Qualified"; most network engineers just say domain name, but we shall explore for a moment. For a domain name to be fully qualified (let's say that you bought one, and let's call it VirtualRoutersAreTotallyCool.com), it needs to work. And by working, I mean that when you type the name into the browser, you should get an HTTP 200 response that returns the index.html of your web application. The short answer is that it must point to a resource the global DNS can reach: your domain name registrar informs an authoritative DNS server, and that server supplies the IP address of your web server.
The naming conventions of the FQDN that allow it to locate a resource are threefold: a top-level domain (com, net, org, etc.), a second-level domain (usually referred to as the domain name), and a dot notation separator (which actually represents the root domain of the entire internet). This is the minimum requirement for the DNS naming convention to operate. To create logical separations between different parts of digital assets or web application functions, sub-level domains (usually third-level domains) were instituted as another form of resource identifier within a given hosting filesystem. A record entry (such as a canonical name, or CNAME) is needed to route to an identifier like "login.aviatrix.com", but such a subdomain is not necessary for the DNS system to function.
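A short Python sketch shows both halves of the idea: splitting an FQDN into its labels and asking the system resolver to map it to an address. The name used is only an example; resolution will naturally succeed only for names that are actually registered and delegated.

import socket

fqdn = "login.aviatrix.com"

labels = fqdn.rstrip(".").split(".")      # the trailing dot is the (usually implied) root
print("top-level domain :", labels[-1])   # com
print("second-level     :", labels[-2])   # aviatrix
print("subdomains       :", labels[:-2])  # ['login']

try:
    addresses = {info[4][0] for info in socket.getaddrinfo(fqdn, 443)}
    print("resolves to      :", addresses)
except socket.gaierror:
    print("no DNS record found for", fqdn)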
With respect to public cloud services (AWS, Azure, Google) FQDN is often used when referring to filtering. Organizations – either for security reasons or to meet regulatory compliance – often like to inspect and control (egress) traffic leaving their VPC. For example, a business might allow a resource to communicate with another AWS service or with Salesforce.com, but not with other domains.
For the past 25 years, most of the world has been under a spell that makes them think that for the internet to work properly, a ‘www’ is required in the third level domain position to route packets and render HTML. This is simply not so. The term ‘www’ is simply an old Unix convention for a folder for web content that was considered to be temporary or transient. How it came about exactly, or who put it there are questions that are better suited to the likes of Linus Torvalds or Sir Tim Berners-Lee.
|
<urn:uuid:ab78ee35-3c23-4ca3-80ab-9cd08a7ac917>
|
CC-MAIN-2022-40
|
https://aviatrix.com/learn-center/glossary/fqdn/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00424.warc.gz
|
en
| 0.915748 | 648 | 3.09375 | 3 |
New Paradigm-Intelligent Manufacturing
Since the 20th century, manufacturing has evolved into a process that is far more sophisticated. Back in the day, large numbers of workers built products as they made their way along assembly lines on what are commonly called shop floors. It was only in the '60s that industrial robots were introduced in the manufacturing industry for routine, low-skill tasks like welding.
There have been several technological advancements in manufacturing processes since then. Earlier robots used to take a lot of time to perform a single task. However, the industry is already witnessing its next automation advancements, like the Internet of Things (IoT) and Artificial Intelligence (AI). These advancements help make production decisions quickly and in real time.
AI and Manufacturing
Artificial intelligence plays a vital role in the manufacturing industry. Self-driven vehicles are being increasingly used to facilitate fresh prospects in different material flow processes. The intelligence offered by these self-driven vehicles helps realize precise material handling.
Nonetheless, several process innovations, and foundational technologies are needed to incorporate artificial intelligence in the manufacturing. To form an intelligent creation engine, it requires linking of data from design teams, supply chains, quality control, and production lines.
How it works?
It is expected that robots having more intelligence than strength will form the pillars of the 4th industrial revolution. These intelligent robots are driven by the advanced innovations in the artificial intelligence, which makes them ideal for manufacturing processes carried out in highly automated environments. This has helped factories in becoming more organized, and perform and operate efficiently. How does artificial intelligence work? To explain this, let us take an example.
Suppose, there is a shortage of fuel nozzles. In such case, the software automatically raises a request to produce new fuel nozzles. Once the request is raised, the production process begins. The process involves a number of intelligent robots. These robots analyze and monitor the part at every stage with the help of sensors. This data is then fed to the AI and analytics software on the cloud. If the robots find a defect in the part at any stage, new part is ordered for the process.
If it is required to design a new part altogether, then this request is sent to the design team or teams involved. Designing, prototyping and testing the newly produced part takes a few hours, thus saving a lot of valuable time.
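A deliberately simplified sketch of that fuel-nozzle workflow is shown below. The function names, threshold, and "cloud" call are illustrative assumptions rather than any real factory API, but they capture the shortage-detect, inspect-at-every-stage, reorder-on-defect loop described above.

MIN_STOCK = 10   # assumed reorder threshold

def send_to_cloud_analytics(stage, measurement):
    # Stand-in for streaming sensor readings to the AI/analytics software on the cloud.
    print(f"stage {stage}: {measurement}")

def run_production_cycle(inventory, stage_sensors):
    """If fuel nozzles run short, build one and inspect it at every stage."""
    if inventory.get("fuel_nozzle", 0) >= MIN_STOCK:
        return   # no shortage, nothing to produce
    for stage, read_sensor in enumerate(stage_sensors, start=1):
        measurement = read_sensor("fuel_nozzle")
        send_to_cloud_analytics(stage, measurement)
        if measurement.get("defect"):
            print(f"Defect found at stage {stage}; ordering a new part.")
            break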
Manufacturing and Internet of Things (IoT)
As said before, technological advancements are the key to huge transformations in the manufacturing. However, today these advancements are not into physical automation, but intelligence.
Several manufacturers are able to manage their production process more efficiently, thanks to the Industrial Internet of Things (IIoT). It will also allow analysis of a large amount of mission-critical information to be automated, thus helping realize real-time decision making.
Artificial intelligence has helped robots to work on a vast variety of tasks semi-autonomously. Starting in a work cell, the robots that have built-in “smarts” draw from a cloud-based “lessons” database and information to:
· Identify the different parts and equipment contained in a work cell, take necessary actions and provide “auto-complete” suggestions. For example, identify a piece of equipment or a tool correctly and then use it appropriately.
· Make use of the pattern matching for suggesting error handling best practices.
· Use a database of curative suggestions, which is capable of helping the task designers. It would help the designers in finding an effective way to make amendments to a task or a work cell, when the fault is detected.
It is anticipated that similar to smart phones and several other IoT devices, which receive software updates, the robots will also achieve more functionalities and features. This will help them increase their abilities and optimize the production work cell level. However, this is just the beginning of development. In a few years, after seeing a few more advancements, robots will be capable of sharing insights and information. Thus, the overall performance, both factory-wide, as well as worldwide, would be improved with a capability to:
· Learn from self and others
· Correct self and others
· Collect the insights from the data gathered on the factory floor. This data is collected, analyzed, and shared from robots present in other locations.
The aforementioned points make it very clear that artificial intelligence and smart manufacturing are going to rule the future. The manufacturing revolution is expected to be led by Big Data and the new technologies that drive the intelligent production management systems.
|
<urn:uuid:4134add1-1633-48f9-9240-c59ef6f7db13>
|
CC-MAIN-2022-40
|
https://contact-center.ciotechoutlook.com/cxoinsight/new-paradigmintelligent-manufacturing-nid-3849-cid-54.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00424.warc.gz
|
en
| 0.951863 | 936 | 3.46875 | 3 |
To those who are well-versed in the world of technology, the term VoIP is one that has a lot of implications for the future of telecommunications. VoIP, which is an intriguing acronym for Voice over Internet Protocol, is a method for taking voice audio data signals and converting them into digital signals that can be transmitted over the Internet. Did you get all that? In layman’s terms, it is taking your phone line and turning it into an “internet” line so that phone calls can essentially be made online. It is a type of technology that almost sounds too good to be true doesn’t it? And yet it’s as true as the rotation of the Earth and eventually will become as much as part of our daily lives as that rotation.
The world’s traditional phone line system is slowly and quietly being turned on its ear through the gradual implementation of VoIP technology. As a matter of fact the FCC is in the midst of an extensive and very serious study of the potential implications that such technology would have on the economies of the world. But for now, people have begun taking full advantage of the availability of VoIP services starting with their cell phones. Apps like Viber and WhatsApp offer free downloads of their product onto consumer SmartPhones which allow for free text messaging and free voice calling to any other phone anywhere in the world that has the same app.
But this technology goes beyond just cell phones. It is one that is being implemented in the business world to an ever-increasing degree as more and more companies are seeing the advantages of VoIP technology and either incorporating it into their current telecommunications system or replacing their systems with it altogether. Small businesses in particular benefit tremendously from VoIP because while larger corporate entities can afford to pay the relative drops-in-the-bucket for a complex telecommunications infrastructure, small businesses are by nature always looking to cut costs wherever they can.
So what advantages does VoIP offer small business?
1) VoIP technology offers (aside from the obvious economic upside) the elimination of bulky phone lines and hardware. Every small business owner knows what a hassle it is to deal with the closet full of wires, cords, and power boxes that take up valuable space, not to mention the cost for maintenance, management, and support for all of that equipment.
2) VoIP also offers vastly superior customer support. Traditional phone lines are part of a system that is well over half a century old and the business model used to support these systems is not much farther ahead. While these companies are finding it more and more challenging to keep up with the pace of small business and its needs, VoIP vendors provide next-generation service and support that is cutting edge and which can be done remotely, cutting the response time for service calls significantly.
Like the rest of the world, small business really has nothing to lose and everything to gain by implementing VoIP technology into its communications plan. And unless the FCC finds something horribly detrimental about the technology, it looks as though public VoIP and small business VoIP usage trends will only continue to increase.
For more information about our VoIP solutions, contact us.
|
<urn:uuid:012607dd-cc47-40fb-bd13-5c4786653de5>
|
CC-MAIN-2022-40
|
https://www.infiniwiz.com/small-business-voip-future/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00424.warc.gz
|
en
| 0.962943 | 651 | 2.734375 | 3 |
What You Need To Know About Harsh Climates and SPDs
Surge protection starts with protecting your outdoor devices from damaging surges. When most people think about which devices need surge protection, they imagine a lightning strike hitting their security camera or light fixture and destroying it, perhaps even sending a massive surge along the network and damaging or destroying other essential devices. Though lightning strikes are a less common source of surges, considering the weather is a necessary part of choosing a surge protective device (SPD) to ensure the safety of your outdoor devices.
When purchasing surge protection, make sure that you check a reliable weather service to determine if your area has a higher incidence of lightning or a higher density of strikes; and choose a device appropriate to the likelihood of getting struck. If you live in an area with a higher chance of lightning strikes, you may require a more robust surge protective device that can handle larger surges without self-sacrificing.
In general, surge protection that services outdoor electronics should be stronger and more rugged than surge protection for indoor devices—not only because of the possibility of lightning strikes, but because of the general wear and tear from exposure. Your surge protection needs to be robust enough not just to survive a surge, but also to resist damage from rain, wind, dirt, winter storms and other environmental concerns that may create wear and tear on your systems.
Some environmental concerns that you may need to take into consideration are temperature, wind, rain, and dust. While most surge protective devices are designed to function in a wide range of temperatures, ingress should be your first concern when ensuring that your surge protector can endure harsher environments. Should water or dust make its way into the device, it may cause corrosion or humidity issues that can cause the device to fail, putting your electronics in danger. The National Electrical Manufacturers Association has an enclosure rating that creates an easily identifiable standard for the protection of electronics when completely and properly installed. A NEMA rating of 3R or higher is sufficient to protect your surge protectors, but look for a rating of NEMA 4X or higher for additional reliability.
When looking to install surge protection to keep your outdoor devices safe, ensure that your surge protectors also have adequate protection from environmental concerns. SPDs are designed to help your other devices survive lightning strikes, so ensure that you choose a device robust enough for the amount of lightning your area is expected to see. In addition, protecting your devices from ingress by checking NEMA ratings and choosing an adequately rated device will prevent environmental damage and help keep your electronics running in the event of a damaging surge event. While surge protection is designed to protect other devices, ensure that your SPDs are also protected from the environment.
Learn more about surge protection in our comprehensive application guide.
|
<urn:uuid:a9e09af7-63d1-40bc-94bb-1ee87f8407ba>
|
CC-MAIN-2022-40
|
https://www.diteksurgeprotection.com/blog/what-you-need-to-know-about-harsh-climates-and-spds
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00424.warc.gz
|
en
| 0.932231 | 560 | 2.765625 | 3 |
There are quite a few services which may use both the TCP and UDP protocols while communicating. The primary reason is based on the type of request/response which needs to be furnished. Before we drill further into the detail of the protocol types used in DNS, let's have a quick run through the TCP and UDP protocols.
What is TCP?
TCP is a connection-oriented protocol where the devices in communication must establish a connection before they can start data transmission. The same holds true for termination of the connection. Notably, TCP is reliable and guarantees delivery of data to the destination device.
What is UDP?
UDP is a connectionless protocol where there is no establishment of connection before data transmission. Further, there is no overhead related to opening, maintaining and terminating a connection. A key aspect of UDP is that delivery of data to the destination is not guaranteed.
While choosing between the UDP and TCP protocols for any application, another key aspect to note is that UDP messages are kept small; in classic DNS, a message carried over UDP cannot be greater than 512 bytes. Hence, any exchange where the data to be transferred is greater than 512 bytes will require the TCP protocol.
Example Scenario: When does DNS use TCP or UDP?
Let's take the scenario of the UDP protocol requirement in DNS – a client queries a DNS server for a record. Even if the DNS server's response is lost or becomes corrupt, it's not a major challenge, since the client can simply ask again. Considering such a use case, it is rational to use UDP when communicating with DNS for translation of a domain name.
So, when does DNS use TCP? In order to maintain a consistent DNS database between DNS servers. A transfer of DNS records (zone transfer) between primary and secondary DNS servers is required, and this uses the TCP protocol, since TCP's reliability makes sure zone data stays consistent across DNS servers. In addition, when a client doesn't receive a response from DNS, it re-transmits the query using TCP after an interval of 3-5 seconds.
Considering the above scenarios, it becomes essential that DNS server operators/providers provide DNS service over both UDP and TCP. The same understanding holds true for network operators. We may encounter operational challenges when the TCP protocol is blocked for DNS communication.
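A small sketch of this behaviour, using the third-party dnspython package (pip install dnspython), is shown below. It queries over UDP first and falls back to TCP when the answer is truncated or the UDP query times out; the resolver address and domain are just examples.

import dns.exception
import dns.flags
import dns.message
import dns.query

def resolve(name, server="8.8.8.8"):
    query = dns.message.make_query(name, "A")
    try:
        response = dns.query.udp(query, server, timeout=3)
        if response.flags & dns.flags.TC:                    # truncated: answer too big for UDP
            response = dns.query.tcp(query, server, timeout=5)
    except dns.exception.Timeout:
        response = dns.query.tcp(query, server, timeout=5)   # no UDP answer, retry over TCP
    return [rrset.to_text() for rrset in response.answer]

print(resolve("networkinterview.com"))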
Are you preparing for your next interview?
Please check our e-store for e-book on DNS Interview Questions. All the e-books are in easy to understand PDF Format, explained with relevant Diagrams (where required) for better ease of understanding.
|
<urn:uuid:57aea1bc-38b3-4604-9dd3-27a43a9ec2f3>
|
CC-MAIN-2022-40
|
https://networkinterview.com/when-does-dns-use-tcp-or-udp/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00424.warc.gz
|
en
| 0.927099 | 516 | 3.65625 | 4 |
Routers can be configured as Frame Relay switches, mostly for use in service provider or lab environments. Let's see how you can configure a frame-relay switch to use in your studies.
On a Frame Relay switch, frames from a Frame Relay PVC arriving on an incoming interface are switched to a Frame Relay PVC on an outgoing interface. The switching paths taken by the frames are based on the static Frame Relay route table.
In the following example, two routers (Router1 & Router2) are connected via a router acting as a dedicated frame-relay switch (FRSW).
- Enable frame relay switching on FRSW using the global command frame-relay switching.
- Configure the frame-relay switch interfaces to act as a DCE interface using the command frame-relay intf-type.
- Configure a frame relay PVC on FRSW to switch packets coming from DLCI 100 on Serial1/0 to DLCI 200 on Serial1/1.
- Configure typical frame relay on both routers (Router1 & Router2).
- Verify and check the operation.
Frame-relay switch configuration:
FRSW(config)#frame-relay switching
FRSW(config)#interface Serial1/0
FRSW(config-if)#description Connection to Router1
FRSW(config-if)#encapsulation frame-relay
FRSW(config-if)#frame-relay intf-type dce   !-- act as a switch (DCE) connected to the router
FRSW(config-if)#frame-relay route 100 interface Serial1/1 200   !-- static PVC switching route; mirrored on Serial1/1 (DLCI 200 -> Serial1/0 DLCI 100)
Router1 & Router2 configuration:
This is a basic frame relay configuration using main serial interfaces and all defaults.
Router1(config-if)#encapsulation frame-relay
Router1(config-if)#ip address 126.96.36.199 255.255.255.0
Verification and troubleshooting:
The following show commands were taken from the frame relay switch “FRSW” and Router1
FRSW#sh frame-relay route
Input Intf Input Dlci Output Intf Output Dlci Status
Serial1/0 100 Serial1/1 200 active
Serial1/1 200 Serial1/0 100 active
FRSW#sh frame-relay pvc | in DLCI
Verification from Router1 point of view “same is done on Router2”
Type escape sequence to abort.
Router1#show frame-relay map
Note: L2 to L3 resolution is accomplished dynamically using inverse arp.
|
<urn:uuid:fe080ca0-97e8-4e29-9c1b-77482ee7c02a>
|
CC-MAIN-2022-40
|
https://www.networkers-online.com/blog/2008/07/how-to-configure-frame-relay-switching/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00424.warc.gz
|
en
| 0.707223 | 639 | 3.015625 | 3 |
Linux might sound scary for first-time Linux users, but actually, it isn't. Linux is a family of open-source, Unix-like operating systems based on the Linux kernel. These operating systems are called Linux distributions, such as Fedora, Debian, Ubuntu, and Mint.
Since its inception in 1991, Linux has garnered popularity for being open-source. People can modify and redistribute Linux under their own brand. When using a Linux OS, you need a shell to access the services provided. Also, it’s recommended to run your Linux OS through a CLI or command-line interface. CLI makes time-consuming processes quicker.
This article presents a guide to 7 important Linux commands for every Linux user to know. So, let’s begin.
cat is the shortened form of “concatenate”. It’s a frequently used multi-purpose Linux command. This command is used to create, display, and copy a file content on the standard output.
cat [OPTION]... [FILE]...
To create a file, type:
cat > <file name>   # then type the file content
To save the file created, press Ctrl+D. And to display the file content, execute:
cat <file name>
The cd command is used to navigate through the directories and files in Linux. It needs either the entire path or the directory name depending on the current directory.
cd [Options] [Directory]
Suppose you’re in /home/username/Documents. You want to navigate to a subdirectory of Documents which is Photos. To do that, execute:
To move to an entirely different directory, type cd and then the directory's absolute path.
cd /home/username/Movies
The above command will switch to /home/username/Movies. Apart from this, the commands, cd.., cd, and cd- are used to move one directory up, to go to the home folder, and to go to the previous directory respectively.
Reminder: Linux’s shell is case-sensitive. So, make sure you type the name’s directory as it is.
The echo command displays a line of text or string passed as an argument. It’s used for the purpose of debugging shell programs in the Linux terminal.
echo [Option] [String]
Other examples of the echo command are:
echo "String": This displays the string within the quotes.
echo -e "Learn nBy nDoing": Here the ‘-e’ tag allows the echo command to understand the backslash escape sequences in the argument.
sudo stands for “SuperUser Do”. The sudo command helps you perform tasks that require root or administrative privileges.
Reminder: It’s not advisable to use this command daily because an error might occur if you did something wrong.
The sudo command can be used with -h, -V, -v, -l, or -k options used to help, version, validate, list, or kill respectively.
Another example is, suppose you want to edit viz.alsa-base.conf file that needs root privileges. For this the command would be:
sudo nano alsa-base.conf
To enter the root command-line, type:
sudo su
Then enter your user password.
After using Linux for some time, you’ll notice that it’s pretty easy to run hundreds of commands every day. The history command shows all the previously used commands within the bash terminal. With history, you can review the commands you have entered earlier.
Now try running history and check all the Linux commands you have entered so far.
The ping command helps check if your connection to a server is well established. Ping is a computer administration software utility that checks the host reachability on an IP (Internet Protocol).
ping [option] [hostname] or [IP address]
Suppose you want to test if you can connect to the Google server and come back. For this, simply type:
ping google.com
If the above command pings the Google server, you can be sure that the internet connection is fine.
Reminder: Use Ctrl+C to stop pinging. Otherwise, it will continue sending packets.
The locate command helps to search a file by its name. Its functions are very similar to the find command. The only difference is that, the locate command searches the file within the database; whereas, the find searches it in the file system. Also, locate works faster than find. Keep your database updated to apply the locate command on it.
locate <file name>
In this article, you have learned about 7 important Linux commands. Hope my article helps you execute your tasks quickly and efficiently.
|
<urn:uuid:decd1c7c-0732-406a-9710-b55e42dc0091>
|
CC-MAIN-2022-40
|
http://dztechno.com/7-important-linux-commands-for-every-linux-user/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00424.warc.gz
|
en
| 0.848756 | 1,054 | 3.265625 | 3 |
SIP 101 – What’s it for?
SIP is a term that has lost its true technical meaning in the hands of many non-technical writers. Most of the time, this doesn’t matter. Who cares if someone says “SIP carries voice” in the same way that someone talking about the iconic sight on the London skyline, “Big Ben”, intends to reference the clock tower and not the bell. Be still, learned scholars; we know what they meant!
However, I felt it might be helpful to empower the non-technical among us by demystifying some of the technical terms and uncover their true meanings.
Let’s take the biggest offender first – Session Initiation Protocol or SIP for short.
This term, often referred to as SIP Trunking, is sometimes used quite generically to refer to the transmission of voice communication over IP-based networks (like the Internet) or VoIP. VoIP, however, should be used as the more generic, non-technical term because SIP is something quite specific.
The clue is in its full name. SIP can be considered a set of instructions two parties exchange to route (find a path between the parties) and maintain communication. It’s often called signalling, which negotiates how the audio or video data will be communicated, rather than the stream itself.
Those who know me appreciate that I love analogies. I’ve always used one for SIP that is even more topical in our geopolitically fractious world – the international meeting. Let’s consider two trade delegations from different countries, each with its distinct language and cultural traditions.
They agree on a protocol so that communication can be as successful as possible. The parties might set rules such as the language and currencies used, the number of participants from each state, dress code, food and drink served, and even what NOT to mention. This is most of what SIP does: rules are negotiated and agreed to set a clear path for audio (or video) transmission. Much like the diplomatic analogy, the negotiation protocol is separate from the actual content of the discussion.
Unlike the world of international politics, this is done in fractions of a second and unnoticed by the user, so the conflation of SIP, VoIP, RTP, Media and other terms is perhaps excusable. To help clear up some of the confusion, I’ve provided some of those definitions below.
This is a hangover from the old world of telecoms, where “trunk” referred to a bundle of phone lines used by a PBX (more on that later) or collection of telephones shared with its users. Today, a “SIP Trunk” is often bought when connecting a PBX to a VoIP provider to replace traditional ISDN or Analogue lines. It usually allows several “channels”, which are the total number of concurrent calls allowed to take place between the PBX and the outside world.
SIP URI (Session Initiation Protocol Uniform Resource Identifier)
Much like in the traditional world of telephony, where you have a telephone number that identifies another person or service you want to call, the SIP URI is an identifier that allows one user to contact another. You can think of it as a bit like a “SIP phone number”. It often looks a bit like an email address ([email protected]). Those with skills and control over their IT systems often make their SIP URI, and Email address appear to be the same, although they do very different jobs.
From a typical user perspective, this is an endpoint on a network, that is to say, usually a physical or sometimes software-based phone that sends and receives SIP messages. There are two potential functions of a “User Agent”. Firstly, the UAC (User Agent Client) sends SIP requests, such as a request to accept a call. Secondly, the UAS (User Agent Server) receives such requests and returns a reply, often accepting, rejecting, or setting parameters for a call.
When parties agree to participate in a VoIP call, they set parameters for the call, such as quality, source and destination IDs or any restrictions. These attributes are usually maintained for the following call unless updated by either side. The information about call parameters is retained as a session; that is, an agreement about a set of attributes held over some time (a call).
It would waste bandwidth and processing power if every packet of Real-Time Media (more on this later) required a complete set of call parameters to be continuously retransmitted. Instead, a “session” is given a Call ID for the overall call period. Each party involved stores the complete session attributes locally, only referring to the session by a shortened identifier after the initial agreement. This makes ongoing communication faster and more efficient.
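To make the session idea concrete, here is a stripped-down Python example of what a SIP INVITE looks like on the wire, followed by pulling the Call-ID back out of it. The header values are modelled on the illustrative Alice-and-Bob examples in the SIP specification, not on a real call.

invite = "\r\n".join([
    "INVITE sip:bob@biloxi.example.com SIP/2.0",
    "Via: SIP/2.0/UDP pc33.atlanta.example.com;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@atlanta.example.com>;tag=1928301774",
    "To: Bob <sip:bob@biloxi.example.com>",
    "Call-ID: a84b4c76e66710@pc33.atlanta.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@pc33.atlanta.example.com>",
    "Content-Length: 0",
    "",
    "",
])

# Parse the headers back out and grab the session identifier (Call-ID).
headers = dict(line.split(": ", 1) for line in invite.split("\r\n") if ": " in line)
print(headers["Call-ID"])   # a84b4c76e66710@pc33.atlanta.example.com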
PBX (Private Branch eXchange)
PBX usually refers to the telephone system used, primarily by businesses or other large organisations, to manage internal and external telephone calls. Historically, vast amounts of cabling connected internal telephone handsets to this system, which in turn managed connections to the outside world via phone lines provided by telecoms providers.
Today the concept is the same, but externally, multiple traditional phone lines have been superseded by Internet connections and SIP Trunks. Internally, it’s increasingly common for Wireless transmission to be deployed to connect phones and computers.
The vPBX (or virtual PBX) allows the traditional power-hungry appliances often found in basements to be replaced by smaller computer servers running multiple applications. It’s even possible today to move PBX Systems into public and private clouds entirely, thanks to modern VoIP and Telecoms providers.
RTP (Real-time Transport Protocol)
The digital representation of voice and video is transmitted across networks in RTP “streams”. The packets of data in these streams need to be transmitted and received extremely quickly due to the nature of human communication. We all know how jarring it can be when making international phone calls. Even 1-2 seconds of delay can lead to confusion and a poor general experience. Of course, email or text-based chat doesn’t require almost instant transmission due to how our brains work, so voice and video needed a special protocol.
RTP is typically used along with SIP, where the latter sets the rules discussed earlier and makes/cancels a call request, but the former carries the actual voice and video data involved from each party in the call.
In our context, this term is relatively generic. It refers to the communication content between parties, such as the voice or video. It is often used to differentiate an area of technology being discussed, especially in troubleshooting situations. For example, an engineer might suggest that a problem is either related to signalling or media, that is to say, SIP or RTP, respectively, to follow the examples above.
Codec is a combination of the words coder/decoder. For voice or video to be transmitted across computer networks, the raw audio or visuals need to be digitised – converting natural analogue to digital. This process requires digital encoding on the side making a transmission, for example, from a microphone’s electrical signals, followed by decoding on the side receiving the call to be converted back into electrical impulses and ultimately vibrational sound through a speaker.
When setting up a VoIP call, a part of the SIP negotiation agrees on the quality and bandwidth implemented during the call. The quality and bandwidth used are dictated by the “Codec” choice. Simply put, some Codecs are better at different things, be that efficient use of network bandwidth, higher quality or a balance of both. It’s worth remembering that developers often license their Codec software, so the hardware or software provider often pays for this when implementing the technology.
RFC 3261
This one sounds worse than it is, although it is thankfully used sparingly in conversation. Simply put, it is the reference number for the complete technical definition of what SIP is, in the form of a document written for the IETF (Internet Engineering Task Force). This body defines the standards that make most of our modern technologies work today. RFC means "Request for Comments". This prefix suggests a working document that is occasionally updated by the skilled engineers involved in this prestigious organisation.
I hope this post has dismissed many misunderstandings about SIP and has gone some way to improve understanding of the opaque terminology found within the technology. Of course, there is much more to SIP and VoIP in general, but I’ll leave those articles for our experienced engineers in the future.
|
<urn:uuid:507ca1e4-2ae4-49ec-8bc1-f87799f9ab85>
|
CC-MAIN-2022-40
|
https://ns1.netaxis.be/2021/08/04/sip-101-whats-it-for/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00624.warc.gz
|
en
| 0.935898 | 1,831 | 2.734375 | 3 |
The average office worker receives over 120 emails per day.
So much of our personal and professional communication these days is online, and you’ll want to make sure you’re safe while accessing your email account.
Hackers have become increasingly savvy in recent years, and their attempts to access your information have become more sophisticated and covert.
So can opening an email really get you hacked? Here’s what you need to know.
Can Opening An Email Get You Hacked?
Yes. There are some types of emails that can cause damage immediately upon opening, but if you know what to look for, you’ll usually be able to avoid them.
This typically happens when an email client allows scripting, which lets the hacker insert a virus or malware directly into the email.
What puts me at risk of being hacked?
Opening Or Downloading Attachments
The thing that puts you at the biggest risk of being hacked is opening an attachment in an email message. Hackers can hide viruses, ransomware, and other types of malware in these pieces of media. This malware can damage your systems and even compromise sensitive information like your passwords, bank account information, location, and more. Keep in mind that images are also attachments and can contain malware.
Clicking A Link
Clicking a link in an email from a hacker can also have serious consequences. These links can take you to a website that results in an involuntary malware download or some other form of digital tracking. These links can also take you to a site that mimics a popular social media platform or financial app. These sites will often trick you into providing your username and password for these platforms, which they can use to steal your identity.
Replying To The Email
You also put yourself at risk by replying to emails from people you don’t know or trust. Hackers have gotten incredibly creative with phishing scams in recent years, and it can sometimes be difficult to tell what is a scam and what is real. These hackers will often pose as a person or organization in need of support and manipulate you into providing your personal information.
More sophisticated hackers will often pose as a website or app that the recipient already interacts with on a regular basis. They then mislead you so you will provide your password, phone number, or other personal information.
How Common Are Email Attacks?
Email attacks are much more common than many people realize. In 2020 alone, roughly 75 percent of organizations received some kind of phishing email, although most were not successful.
And this is just the data on phishing attacks – 92 percent of all malware is delivered via email. Perpetrators have come up with a wide variety of strategies to gain access to online accounts.
In fact, cybercrime went up significantly during the COVID-19 pandemic. With so many people working from home, email communication became even more important than it was previously. Many cybercriminals started sending out emails posing as the CDC or WHO with malicious attachments or links about current case numbers, vaccine information, and other information that would be relevant to the recipients.
As email technology improves, hackers have learned to adapt quickly. It’s unlikely that email attacks will go away anytime soon, which means that we will need to be vigilant to protect your personal data.
Consequences of Email Attacks
The consequences of an email attack can be very serious and aren’t something to be ignored. Email hacks can quickly get out of control if you don’t take action right away.
The first thing that hackers will usually do is gain access to your email contacts. They will use this information to send scam emails to your contact list in an attempt to hack them as well. If you use the same passwords for your social media accounts as you do for your email, they may also gain access to these and start posting as you.
Through a suspicious email, the hacker can put malware on your computer or mobile device. This malware can track you and gain access to even more of your personal information. In particular, the malware will look for access to your bank account and credit cards, which they can use for identity theft.
When hacking corporate accounts, they will also look for access to secure business information, which they could then use as part of a ransomware attack. An attack on your work computer or phone isn’t just dangerous for you – it could also compromise the security of your entire company.
Types of Email Attacks
There are many different types of email attacks to watch out for. As technology has changed and security software has gotten better, cybercriminals have developed new strategies and new types of attacks. Here are some of the most common types of email attacks to watch out for.
Most people will receive phishing emails at some point, even if they aren’t successful.
In a phishing email, the hacker will pretend to be a reputable organization or person. They will then use this unearned trust to manipulate the recipient into willingly sharing their personal information.
When looking at a phishing email, there’s usually some sign that the sender isn’t who they claim to be – this could be an abnormal email address, uncharacteristic spelling mistakes, or links that seem out of place, for example.
However, phishing attacks have become increasingly sophisticated in recent years as hackers have learned to better mimic reputable organizations and come up with new strategies. This is why it’s so important to err on the side of caution with questionable emails.
There are some types of phishing attacks that you’re more likely to encounter at work. Spear phishing is a specific type of attack where the sender will pretend to be someone inside your organization and use personal details to gain your trust. If you are in a C-suite position, you may also experience whaling, in which the hacker specifically targets high-level individuals.
A questionable email with attachments may contain spyware. Spyware is often hidden in attachments that contain legitimate software downloads, or in photo or video attachments that look harmless.
Spyware puts trackers on your computer and sometimes in your web browser. These trackers monitor the websites you visit and the people you communicate with to find account passwords, credit card information, and more.
Adware is a specific type of malware that places unwanted ads on your computer or mobile device. In addition to being very irritating, these ads can install spyware to track your online activity. Adware is usually placed in spam emails. While many spam emails are harmless, they are a perfect vehicle for attacks because they contain so many links and photos.
Attachments in suspicious emails can also contain ransomware. Ransomware is a type of malware that will capture secure information from your computer and then demand money to give that information back. Cybercriminals often use ransomware to target organizations rather than individuals. This is because companies often have a large amount of secure customer information that is very valuable.
How To Avoid Getting Hacked Via Email
The best way to avoid getting hacked via email is just to use common sense and be cautious before opening any new email. When you get an email, check to make sure it is from someone you know and trust before clicking any links or opening any attachments. Here are some other things you can do to avoid potentially dangerous emails.
- Choose platforms with multifactor authentication. This requires you to confirm your identity on another device before you can log into your account. This extra layer of security is very effective in keeping hackers out.
- Use a strong and unique password that isn’t easy to guess. There are plenty of excellent password generator tools that can help you find a good one, such as LastPass.
- Double-check the spelling of the email sender’s name. If it’s a hacker sending the email, chances are there will be something slightly off about it.
- Double-check the spelling of the sender’s domain name. Hackers typically won’t have access to secure domain names, so they will choose something that is slightly off.
- Double-check the top-level domain (TLD) of the email. For example, a hacker might use .co rather than .com. (A simple automated version of these sender checks is sketched below.)
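As a minimal sketch of how the last three checks could be automated (the trusted-domain list and similarity threshold below are invented for illustration — real protection comes from your mail provider's SPF, DKIM and DMARC checks), a short Python script might flag look-alike sender domains:

import difflib

TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "yourbank.com"}   # example values only

def sender_domain_verdict(from_address: str) -> str:
    # Very rough heuristic, for illustration only.
    domain = from_address.rsplit("@", 1)[-1].lower().rstrip(">")
    if domain in TRUSTED_DOMAINS:
        return "known domain"
    for trusted in TRUSTED_DOMAINS:
        # High similarity to a trusted domain (e.g. 'paypa1.com' or 'paypal.co')
        # is a classic sign of a spoofed sender.
        if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8:
            return f"suspicious: looks like {trusted}"
    return "unknown domain - treat links and attachments with caution"

print(sender_domain_verdict("security@paypa1.com"))   # suspicious: looks like paypal.com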
How To Know If Your Email Has Been Hacked
You won’t necessarily notice if your email has been hacked right away. Here are some signs to watch out for.
- You can’t open your email account. Hackers will often set a new password and security questions to ensure you cannot get back into your account.
- Your contacts tell you about strange emails or social media messages coming from your account. You may also notice these strange emails in your outbox.
- Your computer is running slowly. If you’ve opened an email that contains some sort of malware, it could cause your computer to run slowly or act strangely.
Email Attacks: Final Thoughts
In general, just opening an email isn’t going to get you hacked. However, clicking on links or attachments in an email can be very dangerous for you and your company. While exercising caution can help you avoid most email attacks, it’s also very important to make sure you’re using a reliable online security system to protect you even further.
|
<urn:uuid:8fae4541-7efe-4053-8dbd-02b501201092>
|
CC-MAIN-2022-40
|
https://parachute.cloud/can-opening-email-get-you-hacked/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00624.warc.gz
|
en
| 0.950428 | 1,918 | 2.6875 | 3 |
History is a subject that possesses the potentialities of both a science and an art. It pursues the inquiry after truth; in that sense history is a science, resting on a scientific basis. It is also built on a narrative account of the past, and in that sense it is an art or a piece of literature. Physical and natural sciences are impersonal, impartial, and capable of experimentation. Absolute impartiality, by contrast, is impossible in history, because the historian is a narrator who looks at the past from a certain point of view. History cannot remain at the level of knowledge only. History is a social science and an art; in that lie its flexibility, its variety, and its excitement.
Let’s discuss a few major Historical events in Today’s History.
1346: Battle of Crecy
Battle of Crecy, a battle that resulted in a victory for the English in the first decade of the Hundred Years’ War against the French. The battle at Crecy shocked European leaders because a small but disciplined English force fighting on foot had overwhelmed the finest cavalry in Europe.
Edward III of England, having landed some 4,000 men-at-arms and 10,000 archers (longbowmen) on the Cotentin peninsula in mid-July 1346, had ravaged lower Normandy west of the Seine and gone as far south as Poissy, just outside Paris, when Philip VI of France, uncertain of the direction that Edward meant ultimately to take, advanced against him with some 12,000 men-at-arms and many other troops. Edward then turned sharply northeastward, crossing the Seine at Poissy and the Somme downstream from Abbeville, to take up a defensive position at Crecy-en-Ponthieu. There he posted dismounted men-at-arms in the center, with cavalry to their right (under his son Edward, the Black Prince) and to their left (under the earls of Arundel and Northampton) and with archers on both wings. Italian crossbowmen in Philip’s service began the assault on the English position, but they were routed by the archers and fell back into the path of the French cavalry’s first charge. More and more French cavalry came up, to make further thoughtless charges at the English center; but while the latter stood firm, the archers wheeled forward, and the successive detachments of horsemen were mowed down by arrow shots from both sides. Those few who managed to reach the English lines died in fierce fighting. Some 15 or 16 further attacks continued throughout the night, each one mown down by the English archers.
By the end of the day, Philip’s brother, Charles II of Alencon, and his allies King John of Bohemia and Louis II of Nevers, Count of Flanders, as well as 1,500 other knights and esquires were dead. Philip himself escaped with a wound from the disaster. Edward went on northward to besiege Calais.
1957: The Soviet Union announces its First Successful launch of an Intercontinental Ballistic Missile
On this day in history, Tass, the official Soviet news agency, announced that the USSR had successfully launched a “super long-distance intercontinental multistage ballistic rocket (ICBM).” Then, on October 4, the Soviets used the ICBM to blast into orbit the first artificial Earth satellite, a bundle of instruments weighing about 184 pounds called Sputnik, an acronym from a combination of words meaning “fellow-traveler of the Earth.”
Sputnik was followed a month later by Sputnik II, which weighed some 1,120 pounds and carried a dog named Laika.
Senator Lyndon B. Johnson of Texas, who was at his ranch in Texas the night of October 4 when he heard the news about Sputnik, called for a full inquiry into the state of national defense, opining that “soon [the Russians] will be dropping bombs on us from space like kids dropping rocks onto cars from freeway overpasses.” He wasn’t the only one whipping up public fear and paranoia for partisan advantage.
But Eisenhower had important national security reasons for keeping satellite and military information secret and did not defend his Administration as vigorously as he could have. In 1960, as “The Atlantic Magazine” reported, Kennedy campaigned hard against the Republican “negligence” that had allowed the Soviet Union to overtake the United States in producing missiles. But as early as July 1960, the then-Senator Kennedy had gotten intelligence briefings about Soviet missile capabilities. (Johnson received these as well.) The intelligence told Kennedy and Johnson that there was no gap and that the United States was not lagging behind the Soviet Union in deployed ballistic missiles but instead was significantly ahead. Once Kennedy won the election, he used this knowledge to negotiate with Moscow from a position of strength.
|
<urn:uuid:d689477a-11f6-4143-9971-e287e137d61f>
|
CC-MAIN-2022-40
|
https://areflect.com/2020/08/26/today-in-history-august-25/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00624.warc.gz
|
en
| 0.966108 | 1,025 | 3.265625 | 3 |
Anthraquinone, also known as anthracenedione or dioxoanthracene, is an aromatic organic compound. The name most commonly refers to one specific isomer, 9,10-anthraquinone, whose International Union of Pure and Applied Chemistry (IUPAC) name is anthracene-9,10-dione. Anthraquinones make up the largest group of quinones, which are found as purple or red pigments. The compound is produced by two main methods. The first involves oxidizing anthracene in the presence of chromic acid; the second involves condensing phthalic anhydride with benzene and dehydrating the product. Anthraquinone is a yellow crystalline solid. Although it is poorly soluble in water, it dissolves readily in hot organic solvents.
Anthraquinone is one of the most important intermediates used in the manufacture of hydrogen peroxide. It is also a key component in the production of various dyes and is used in pulp processing for papermaking. Anthraquinone is likewise used as a bird repellent on farms and as a gas generator in satellite balloons. In New Zealand, anthraquinone is mixed with lanolin and applied as a wool spray to protect sheep flocks from kea attacks. Anthraquinone may cause tumors in humans if ingested orally. It can also cause dermatitis, skin irritation, and allergic reactions when applied to the skin. It is found in a few organisms, including plants and insects, and contributes to their pigmentation.
|Analysis Period||2018 - 2026|
|Forecast Data||2019 - 2026|
|Segments Covered||By Type, By Application, and By Geography|
|Regional Scope||North America, Europe, Asia Pacific, Latin America, and Middle East & Africa|
|Key Companies Profiled||LANXESS, Wuhan Seraphic Technology Co., Ltd., Nanjing Aidelong Chemical Co.,Ltd., Jiangsu Yabang Dyestuffs Co., Ltd., Zibo Yongxin Export-Import Co., Ltd., and Others.|
||Market Trends, Drivers, Restraints, Competitive Analysis, Player Profiling, Regulation Analysis|
|Customization Scope||10 hrs of free customization and expert consultation|
Certain factors, such as growing demand for products that are more eco-friendly and less harmful to living organisms, are driving the industry forward, along with demand for drugs that help treat several diseases (diabetes, cancer, and so on), even though these drugs contain only small amounts of anthraquinone. Excessive doses of anthraquinone can harm aquatic animals and may cause long-term changes in aquatic ecosystems, negatively affecting the wider environment. It can also cause cancer and eye irritation. A high dose of anthraquinone can cause gastrointestinal disturbances and toxic symptoms such as nausea, vomiting, bloody diarrhea, dermatitis, dizziness, and acute stomach pain and cramping. In severe cases it can also lead to kidney damage. These are major aspects that can hamper the growth of the market.
The need for drugs to treat gastrointestinal disorders, chronic liver and kidney ailments, and type 2 diabetes will create opportunities for the anthraquinone business. Growth in the textile and fashion industries will drive the integrated dye industry, which will keep anthraquinone in demand. Furthermore, rising manufacture of drugs that treat diseases such as cancer and diabetes and that use anthraquinone as a key component will boost market growth in the near future. The market research study on “Anthraquinone Market (Type: 9,10-Anthraquinone, 1,2-Anthraquinone, 1,4-Anthraquinone, Others; Application: Paper Industry, Dyestuff, Chemical, Medicine, Other) – Global Industry Analysis, Market Size, Opportunities and Forecast, 2018 - 2025” offers detailed insights on the global anthraquinone market, its types, applications, and major geographic regions. The report covers basic development policies and layouts of technology development processes. Secondly, the report covers global anthraquinone market size and volume, and segments the market by type, application, and geography, along with information on companies operating in the market. The anthraquinone market analysis is provided for major regional markets including North America, Europe, and Asia Pacific, followed by major countries. For each region, the market size and volume for different segments are covered under the scope of the report. The players profiled in the report include LANXESS, Wuhan Seraphic Technology Co., Ltd., Nanjing Aidelong Chemical Co., Ltd., Jiangsu Yabang Dyestuffs Co., Ltd., Zibo Yongxin Export-Import Co., Ltd., and others.
The global Anthraquinone market is segmented as below:
Market By Type
Market By Application
Market By Geography
The anthraquinone market is segmented by type, application, and geography.
Three types are covered in this report: 9,10-anthraquinone, 1,2-anthraquinone, and 1,4-anthraquinone.
The paper industry, dyestuff, and chemical/medicine applications are expected to dominate the global anthraquinone market through 2026.
The major companies operating in the global anthraquinone market are LANXESS, Wuhan Seraphic Technology Co., Jiangsu Yabang Dyestuffs Co., and others.
Anthraquinone is used as a bird repellent on farms and as a gas generator in satellite balloons.
The anthraquinone market analysis is provided for major regional markets including North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa.
Anthraquinone is a yellow crystalline solid.
|
<urn:uuid:b621613d-1afb-4c13-990b-527008b43938>
|
CC-MAIN-2022-40
|
https://www.acumenresearchandconsulting.com/anthraquinone-market
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00624.warc.gz
|
en
| 0.889351 | 1,334 | 2.5625 | 3 |
How to Setup Safe Computing Environments
Our computers house a lot of our personal details and this makes it all the more essential to set up protective means and maintain a safe overall computing environment.
Here we will discuss some of the important steps that will ensure that your computer remains safe from numerous unsolicited activities going around in the online world.
- Keep the computer patched properly
An unpatched computer will have vulnerabilities that may be exploited by hackers. Keep the automatic updates on and have the machine patched.
You must update your web browser on a regular basis to be one step ahead of the spammers. By updating the browser, you can make sure that you are using the most updated version of the browser, equipped with the most up-to-date security patches.
Choose strong passwords
Use a combination of letters, special characters, and numbers when creating passwords, and create a unique password for each account. Also, use an effective password management system to keep track of your passwords. These systems are built into many browsers, such as Mozilla Firefox, and work across a number of devices such as tablets, computers, and smartphones.
It is also a good idea not to use real answers when setting password security questions, so that hackers cannot use them to get at your password.
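If you want to generate strong passwords yourself, a small Python sketch using the standard secrets module looks like this (the length and character set are only examples; a dedicated password manager is still the easier option):

import secrets
import string

def generate_password(length: int = 16) -> str:
    # Mix letters, digits and special characters using a cryptographically secure generator.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. q7!Rf#2mZp)X9v_L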
Make use of anti-virus software and spam filtering tools
Anti-virus software and spam filtering tools can scan every download and every email you receive for malware. If any malware is present, these tools will quarantine it and prevent you from opening it, which keeps your computer free from infection. Hence, when selecting your antivirus, always opt for a product that scans email contents.
Use secure connections and secure sites
When we are connected to the internet, our data is more vulnerable, while in transit. Use secure file transfer options as much as possible.
Also, it is advisable to always use secure sites – sites whose URL starts with https:// rather than http://. In the address bar, always look for the lock icon when accessing bank sites or entering your credit card details. Be doubly careful while shopping at a site that ships items to you from overseas locations. Visit sites directly rather than going through links received from unsolicited sources.
You should be very careful while handling your profiles on social media as well. Divulging seemingly innocent information on such platforms may also lead to identity theft.
Resist the temptation for improbable deals
Some unscrupulous sites lure customers with improbable deals that are simply fake. Once they can capture your credit card information, they can create havoc with your hard-earned money. Always remember the rule of thumb: if an offer seems too good to be true, there is every chance that it is.
Safe Use of email
In most cases, the automated email filter marks some emails as ‘spam’ and redirects them to the ‘spam’ folder. These emails usually include tempting offers, advertisements, information about cheap prescription drugs, etc. Make sure you scrutinize the contents of these emails before deciding to open attachments or click on the hyperlinks provided. Do not try to download email contents that are blocked by your email service provider.
You should never respond to spam emails, because your reply tells the spammer that your email address is active, and they will keep targeting it in the future.
Have your firewall checked
Remember to check whether your firewall is ‘connected’ or ‘on’ – whether you use PCs, tablets or other devices. Having a proper firewall is a major step towards keeping criminals at bay. Also, do not make your folders and files visible or accessible to other machines. Have media and file sharing completely disabled.
- Have a proper backup of your data
Having a proper backup helps to protect your critical documents in the event of a possible attack by ransomware (which may encrypt your sensitive data), a severe electrical outage or a computer crash. There are two major backup solutions: external hard drives and online storage.
External hard drives
If you want immediate access to large files, or you do not want to pay a hefty monthly fee, an external hard drive is the perfect option for you.
But, external hard drives are as vulnerable to theft or fire as your computer is. Hence, we would always recommend you to have a hard drive in addition to a cloud-based storage for the important data that you just cannot afford to lose.
Most prominent cloud-based storage services offer generous plans if you have a large amount of data to store. If you have important photos that you cannot afford to lose, cloud-based storage is an excellent option. After signing up for one of these services, you put your important photos and documents into a specified folder on the desktop or hard drive. This folder is then synced to the cloud storage allotted to you. From a tablet, PC or phone, you can then access your files from any corner of the world. You may also have your files synced between devices, so that you get the dual benefit of an external hard drive and cloud-based storage in conjunction.
Most of these services facilitate encryption of files during the process of transferring files from your PC to their designated servers, where the files will remain in encrypted form on the server. Some may even provide a program to encrypt files before the process of uploading starts as a measure of additional protection.
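For the external-drive half of this strategy, even a small script can automate dated copies. Here is a rough Python sketch (the folder path and drive location are placeholders you would change for your own setup):

import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"       # folder you want to protect (example)
DESTINATION = Path("E:/backups")         # external drive location (example)

def backup_documents() -> Path:
    # Copy the source folder to a dated folder on the external drive.
    target = DESTINATION / f"documents-{date.today().isoformat()}"
    shutil.copytree(SOURCE, target, dirs_exist_ok=True)
    return target

if __name__ == "__main__":
    print(f"Backup written to {backup_documents()}")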
|
<urn:uuid:5cfbbced-aef6-4a71-b719-2f7859f10602>
|
CC-MAIN-2022-40
|
https://www.askcybersecurity.com/setup-safe-computing-environment/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00624.warc.gz
|
en
| 0.941561 | 1,157 | 2.84375 | 3 |
Fiber optic cables are designed for long distance and high bandwidth (Gigabit speed) network communications. Bulk fiber optic cables carry communication signals using pulses of light. While relatively expensive, these cables are increasingly being used instead of traditional copper cables, because fiber offers more capacity and is less susceptible to electrical interference. So-called Fiber to the Home (FTTH) installations are becoming more common as a way to bring ultra high speed Internet service (100 Mbps and higher) to residences.
Copper cabling uses electricity to transmit signals from one end to another; bulk fiber optic cable uses light pulses to accomplish the same purpose. The fiber optic cable is made of a transparent glass core surrounded by a mirror-like covering called cladding. Light passes through the fiber optic cable, bouncing off the cladding until it reaches the other end of the fiber – this is called total internal reflection. As fiber optics are based entirely on beams of light, they are less susceptible to noise and interference than other data-transfer mediums such as copper wires or telephone lines. In today’s high speed networks, Graded Index Multimode fiber or Step Index Single mode fiber cable is used to improve light transmission over long distances. Multimode fiber optic cable has a larger core and is typically used in short runs within buildings. Single mode fiber optic cable has a smaller core and is used in long distance runs, typically outside between buildings.
While fiber optic cables have many advantages and are widely used in today’s communications, you should keep in mind that they are fragile. Fiber cable can be pulled with much greater force than copper wire if you pull it correctly. Just remember the following rules:
Do not pull on the fibers. The fiber optic cable manufacturers give you the perfect solution for pulling the cables: they install special strength members, usually aramid yarn (Kevlar) or a fiberglass rod, to pull on. Use it! Any other method may put stress on the fibers and harm them. Most cables cannot be pulled by the jacket. Do not pull on the jacket unless it is specifically approved by the cable manufacturer and you use an approved cable grip.
Do not exceed the maximum pulling load rating. On long runs, use proper lubricants and make sure they are compatible with the cable jacket. On really long runs, pull from the middle out to both ends. If possible, use an automated puller with tension control or at least a breakaway pulling eye.
Do not exceed the cable bend radius. Fiber is stronger than steel when you pull it straight, but it breaks easily when bent too tightly. Bends that are too tight will harm the fibers – maybe immediately, maybe not for a few years – but you will harm them, and the cable must be removed and thrown away!
Do not twist the cable. Putting a twist in the cable can stress the fibers too. Always roll the cable off the spool instead of spinning it off the spool end. This will put a twist in the cable for every turn on the spool! And always use a swivel pulling eye because pulling tension will cause twisting forces on the cable.
Check the length. Make sure the cable is long enough for the run. It’s not easy or cheap to splice fiber, and a splice needs special protection. Try to make the run in one pull, which is possible up to about 2-3 miles.
|
<urn:uuid:19ed8e95-4b22-4fd8-bf7e-a106e9737cbc>
|
CC-MAIN-2022-40
|
https://www.fiber-optical-networking.com/tag/large-core-fiber
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00624.warc.gz
|
en
| 0.91889 | 685 | 3.484375 | 3 |
The bigger the software footprint, the more bugs and vulnerabilities. Given this, it makes sense that a monolithic operating system like Linux would contain more vulnerabilities than a microkernel-based operating system like the QNX® Neutrino® Real-Time Operating System. A 2018 study by Simon Biggs, Damon Lee and Gernot Heiser analyzed the critical security bugs in Linux and concluded that “96% of critical Linux compromises would no longer be critical with a microkernel-based design” and that at least 29% of the critical vulnerabilities would be eliminated entirely.
What is a Monolithic Architecture?
A monolithic kernel runs all operating system components in kernel space; it includes all device drivers, file management, and networking and graphics stacks. Only user applications run in user space.
Although a monolithic design protects a kernel from errant user code, it doesn’t protect it from errant kernel code. A single programming error (or successful exploit) in a file system, protocol stack or driver can crash a monolithic operating system.
Most software code is buggy, and unfortunately highly complex kernel code is no exception. Biggs et al. determined that Linux likely had 13,000 bugs at the time of the study, based on its multi-million source lines of code (SLOC) and an optimistic estimate of bug density of 0.5/kSLOC.
Smaller Kernel, Fewer Vulnerabilities
Kernel code has special privileges, specifically, access to the entire system. Bugs in kernel space create vulnerabilities for malicious actors to exploit. A smaller kernel reduces the amount of privileged code, which improves system security, functional safety and reliability: fewer lines of potentially buggy code have privileged access.
The Biggs study presents a stark confirmation of the arguments in favor of a microkernel architecture over a monolithic kernel architecture. The relative sizes of a Linux kernel and the QNX microkernel are an indication of a dramatic difference in the amount of privileged code each contains. In fact, in January 2020, the Linux kernel had around 27.8 million lines of code in its Git repository; with about 100 thousand lines of code the QNX Neutrino RTOS is 99.7% smaller.
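A quick back-of-the-envelope check in Python of those figures, using the numbers quoted above (the bug-density estimate comes from the study and the line counts are approximate):

linux_sloc  = 27_800_000     # Linux kernel lines of code, January 2020
qnx_sloc    = 100_000        # approximate QNX Neutrino microkernel size
bug_density = 0.5 / 1000     # optimistic estimate: 0.5 bugs per kSLOC

print(f"Estimated Linux kernel bugs: {linux_sloc * bug_density:,.0f}")      # roughly 13,900
print(f"Estimated microkernel bugs:  {qnx_sloc * bug_density:,.0f}")        # roughly 50
print(f"Microkernel size relative to Linux: {qnx_sloc / linux_sloc:.2%}")   # ~0.36%, i.e. roughly 99.6-99.7% smaller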
Advantages of a Microkernel Architecture
A microkernel operating system embodies a fundamental innovation in the delivery of OS functionality: modularity. The tiny kernel is a side effect. With a microkernel OS, the microkernel works with a team of optional cooperating processes that provide higher-level OS functionality. Critically, unlike with a monolithic kernel, these processes run in user space; that is, outside privileged kernel space.
The microkernel architecture is based on the concept of least privilege. Only the kernel is granted access to the entire system. A microkernel OS like the QNX Neutrino RTOS encapsulates each application and OS service in its own isolated process space. The microkernel protects and allocates memory, and gives drivers and other OS services only the minimum privileges they need to perform their functions.
Fault containment through isolation and least privilege prevents errors and exploits from affecting other parts of the system. The only thing a component can crash is itself. Such crashes can be easily detected, and, since the kernel is unaffected, the faulty component can be restarted while the system is running with minimal impact on performance. In short, in the event of a kernel crash in a monolithic kernel system the only response is to reboot the system, while with a microkernel OS the system can usually repair itself to provide a much better mean time between failures (MTBF).
In summary, the security advantages inherent in a microkernel architecture include:
- Less code running in kernel space reduces the attack surface.
- Fault isolation and recovery support high availability: a failed system service can be dynamically restarted without a system reboot.
|
<urn:uuid:ef2952cb-064f-45b5-b7f4-14400548cac4>
|
CC-MAIN-2022-40
|
https://blogs.blackberry.com/en/2020/09/study-confirms-that-microkernel-is-inherently-more-secure
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00624.warc.gz
|
en
| 0.901572 | 791 | 3.265625 | 3 |
Sandboxes are isolated computing environments, set aside from other programs, in which a program or file can be executed without affecting the application in which it runs or other programs; if an error or security issue occurs, it will not spread to other areas of the computer or pose any threat to other programs. In computing, sandboxing has long been used to safely run malicious code or to test new programming code so software developers can analyze it.
However effective sandboxing may be, there are advanced, persistent threats which can evade straightforward detection. By using previously unseen malware, these attacks exploit vulnerabilities and come from brand-new or seemingly innocent hosting URLs and IPs. Their goal? To compromise their target system with advanced code techniques that attempt to circumvent security barriers.
Advanced evasion techniques by which threats can evade security barriers include, but are not limited to:
Logic bombs are code that remains dormant after installation until a specific trigger occurs. Logic bombs can be difficult to detect, since the logic conditions are unlikely to be met in the sandbox without heavy instrumentation.
Another advanced evasion technique is awareness of the sandbox environment itself. Advanced persistent threat code may contain routines that attempt to determine if it’s running in a virtual environment, indicating it might be in a sandbox, or may check for fingerprints of specific vendors’ sandbox environment. If the code detects that it’s in a sandbox, it won’t run its malicious execution path.
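To make the idea concrete, here is a harmless Python sketch of the kind of environment fingerprinting such code performs; running it inside your own analysis sandbox shows how 'virtual' the environment looks to a guest program (the MAC prefixes listed are common hypervisor assignments, and the checks are illustrative rather than exhaustive):

import os
import platform
import uuid

# MAC address prefixes (OUIs) commonly assigned by hypervisors such as VirtualBox and VMware.
VM_MAC_PREFIXES = ("08:00:27", "00:0c:29", "00:50:56", "00:05:69")

def sandbox_hints() -> dict:
    mac = "{:012x}".format(uuid.getnode())
    mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    return {
        "cpu_count": os.cpu_count(),      # very low counts are a common virtual-machine giveaway
        "platform": platform.platform(),
        "mac_address": mac,
        "vm_like_mac": mac.startswith(VM_MAC_PREFIXES),
    }

print(sandbox_hints())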
Rootkits and Bootkits
Advanced malware often contains a rootkit component that subverts the operating system with kernel-level code to take full control of the system. Bootkits go a step further and infect the system with malware during system boot-up—something that is typically not observed by a sandbox.
Once evasions are addressed, the value of a strong sandbox shines. The goal of sandboxing is to completely replicate the behavior of malicious code seeking entry to the organization. The reality is that malware creators are privy to all forms of security technology and will build disguises and use advanced evasion techniques in the hope of bypassing security mitigations and successfully delivering their malware.
If you are looking for a way to take care of your computer network, The IT pros at Gulf South Technology Solutions can help you execute a comprehensive IT risk assessment that will help keep your network safe.
|
<urn:uuid:3178f769-49eb-46ce-abf7-58112dcd4e3d>
|
CC-MAIN-2022-40
|
https://gulfsouthtech.com/malwarenetwork-security/sandboxes-and-your-network-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00624.warc.gz
|
en
| 0.916906 | 478 | 2.65625 | 3 |
What is security compliance?
Being security compliant means your IT protocols follow prevailing local and international industry standards, as well as adhere to any laws that apply in your locality. Examples include adhering to local privacy and security of personal information laws if, for example, you record your customers’ personal and/or financial details. There are also global standards, such as the ISO/IEC 27000 family, that relate to the security of information management systems and are considered best practice.
These standards are there to help organizations keep their information assets secure. Your business could be subject to fines or worse if you don’t act to adequately protect your data assets.
SMBs are at risk.
The real impact of a data security breach is economic, and felt most acutely by SMBs, which often don’t have the human and financial resources to deal with it. In fact, around 71 percent of security breaches target small businesses, and 60 percent of small businesses that experience a cyberattack end up shutting down. The evolving nature of cybercrime also makes IT security a challenge for smaller organizations to keep up with.
Never say ‘never’
While many large-scale companies have been victims of hackers, including Yahoo, Sony, and internet infrastructure company Cloudflare, when it comes to IT security, never assume your business is small enough to slip under the radar. Cybercriminals don’t discriminate.
Five-step security compliance checklist
Follow these five steps to ensure your security protocols are compliant:
In terms of prioritizing your resources, the trick is to strike a balance and focus on protecting your business against security issues that come with the most financial risk.
As a business owner, it’s your responsibility to identify threats to your organization and take the necessary steps to ensure you’re security compliant. And think of it this way: ultimately, preventing security breaches will cost less than fixing one.
|
<urn:uuid:892337ae-57db-4972-8df6-728aaf784b61>
|
CC-MAIN-2022-40
|
https://gulfsouthtech.com/uncategorized/5-step-security-compliance-for-small-and-medim-buisnesses/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00624.warc.gz
|
en
| 0.949581 | 395 | 2.5625 | 3 |
Chinese Researchers Say New Quantum Computer Has 1 Million Times the Power of Google’s
(InterestingEngineering) Physicists in China claim they’ve constructed two quantum computers with performance speeds that outrival competitors in the U.S., debuting a superconducting machine, in addition to an even speedier one that uses light photons to obtain unprecedented results, according to a recent study published in the peer-reviewed journals Physical Review Letters and Science Bulletin.
China has exaggerated the capabilities of its technology before, but such soft spins are usually tagged to defense tech, which means this new feat could be the real deal.
The supercomputer, called Jiuzhang 2, can calculate in a single millisecond a task that the fastest conventional computer in the world would take a mind-numbing 30 trillion years to do. The breakthrough was revealed during an interview with the research team, which was broadcast on China's state-owned CCTV on Tuesday, which could make the news suspect. But with two peer-reviewed papers, it's important to take this seriously. Pan Jianwei, lead researcher of the studies, said that Zuchongzhi 2, a 66-qubit programmable superconducting quantum computer, is an incredible 10 million times faster than Google's 55-qubit Sycamore, making China's new machine the fastest in the world, and the first to beat Google's in two years.
|
<urn:uuid:5a45f3af-885d-49ae-a27b-d54a0407676d>
|
CC-MAIN-2022-40
|
https://www.insidequantumtechnology.com/news-archive/chinese-researchers-say-new-quantum-computer-has-1-million-times-the-power-of-googles/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00624.warc.gz
|
en
| 0.944504 | 300 | 2.828125 | 3 |
A new worm, dubbed "Win32.Detnat.a," is on the loose and works in stealth mode with its eye on Windows machines.
MicroWorld Technologies issued a warning on this latest exploit today after increased sightings and concerns among its researchers that, if executed, the worm could spread like wildfire, according to Agnelo Fernandes, technical head for MicroWorld Technologies USA. "It's very hard for an antivirus program to detect it because it keeps changing every time it infects a new file," he says. The security firm first spotted this worm back in May.
As much chameleon as worm, Win32.Detnat.a affects executable files in Windows 98, ME, NT, 2000, XP, and Server 2003, and it hides itself by using a different mode of encryption each time it infects a file. It also keeps file sizes unchanged, so it's harder to detect, Fernandes says. It aims to infect shared network files and resources.
But it only spreads if a user executes an infected file, either through an email attachment or an infected Website, he says, so its risk is relatively low. But Fernandes warns that if this worm does get launched via an executable file, it instantly infects all executable files in a network.
It works like this: The worm copies the infected file to a Windows Temporary folder and then cleans up the file. The infected file then copies itself to the original file name to the Windows current folder where it was first executed. It downloads and executes files from two central Websites: http://www.cm9998.com and http://www.korearace.com.
"It's novel in how it tries keep itself hidden," says Jose Nazario, senior security researcher for Arbor Networks. "It's not changing passwords for a few bytes here or there" like other stealth worms, he says.
But some industry experts say Win32.Detnat.a is just another iteration of most viruses. The real threat here is this type of worm getting past the corporate firewall and into the internal network. "If a virus owns your computer, you're giving it access" to your network, says Tom Ptacek, a security researcher with Matasano Security. "If an email virus launches a second piece of malware to attack network printers, for example, which are totally vulnerable, your checks couldn't be printed anymore."
Ptacek says these viruses were a dime a dozen in the '90s when attackers could write them with the GUI-based Virus Creation Labs tool.
And Win32.Detnat.a does require a user to execute the infected file, typically sent via a bogus email, says Nazario, who concurs with Ptacek that the real danger lies in internal network infection.
The more frightening type of worm is one that doesn't require any user interaction, such as those targeted at browsers or email clients, Ptacek says. "This [new] virus isn't any different than one you can create with a GUI."
To prevent getting infected, you need an updated AV application as well as a firewall that blocks unwanted HTML requests, according to MicroWorld.
Kelly Jackson Higgins, Senior Editor, Dark Reading
|
<urn:uuid:0e30e62b-2cb9-426b-8528-6734268a2fff>
|
CC-MAIN-2022-40
|
https://www.darkreading.com/perimeter/new-windows-worm-on-the-loose
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00024.warc.gz
|
en
| 0.936671 | 659 | 2.59375 | 3 |
In the cloud computing era, application development moves far faster, and microservices is a big factor in this increased speed. Yet it is only with a clear plan for microservices best practices that companies attain the greatest efficiency boost.
Microservices are, simply put, large applications broken down into smaller, individual functions for more rapid revision and scalability. Microservices are used primarily in the cloud-based online world. A banking app, for instance, can be broken down into the various features: login, balance checking, and bill pay. These various features can then be revised at different times, instead of needing to rewrite the entire app all at once.
Microservices and containers are a natural combination. Microservices lend themselves well to a containerized environment, since all of the software components—code, runtime, system tools, system libraries, and settings—can be confined to a container. However, containers are not necessary to run microservices.
List of Microservices Best Practices
So what are some of the best practices to maximize your microservices architecture? Here they are:
• Team autonomy: Rather than a large team divvying up the work of a large app, assign small teams to work in parallel to focus on just their microservice and not be dependent on the work of others.
• Scalability: Smaller components require fewer resources and can be scaled to meet increasing demand for just that microservice. Plan accordingly.
• Revise your schedule: Because the overall scope for the team is much smaller, with focus on one feature or function, work can be completed much faster than in a monolithic environment.
• Automation: Because they work well with orchestration containers like Kubernetes and Docker, the deployment and updating of microservices can be easily automated and handled by the orchestrator. So a developer won’t always be needed.
• Flexibility: Microservices offer greater flexibility by allowing groups to work independent of each other, and therefore, they can innovate at their own pace rather than wait for other changes to take place. But make sure the teams are communicating with one another.
• Resilience: If a microservice crashes, it doesn’t take down all the other components like one bug would do to a monolithic app. Just spin up a new instance of the microservice and start debugging.
• Maintenance: Devote fewer resources here. It’s a lot easier for a team to maintain 10,000 lines of code than one million lines of code.
• Properly scoped functionality: In the process of breaking up a monolithic app into microservices, you must clearly define the scope and function of each service so as not to make it too big or too small.
• Presenting an API: APIs are a portion of a microservice – a login API for a login service, for example – so the service can be the wrapper for an API (a minimal example is sketched after this list).
• Traffic management: Microservices are distributed by nature, so traffic management is a must to avoid overloading and bottlenecks.
• Data offloading: The converse of traffic management – data needs to be balanced among the services to avoid server or network overload.
• Monitoring: You must be in a constant state of monitoring, because microservices are meant to be used as scale out apps to handle usage spikes. Monitoring helps determine if more or less resources are needed, spot errors in new code, and watch the status of your network.
• Development and deployment: Microservices work well in the continuous iteration/continuous delivery methodology of DevOps, much more so than giant applications. Also, your chances to break things increase with complexity. The small size of a microservice reduces the chance for something to go wrong, and the smaller code bases make it much easier to isolate and detect problems.
• Private Data Ownership: Each microservice can have its own database, and they can either communicate with each other or be isolated. This allows for greater security by isolating the most sensitive of data from less secure apps.
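As a minimal sketch of the 'service presenting an API' idea above, here is a toy login microservice in Python (Flask is just one convenient framework choice, and the route names and in-memory user store are invented for illustration):

from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real deployment this would live in the service's own private database.
USERS = {"alice": "s3cret"}   # example data only

@app.route("/login", methods=["POST"])
def login():
    body = request.get_json(silent=True) or {}
    user, password = body.get("username"), body.get("password")
    if USERS.get(user) == password:
        return jsonify({"status": "ok", "user": user})
    return jsonify({"status": "denied"}), 401

@app.route("/health")
def health():
    # An orchestrator such as Kubernetes can probe this endpoint and restart the
    # service automatically if it stops answering.
    return jsonify({"status": "up"})

if __name__ == "__main__":
    app.run(port=5000)

Because the service owns only this one function, it can be scaled, monitored, restarted and redeployed on its own — which is how the autonomy, resilience and automation points above play out in practice.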
Related Key Issues with Using Microservices
1) Ask yourself why you are doing it.
This seems to happen with every new tech paradigm. People rush to embrace it because everyone else is doing it, without asking why they even need it. If you have a modest Web-facing presence and your enterprise apps are primarily inward-facing, you don't need much from microservices and are better off with SOA.
If, however, you want faster response time to make changes, and to take advantage of the cloud, public or private, then microservices might be a better fit.
2) Understand what they are.
There are several reports out that delve deep into the differences between SOA and microservices. Know what a microservice actually is and the changes it will introduce to your network and your data.
3) Learn to distribute your code.
This is a big change that comes with microservices. Before, with a monolithic app, you had one process. Now you have many. That means adopting some form of distributed computing to distribute the load, which is hard to do right. You may want to containerize the services and other forms of load balancing, and if this is new to you, that’s part of the learning curve.
4) Insure coexistence and documentation.
The lure of microservices is that they can all be done independent of each other. That’s also a potential pitfall. Part of the testing should also insure that the services all work together and don’t crash each other, and make sure everything is well-documented. A small team is a familiar team but change is constant and new people need to be able to come up to speed quickly.
5) Adopt AI.
To err is human. And humans need a rest. Artificial Intelligence needs no time off. Utilizing AI to monitor and respond can bring much faster responses to problems that will inevitably crop up. Done right, AI can maximize efficiency of microservices in a way humans cannot.
No technology is bulletproof, microservices included. This technology has pitfalls, shortcomings, and other issues you must be aware of to avoid wasted time and resources.
• Watch the scale. Just because they are small doesn’t mean they can’t add up as you add more microservices. This is a distributed network so you must make sure to balance the load carefully and make sure all of the services you are adding can scale.
• New logging needed: Traditional logging is ineffective because microservices are stateless, distributed and independent, so each service would produce its own log and not know of problems in another service. A new form of coordination is needed.
• It’s all so new: Microservices platforms are still a work in progress. Shop around.
• Where’s the problem? Tracing performance problems across distributed tiers for a single transaction can be difficult with the nature of the app and your network.
• Infrastructure concerns: The structure of your network will play a much more significant role, since the work is distributed. This means making sure your network is well-balanced, and that your development teams work with operations to make sure one team doesn’t surprise the other.
• Keeping the languages straight: You can use a wide variety of languages in a microservices environment, so you must make sure you are all on the same page with coding languages and don’t end up in a situation where 20 services are written in eight different languages.
• Security is critical: Likewise, you must settle on a security model for all services because many services present more targets for hackers.
• Failover: A distributed network must have solid failover, so when something inevitably crashes, another portion of the network can pick up the slack.
|
<urn:uuid:e8be27f8-e19f-4474-b9b4-d0df6db12844>
|
CC-MAIN-2022-40
|
https://www.datamation.com/cloud/microservices-best-practices/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00024.warc.gz
|
en
| 0.936867 | 1,610 | 2.625 | 3 |
What is WAF –
WAF is an abbreviation for Web Application Firewall. A Web Application Firewall (WAF) is a network security firewall solution that protects web applications from HTTP and web application-based security vulnerabilities.
Need for WAF –
Even when networks deploy proxies, IPS/IDS devices and network firewalls to prevent attacks, web applications are still vulnerable to other, application-layer attacks.
Some of the most common types of attacks which are targeted at web servers (Web Applications) include –
- SQL injection attacks
- cross-site scripting (XSS) attacks
- DDoS attacks.
WAF devices are widely used to protect websites, E-commerce, mobile apps and other online applications. A WAF is deployed between application servers and network edge routers and firewalls.
A WAF filters, monitors, and blocks HTTP/HTTPS traffic to and from a web application to protect against attacks that attempt to compromise the system or its data.
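In spirit, that filtering works like the toy Python sketch below — though real WAFs rely on large, professionally maintained rule sets such as the OWASP Core Rule Set rather than a couple of hand-written patterns:

import re

# Toy patterns in the spirit of WAF signatures for SQL injection and XSS (illustrative only).
RULES = {
    "sql_injection": re.compile(r"('\s*(or|and)\s+\d+\s*=\s*\d+|union\s+select|--)", re.I),
    "xss": re.compile(r"(<script\b|javascript:|onerror\s*=)", re.I),
}

def inspect_request(query_string: str) -> list:
    # Return the names of the rules that the request parameters trip, if any.
    return [name for name, pattern in RULES.items() if pattern.search(query_string)]

print(inspect_request("id=1' OR 1=1 --"))              # ['sql_injection']
print(inspect_request("q=<script>alert(1)</script>"))  # ['xss']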
WAF solutions become even more important for financial customers in particular; they can also help your organization comply with PCI-DSS and HIPAA regulations.
WAF Appliances –
Some of the WAF appliances preferred across the globe are –
- Imperva SecureSphere
- Barracuda Web Application Firewall
- Citrix Netscaler Application Firewall
- Fortinet FortiWeb
- F5 BIG-IP Application Security Manager (ASM)
|
<urn:uuid:8acbcb10-b502-44bf-b12f-84baf27a4cc4>
|
CC-MAIN-2022-40
|
https://ipwithease.com/introduction-to-waf-web-application-firewall/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00024.warc.gz
|
en
| 0.909612 | 294 | 2.84375 | 3 |
Time to live (TTL) refers to the amount of time or “hops” that a packet is set to live inside a network before it is removed by a router. It is an 8-bit field in the Internet Protocol. The maximum TTL value is 255. TTL is mostly used in systems where endless loops are possible or updates must be forced in certain intervals.
It is a value in an Internet Protocol (IP) packet that tells a network router whether or not the packet has been in the network for too long and should be discarded. In IPv6, the TTL field has been renamed to hop limit.
TTL value is set initially by the Source system which is sending the packet. Its value can be anything between 1 and 255. Different operating systems set different defaults. Each router that receives the packet subtracts 1 from the count. If the count remains greater than 0, the router forwards the packet, otherwise it discards it and sends an Internet Control Message Protocol (ICMP) message (11 – Time Exceeded) back to the Source system, which may trigger a resend.
Related – What is ICMP?
TTL (Time To Live ): An Example
Below is an example where Host A wants to communicate with Host B through a ping packet. Host A sets a TTL of 255 in the ping and sends it towards its gateway, i.e. Router A. Router A, seeing the packet destined for a layer 3 hop towards Router B, decrements the TTL from 255 to 254 and sends it towards Router B. In the same way, Router B and Router C also decrement the TTL (Router B decrements the TTL in the packet from 254 to 253, and Router C decrements it from 253 to 252). On reaching Host B, the ping packet's TTL has been reduced to 252.
Network commands like ping and traceroute utilize TTL. When using the traceroute command, a stream of packets are sent to the destination using an ever increasing TTL, starting with a value of one. On receipt of a packet with a TTL of one, the first hop will decrement the TTL by one resulting in a value of zero. This will cause the router to discard the packet and send an ICMP Time Exceeded error message to the source.
Packets are then sent with a TTL of two and so on until the packets eventually make it to the destination host. The ICMP error messages and the source addresses of the hosts that sent them reveal which routers are used along the path to deliver packets to the destination. The traceroute tool then presents this information to the user in a logical way.
In IP multicast, TTL controls the scope or range in which a packet may be forwarded.
- 0 is restricted to the same host
- 1 is restricted to the same subnet
- 32 is restricted to the same site
- 64 is restricted to the same region
- 128 is restricted to the same continent
- 255 is unrestricted
TTL is also used in Content Delivery Network (CDN) caching and Domain Name System (DNS) caching. CDNs commonly use a TTL to determine how long cached content should be served from a CDN edge server before a new copy will be fetched from an origin server. By properly setting the amount of time between origin server pulls, a CDN is able to serve updated content without requests continuously propagating back to the origin. This accumulation allows a CDN to efficiently serve content closer to a user while reducing the bandwidth required from the origin.
In the context of a DNS record, TTL is a numerical value that determines how long a DNS cache server can serve a DNS record before reaching out to the authoritative DNS server and getting a new copy of the record.
Related – UNDERSTANDING TTL SECURITY IN BGP
|
<urn:uuid:f0f27666-01de-48e1-a211-783e6870934c>
|
CC-MAIN-2022-40
|
https://ipwithease.com/what-is-time-to-live-ttl-in-networking/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00024.warc.gz
|
en
| 0.906993 | 777 | 3.859375 | 4 |
An ACK-FIN flood is a DDoS attack designed to disrupt network activity by saturating bandwidth and resources on stateful devices in its path.
By continuously sending ACK-FIN packets towards a target, an attacker can bring stateful defenses down (in some cases into a fail-open mode). This flood can also be used as a smoke screen for more advanced attacks, which is true of other out-of-state floods as well.
Below an analysis of an ACK-FIN flood is shown. The following images depict a high rate of ACK-FIN packets being sent from a single source IP towards a single destination IP.
In Image 1 below, you can see the flood of ACK-FIN packets coming from a single source. Notice the rate at which the packets are sent.
“Image 1 – example of single ACK-FIN packet being sent to port 80”
In Image 2 you can see the victim responding with an RST packet. The reason this RST packet is received in response to the original ACK-FIN packet is because the TCP stack receiving the ACK-FIN packet never had a corresponding sequence of SYN – SYN+ACK +ACK (Otherwise known as the TCP handshake). Some environments may opt not to send a RST packet back to the source of the offending ACK-FIN packet. The ACK-FIN packet is known as an out of state packet.
“Image 2 – RST packet received because of “out of state”ACK-FIN packet sent”
As seen in Image 3, the capture analyzed is 9 seconds long, the average number of packets per second is 116.6, and the rate is around 50 Kbps. Real attack rates can be much higher.
“Image 3 – ACK-FIN Flood stats”
A typical ACK-FIN flood running against an unsuspecting host will look similar to the above analysis. Generally what is seen is a high rate of ACK-FIN packets (not preceded by a TCP handshake) and a slightly lesser rate of RST packets coming from the targeted server.
Analysis of an ACK-FIN flood in Wireshark – Filters
Filter ACK-FIN packets – “(tcp.flags.ack == 1) && (tcp.flags.fin == 1)”.
Goto Statistics -> Summary on the menu bar to understand the rate you are looking at.
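If you prefer to script the same analysis rather than use the Wireshark GUI, the sketch below shows one possible approach using the Scapy library (assumed to be installed); the capture file name is hypothetical.

```python
from scapy.all import rdpcap, TCP  # pip install scapy

ACK, FIN, RST = 0x10, 0x01, 0x04  # TCP flag bit values

packets = rdpcap("ack-fin-flood.pcap")  # hypothetical capture file

ack_fin = [p for p in packets if TCP in p
           and (int(p[TCP].flags) & (ACK | FIN)) == (ACK | FIN)]
rst = [p for p in packets if TCP in p and int(p[TCP].flags) & RST]

duration = float(packets[-1].time - packets[0].time) if len(packets) > 1 else 0.0
rate = len(ack_fin) / duration if duration else 0.0

print(f"ACK-FIN packets: {len(ack_fin)}, RST replies: {len(rst)}")
print(f"Average ACK-FIN rate: {rate:.1f} packets/second")
```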
Download Example PCAP of ACK-FIN Flood
*Note: IPs have been randomized to ensure privacy.
|
<urn:uuid:bedc13f0-e0f1-47e6-a088-3389e3f38bac>
|
CC-MAIN-2022-40
|
https://kb.mazebolt.com/knowledgebase/ack-fin-flood/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00024.warc.gz
|
en
| 0.922084 | 529 | 2.59375 | 3 |
HBase is a distributed, column-oriented database that supports structured data storage for large tables. The data model used in HBase is very similar to Google’s Bigtable. HBase also integrates with the MapReduce framework.
HBase is essentially a structured key-value store where you can set variable-length columns to an arbitrary number of rows. You can think of it as a key/value store with the ability to run complex sorted queries against subsets of data stored in HBase.
HBase is built on top of HDFS and uses Zookeeper for coordinating processes across a distributed system.
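As a concrete illustration of this key-value model, the hypothetical sketch below uses the third-party happybase Python client to write and read a row; the host, table and column names are made up, and a running HBase Thrift gateway is assumed.

```python
import happybase  # pip install happybase; connects to HBase through the Thrift gateway

connection = happybase.Connection("hbase-thrift.example.com")  # hypothetical host
table = connection.table("patients")                           # hypothetical table

# A row key plus arbitrary column-family:qualifier pairs, all stored as bytes.
table.put(b"patient-001", {
    b"info:name": b"Jane Doe",
    b"info:diagnosis": b"hypertension",
})

row = table.row(b"patient-001")
print(row[b"info:name"])  # b'Jane Doe'

# Random reads over a subset of rows, here via a key-prefix scan.
for key, data in table.scan(row_prefix=b"patient-"):
    print(key, data)
```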
- HBase is highly scalable. You can have billions of rows in a single HBase table. NoSQL databases are great for ad-hoc queries since it allows random reads. This means that you could select a specific record for a patient or select a specific employee to find out their salary information.
- You can set variable-length columns to an arbitrary number of rows. This is very useful if you want to identify individual records using a unique ID and have a lot of information about that individual, such as comments or likes, etc.
- You can run complex queries against subsets of data stored in HBase.
- HBase is more predictable than other NoSQL databases since it follows the MapReduce framework.
- HBase is cost-effective since it is built on top of HDFS and uses ZooKeeper.
- HBase is also great for data warehousing since it allows for random reads and writes. This means that you could update the salary column for a specific employee and add a new column to the table.
|
<urn:uuid:f7900c01-cfdd-4a32-9f09-7a5ecb6c5246>
|
CC-MAIN-2022-40
|
https://data443.com/data_security/apache-hbase/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00024.warc.gz
|
en
| 0.926579 | 344 | 2.6875 | 3 |
Single Sign-on (SSO) is an authentication method that lets users log in to multiple applications with a single set of login credentials. It also enhances security, because users are relieved of much of the password burden: let's be honest, users dislike complex passwords, and SSO makes that pain more bearable by reducing the number of complicated passwords they have to remember.
There are two major challenges that these businesses face:
- How to manage the permission and roles of diverse users and applications.
- How to address the many challenging and ever-changing compliance and security risks that come with the digital expansion of access.
These challenges are a constant worry for those who manage the informatics systems and data or deal with compliance in any company. There are four important factors that need to be considered when a strategy for access management and identity is being developed by a company’s IT team and security.
The Expansion of Third-party Access
More entities are gaining access to the apps, data, and networks of a company. With different partners working in different locations, it can make things even more complicated when it comes to security and ensures only the right people are gaining access.
A study by Aberdeen showed that about a third of the enterprises studied allowed at least 25 third-party organizations to have access, while a striking 10% had upwards of 200 external partners. In such cases, Single Sign-on (SSO) is a very useful solution for protecting company assets.
The Balancing of Security and Usability
When handling the growing userbase of a manufacturer, security and cost are of utmost importance. If an enterprise is not prepared for the expansion, the risk of security problems is higher. The theft of this type of data can be devastating to a company.
While making sure the system is accessible by the people who need to use it is important, the security is just as if not more important.
The Frequency and Cost of Cyber Attacks
Manufacturers deal with a lot of sensitive information and are the victims of more phishing attacks than any other industry in the United States. One data breach costs around an average of $450k but can cost considerably more than that. A little bit of preparation can save a lot of money and trust.
Traditional System Costs
Operating a traditional system can be expensive, around $3.5 million for manufacturers and in some cases tens of millions. Using a single platform to manage access can save a lot of money and time in the end.
Combined with multifactor authentication, Single Sign-on (SSO) might be the solution a company is looking for to avoid credential-based attacks.
It streamlines the whole process and supports all the organizations that access it, no matter where in the cloud they happen to be.
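Under the hood, most modern SSO flows (SAML or OpenID Connect) end with the application validating a signed token or assertion issued by the identity provider instead of checking a password itself. The sketch below is a simplified, hypothetical example using the PyJWT library to validate an OIDC-style ID token; the key, issuer and audience values are placeholders.

```python
import jwt  # pip install PyJWT
from jwt import InvalidTokenError

IDP_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"  # placeholder
EXPECTED_ISSUER = "https://idp.example.com"  # hypothetical identity provider
EXPECTED_AUDIENCE = "my-application"

def validate_sso_token(token: str):
    """Return the user's claims if the token is valid, otherwise None."""
    try:
        claims = jwt.decode(
            token,
            IDP_PUBLIC_KEY,
            algorithms=["RS256"],        # never accept the 'none' algorithm
            audience=EXPECTED_AUDIENCE,
            issuer=EXPECTED_ISSUER,
        )
        return claims                    # e.g. {'sub': 'jdoe', 'email': ...}
    except InvalidTokenError:
        return None                      # expired, bad signature, or wrong audience/issuer
```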
Reduce the headache of assisting users with password recovery using Single Sign-on (SSO)
Imagine an organization running ten distinct services. A Single Sign-on (SSO) solution can greatly reduce the amount of helpdesk manpower required, because users only need to recover a single account.
Single Sign-on (SSO) Helps Reduce the Number of Passwords Users Have to Remember
Users are urged to use distinct passwords for different sites, and dealing with that many passwords can be tricky.
Obviously, this isn't an issue if the user relies on a password manager, but let's be realistic: how many users can you expect to do that? A Single Sign-on (SSO) solution can greatly reduce the number of passwords users need to remember, which may encourage them to choose a much stronger password.
|
<urn:uuid:96cbdeea-aa1c-4db1-b498-8239255b10bb>
|
CC-MAIN-2022-40
|
https://gbhackers.com/secure-single-signon-sso/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00224.warc.gz
|
en
| 0.935357 | 798 | 2.703125 | 3 |
Analyze your Project's Stakeholders with the help of the Salience Model
Managing the stakeholders’ expectations is an important factor in the success of a project. Due to the crucial impact stakeholders can have on projects, Stakeholder Management is an important component of Project Management.
There can be many stakeholders in a project. Small projects have fewer stakeholders, which makes them easier to manage; large projects have many stakeholders, which makes them harder to manage.
In a real-world scenario, you will not be able to treat every stakeholder equally. Every stakeholder has different requirements and expectations. We need to manage these requirements and expectations. So, first, you need to identify and classify your project stakeholders.
Classifying stakeholders is a critical process. Here you separate stakeholders according to their power, interest, urgency, and so on. After classification, you will develop your stakeholder management strategy.
The PMBOK Guide describes four models to classify stakeholders:
1. Power/interest grid
2. Power/influence grid
3. Influence/impact grid
4. Salience model
The first three models use two attributes and are similar. The salience model uses three attributes. We will study the salience model to classify project stakeholders.
What does stakeholder salience mean?
Stakeholder salience is the degree to which the stakeholders are visible, vocal, and important to a project. It is an important aspect of stakeholder management. It is common for highly vocal stakeholders to try and define requirements and make decisions beyond their expertise and authority. This often leads to issues.
Let us see who stakeholders can be and get some understanding in this area.
A stakeholder can be an individual, a group, or an organization. A stakeholder may affect, be affected by, or perceive itself to be affected by a decision, activity, or outcome of a project. Stakeholders may have interests that may be positively or negatively affected by the project.
They may or may not be actively involved in the project. Stakeholders may also exert influence over the project, its deliverables, and the project team to achieve a set of outcomes that satisfy strategic business objectives or other needs.
Stakeholders include all members of the project team as well as all interested entities. They can be external or internal to the organization. The project team identifies internal and external, positive and negative, and performing and advising stakeholders. This helps them determine the project requirements and the expectations of all parties involved. The project manager should manage the influences of these various stakeholders and consider the project requirements to ensure a successful outcome.
Stakeholders have varying levels of responsibility and authority when participating in a project. This level can change over the course of the project’s life cycle. Their involvement may range from occasional contributions in surveys and focus groups to full project sponsorship. This includes providing financial, political, or other support. Some stakeholders may also detract from the success of the project, either passively or actively. These stakeholders require the project manager’s attention throughout the project’s life cycle.
Stakeholder identification is a continuous process throughout the entire project life cycle. Identifying stakeholders, understanding their relative degree of influence on a project, and balancing their demands, needs, and expectations are critical to the success of the project. Failure to do so can lead to delays, cost increases, unexpected issues, and other negative consequences, including project cancellation.
The following are some examples of project stakeholders:
Sponsor: A sponsor is a person or group who provides resources. He supports the project and is accountable for enabling success. The sponsor may be external or internal to the project manager’s organization.
Customers and users: Customers are the persons or organizations who will approve and manage the project’s product, service, or result. Users are the persons or organizations that will use the project’s product, service, or result.
Sellers: Sellers are also called vendors, suppliers, or contractors. They are external companies that enter into a contractual agreement. They provide components or services necessary for the project.
Business partners: Business partners are external organizations. They have a special relationship with the enterprise. It may be through a certification process. Business partners provide specialized expertise or fill a specified role such as installation, customization, training, or support.
Organizational groups: Organizational groups are internal stakeholders who are affected by the activities of the project team.
Functional managers: Functional managers are key individuals who play a management role within an administrative or functional area of the business, such as human resources, finance, accounting, or procurement.
Other stakeholders: Additional stakeholders, such as procurement entities, financial institutions, government regulators, subject matter experts, consultants, and others, may have a financial interest in the project, contribute inputs to the project, or have an interest in the outcome of the project.
Salience Model: Salience means “the quality of being particularly noticeable, important or prominent.” So, stakeholder salience means the importance/prominence of a stakeholder.
Stakeholder salience can be defined as the “degree to which managers give priority to competing stakeholders’ claims in their decision-making process.”
The stakeholder salience model was proposed by Ronald K. Mitchell, Bradley R. Agle, and Donna J. Wood in 1997.
Here, a stakeholder has three attributes:
Power: Power is the influence or authority of the stakeholder on your project or its objectives. Focus on stakeholders with high power. These stakeholders are fewer in number.
Legitimacy: Legitimacy is how genuinely involved a stakeholder is with your project. You should not spend your time on a stakeholder who doesn’t have a legitimate interest. Pay attention to stakeholders with legitimate claims.
Urgency: Urgency is the degree to which stakeholder requirements call for immediate attention. Urgency depends on two factors: time-sensitivity and criticality. You need to find out whether a requirement is time-sensitive or whether it simply needs to be fulfilled, whenever that happens.
Identify your project stakeholders and assign them attributes. After that, prioritize stakeholders according to their attributes. Based on this ranking you will develop the stakeholder’s management strategy. This will save time and help you win stakeholders’ support.
Stakeholder salience is not static; it is dynamic and can change during the project life cycle. Ensure you keep updating the stakeholder register to reflect the changes.
Stakeholders in the Salience Model
A stakeholder salience model diagram is a Venn diagram comprising three attributes, represented by three circles: power, legitimacy, and urgency. The intersections of the circles show stakeholders with multiple attributes.
Based on these attributes, you can classify stakeholders into seven groups.
To develop your strategy, you divide these groups into three categories:
Dormant stakeholders: These stakeholders have high power but low legitimacy and low urgency. Because of their high power, they can impact your project, so they need to be managed carefully. For example, a stakeholder from top management who does not take part in meetings and shows no interest in your project; you still watch this stakeholder, because they have power and you never know when they will change their mind.
Discretionary stakeholders: These stakeholders have high legitimacy but low urgency and low power. Although they have low power and low urgency, you fulfill their requirements because of their legitimacy. NGOs or charitable organizations are examples of discretionary stakeholders: they have neither power nor urgency, but they are legitimate stakeholders.
Demanding stakeholders: These stakeholders have high urgency but low power and low legitimacy. They are usually vocal and can influence other stakeholders if their requirements are not met. These stakeholders want attention and need to be managed carefully. For example, if your project is in a public place, residents from the neighborhood who show interest in your project and ask for information are demanding stakeholders.
Expectant stakeholders: These stakeholders have two of the three attributes; they have expectations of the project and are active.
The expectant types are dominant, dangerous, and dependent.
Dominant stakeholders: These stakeholders have high legitimacy and high power but low urgency. As they have a legitimate interest in your project, you manage them closely; since their urgency is low, they rank below the core group. For example, if you are constructing a building, the local authorities are dominant stakeholders: they have no urgent issues with your project, but you manage them closely because they have both power and legitimacy.
Dangerous stakeholders: These stakeholders have high power and high urgency but low legitimacy, and this combination makes them dangerous. They can be violent and can create trouble for your project, so you manage them cautiously. For example, if you are working in a remote area of a developing country, a group of local terrorists can act as dangerous stakeholders. The security of your team members is paramount; you must identify these stakeholders and mitigate the threats they pose.
Dependent stakeholders: These stakeholders have high legitimacy and high urgency but low power. Since they have little power, you will not pay as much attention to them. For example, if you are doing construction work in a public place, local residents can be dependent stakeholders. You still keep a watch on them because of their legitimacy and high urgency: they may form a group or align with powerful stakeholders, which can create trouble for you if their requirements are not met.
Definitive stakeholders: These stakeholders have all three attributes and require the most attention; you will manage them closely. This is the "core" group referred to below.
Non-stakeholders: These people have none of the attributes. They are not stakeholders of your project, so you will not manage them.
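The classification described above can be summarized in a few lines of hypothetical code: it simply maps the presence or absence of the three attributes to the seven stakeholder types.

```python
def classify_stakeholder(power: bool, legitimacy: bool, urgency: bool) -> str:
    """Map the three salience attributes to one of the seven stakeholder types."""
    types = {
        (True,  False, False): "dormant (latent)",
        (False, True,  False): "discretionary (latent)",
        (False, False, True):  "demanding (latent)",
        (True,  True,  False): "dominant (expectant)",
        (True,  False, True):  "dangerous (expectant)",
        (False, True,  True):  "dependent (expectant)",
        (True,  True,  True):  "definitive (core)",
    }
    return types.get((power, legitimacy, urgency), "non-stakeholder")

print(classify_stakeholder(power=True, legitimacy=True, urgency=True))   # definitive (core)
print(classify_stakeholder(power=False, legitimacy=True, urgency=True))  # dependent (expectant)
```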
Strategy to Manage Stakeholders
You will manage your stakeholders as follows:
You will give the highest priority to the core group because this group has all the attributes.
The next highest priority should be given to dominant, dangerous, and dependent stakeholders because they have a mix of any two attributes.
The lowest priority group consists of the dormant, discretionary, and demanding stakeholders because they each have only one attribute. You will give these stakeholders little importance but keep observing them, because you never know when their salience will change.
Changes in Stakeholders’ Attributes
The project environment is dynamic, so you will continuously get new stakeholders and lose old ones. Stakeholders’ attributes can change as the project progresses. A powerless stakeholder may become powerful, and an illegitimate stakeholder may become a legitimate one. You should update your stakeholder management strategy to reflect the changes in stakeholders’ attributes.
Benefits and Challenges
Let us take a look at the benefits and drawbacks that come with using the Salience Model.
Benefits of the Salience Model
The benefits of the salience model are:
- It helps identify the stakeholders' interests
- It surfaces potential risks and misunderstandings
- It helps build mechanisms to positively influence other stakeholders
- It helps control negative stakeholders and their adverse effects on the project
- It provides you with better insight into your stakeholders
- It helps you save resources, time, and effort
- It helps complete projects with minimal obstruction
Challenges of the Salience Model
The salience model has the following limitations:
- The model requires more time and effort
- It takes a lot of resources and time to monitor the three attributes continuously
- Opinion bias can influence its effectiveness, so the procedure is subjective
- The model assumes each attribute is simply present or absent, although in reality attributes can vary in degree
The salience model helps you manage your stakeholders effectively. While it is more time-consuming than the other models, it provides better analysis and a deeper understanding of your stakeholders. This model lets you focus your energy on important stakeholders and keeps you from wasting time on less important ones.
|
<urn:uuid:9a9cfc97-bf78-468e-a912-142bd172e6ba>
|
CC-MAIN-2022-40
|
https://www.greycampus.com/blog/project-management/analyze-your-project-s-stakeholders-with-the-help-of-the-salience-model
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00224.warc.gz
|
en
| 0.939345 | 2,587 | 2.703125 | 3 |
Immune reactions caused by vaccination can help protect the organism or, in some cases, aggravate the condition. This is especially important now, when multiple vaccines against COVID-19 are being developed, and leading immunologists are analysing the types of immune response to predict which kind of vaccine would work best.
The COVID-19 pandemic is still ongoing, and it is a major challenge for healthcare professionals worldwide. Currently, there are several strategies for preventing the spread of the disease caused by the SARS-CoV-2 virus, including confinement or quarantine measures, social distancing, use of face masks, and good hygiene, with frequent hand washing and application of antiseptics.
|
<urn:uuid:dfcf2303-6d34-40ed-9427-4e66ce97e8c7>
|
CC-MAIN-2022-40
|
https://biopharmacurated.com/which-immune-response-could-cause-a-vaccine-against-covid-19/?doing_wp_cron=1664209952.8020200729370117187500
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00224.warc.gz
|
en
| 0.94654 | 136 | 3.65625 | 4 |
“You can never protect yourself 100 per cent. What you do is protect yourself as much as possible and mitigate risk to an acceptable degree. You can never remove all risk.” So said Kevin Mitnick, arguably the world’s most famous hacker.
Indeed, the ever-presence of risk makes performing IT risk assessments critical for every business. An IT risk assessment is the process by which a company identifies its valuable data assets, establishes the business impact of having these data assets compromised, determines the threats that can likely cause a compromise, and analyses the vulnerabilities that an attack vector can exploit. Here’s a step-by-step outline of how to perform an effective IT risk assessment.
Identify all valuable data assets. Companies need to identify which data assets are valuable by first understanding the nature of their business. Many companies would consider things such as client contact information, product design files, trade secrets and roadmap documents their most important assets. Regardless of the type of data companies identify as critical, however, it’s necessary for them to understand how all of this critical data flows in their networks and identify which computers and servers are used to store this data. For smaller companies, this information is usually available with the top executives. For larger companies, this information may be available with each department’s head.
Estimate business impact due to loss. Risk and impact assessments have to go hand in hand. For each data asset, the corresponding negative financial impact of a compromise has to be estimated. Apart from direct costs, the negative impact can also include intangible costs such as reputational damage and legal ramifications.
Determine threats to the business. A threat is anything that has the potential to cause harm to the valuable data assets of a business. The threats companies face include natural disasters, power failure, system failure, accidental insider actions, malicious insider actions and malicious outsider actions.
Analyse vulnerabilities. A vulnerability is a weakness or gap in a company’s network, systems, applications, or even processes which can be exploited. Vulnerabilities can be physical in nature, they can involve weak system configurations, or they can result from awareness issues (such as untrained staff). There are several scanning tools available for performing a thorough systems analysis. Penetration testing or ethical hacking techniques could also be used to delve deeper and find vulnerabilities that regular scanning might miss.
Establish a risk management framework. Risk is a business construct, but it can be represented by the following formula: Risk = Threat x Vulnerability x Business impact. To reduce risk, company IT teams need to minimise the threats they’re exposed to, the vulnerabilities that exist in their environments, or a combination of both. From the business side of things, management may also decide to evaluate the business impact of each data asset and take measures to reduce it. A value of high, medium or low should be assigned for each of the variables in the formula above to calculate the risk. Using this process, a company can prioritise which data asset risks it needs to address. After this is done, a company should come up with solutions or redressal for each identified risk, and the associated cost for each solution.
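As a minimal, hypothetical illustration of that formula, the sketch below assigns each variable a numeric weight and sorts a small asset register by the resulting score; real frameworks are more nuanced, but the prioritization logic is the same.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(threat: str, vulnerability: str, impact: str) -> int:
    """Risk = Threat x Vulnerability x Business impact."""
    return LEVELS[threat] * LEVELS[vulnerability] * LEVELS[impact]

# Hypothetical register of data assets: (name, threat, vulnerability, impact).
assets = [
    ("client contact data",   "high",   "medium", "high"),
    ("product design files",  "medium", "high",   "high"),
    ("public marketing site", "high",   "low",    "low"),
]

for name, t, v, i in sorted(assets, key=lambda a: risk_score(*a[1:]), reverse=True):
    print(f"{name}: risk score {risk_score(t, v, i)}")
```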
Develop a risk appetite. Companies should now gauge themselves on what level of risk they’re comfortable taking. Do they want to address all the risks or do they only want to address risks identified as high? The answer to this question will vary from company to company.
Start mitigating risks. Finally, companies should invest in the right solutions and start mitigating the risks of data loss.
Making a good risk assessment better
It’s hard to identify what exactly has been stolen after a data breach. The affected company has to go through various data logs and reports to find out who accessed what, when, where and why. To put together a complete picture, the company needs to look at a host of reports from an effective security solution, and put its powers of deduction to use.
Get advice, service and products that fit your unique needs. KDI is an expert partner for complete IT Services and Networking Support based out of the Greater Vancouver area. We are your one-stop IT solution, uniquely combining aspects of information technology, software development, and accounting expertise to make your work life easier.
|
<urn:uuid:7d066bbc-3c00-4e25-929f-c904ba75e4dc>
|
CC-MAIN-2022-40
|
https://www.kdi.ca/how-to-perform-an-effective-it-risk-assessment/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00224.warc.gz
|
en
| 0.946607 | 882 | 2.65625 | 3 |
What is a Metaverse?
By Betsy Burton
Even though the most talked about metaverses are limited in availability (NVIDIA Omniverse) or unavailable (e.g., Meta Metaverse and Microsoft Mesh Teams), they represent an emerging technology and business model that is important to understand.
Digital Immersive Environment
A digital immersive environment is a software application or service that supports a collection of virtual shared spaces, specifically designed to deliver an immersive world with which users primarily interact through an avatar.
A digital immersive environment is not new, they have existed for years in the form of Second Life, as well as a number of major gaming services, including Minecraft, Grand Theft Auto, and Roblox.
What Makes a Digital Immersive Environment Different?
What makes a digital immersive environment distinct and differentiated from other similar software applications and services is that it is specifically designed to deliver an immersive world that users primarily interact with through an avatar.
A digital immersive environment where avatars interact could be a digital twin of a physical world place (e.g., a manufacturing floor, a museum, a park, etc.) or it could be a fantastical fictitious place.
Five Metaverse Types Within the Digital Immersive Environment
A metaverse is a type of digital immersive environment.
Aragon Research has identified five different types of digital immersive environments based on their degree of openness versus control of the membership, governance and look-and-feel.
Interestingly, we are finding that the types of metaverse mirror, in many cases, the types of cloud environments that have emerged, including a public, private, enterprise, community and hybrid metaverse.
The reality is that metaverses are in their early days and may not apply to your specific business today. However, they are being discussed and hyped in the market, which means business and technology leaders must not ignore them or their strengths and issues.
Watch this space for new research from Aragon Research on metaverses and actionable advice about if, when, and how to use them.
To learn more about Metaverse register for our upcoming event, [Transform Tour 2022] Metaverse: Entertainment vs Enterprise.
|
<urn:uuid:d25189c0-00e7-4f77-a41d-03a1db55409d>
|
CC-MAIN-2022-40
|
https://aragonresearch.com/what-is-a-metaverse/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00224.warc.gz
|
en
| 0.933252 | 443 | 2.796875 | 3 |
To give you the simplest answer, SIEM (Security Information and Event Management) is defined as a complex set of technologies brought together to provide a holistic view into a technical infrastructure. Depending on who you talk to, there are about five different popular opinions on what the letters stand for.
Looking at the 10 layered security stack, with the notion of managing all of it, is enough to make you lose your hair! However, it’s not a train – there is light at the end of the tunnel. That light has come to be known as the SIEM.
- SIEM Features
- SIEM Usage
- SIEM Technology
- Security information and event management (SIEM) systems can identify incidents or potential incidents, prioritize them according to potential impact, track incidents until they are closed, and provide substantial trend analysis over time. One classic example is correlating repeated failed logins from a single source, as sketched below.
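The hypothetical rule below gives a flavor of that correlation: it flags a potential brute-force incident when a single source IP produces several failed logins inside a short window. Real SIEM platforms express rules like this in their own query or correlation languages.

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5                   # failed logins that trigger an alert
WINDOW = timedelta(minutes=10)  # sliding time window

def detect_bruteforce(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples from parsed logs."""
    failures = defaultdict(list)
    incidents = []
    for ts, ip, outcome in sorted(events):
        if outcome != "login_failure":
            continue
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW] + [ts]
        if len(failures[ip]) >= THRESHOLD:
            incidents.append((ip, ts, len(failures[ip])))
    return incidents

# Hypothetical parsed log events: six failures from one address in six minutes.
start = datetime(2022, 1, 1, 12, 0)
events = [(start + timedelta(minutes=i), "10.0.0.7", "login_failure") for i in range(6)]
print(detect_bruteforce(events))  # alerts once the 5th failure lands inside the window
```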
|
<urn:uuid:c71f6d48-b87a-4338-96f4-c7faf0942ae1>
|
CC-MAIN-2022-40
|
https://wiki.glitchdata.com/index.php/SIEM
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00224.warc.gz
|
en
| 0.948027 | 177 | 2.671875 | 3 |
Introduction to Application Security Testing
Application security testing is an integral part of the SDLC (software development lifecycle). It is used to discover the weaknesses, risks, and threats in software applications and to detect hidden vulnerabilities before they can be exploited. The main objective of application security testing is to find all potential vulnerabilities of an application and fix them.
Security testing needs to happen in the initial stages of the software development lifecycle, because gaps found after the software is already running cost far more to fix. Testing can be performed manually or in an automated manner; manual testing uses methods such as white box testing, black box testing, and so on.
Today we look in more detail at three application security testing techniques – SAST, DAST and IAST – covering their features, functionality, and use cases, and the differences between them.
DAST (Dynamic Application Security Testing)
DAST, or Dynamic Application Security Testing, is a class of security tools used to scan a running web application and discover security vulnerabilities. It detects vulnerabilities in a web application that is deployed or already in production, and the tools alert the security team so it can take remediation action. DAST can be integrated early into the software development lifecycle, and its focus is to help organizations reduce and protect against the risks caused by application vulnerabilities.
DAST uses a black-box approach: it assesses the application from the outside and does not have access to the application source code. DAST is typically used during the testing and QA phases of the SDLC.
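To illustrate the outside-in approach, here is a deliberately simple, hypothetical probe of the kind a DAST scanner automates: it sends a harmless marker payload to one parameter and checks whether it is reflected unencoded in the response (a possible reflected XSS). The URL and parameter are placeholders, and probes like this should only be run against applications you are authorized to test.

```python
import requests  # pip install requests

PROBE = "<script>dast-probe-1337</script>"  # benign marker payload

def check_reflected_xss(url: str, param: str) -> bool:
    """Return True if the probe comes back unencoded in the response body."""
    response = requests.get(url, params={param: PROBE}, timeout=10)
    return PROBE in response.text

# Hypothetical staging target; real DAST tools crawl the application and test every input.
print(check_reflected_xss("https://staging.example.com/search", "q"))
```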
Pros and Cons of DAST
Pros:
- Independent of the underlying platform and technology
- Good support for manual penetration testing
Cons:
- Insufficient coverage
- No information on the location of issues in the code base
- Output is a static report
- Can be slow
SAST (Static Application Security Testing)
SAST, or Static Application Security Testing, tests the source code well before the application is live and deployed in a production environment. It helps detect vulnerabilities in applications before anyone else discovers them. SAST analyses the source code to detect any traces of vulnerabilities that could provide a backdoor for attackers, and it usually scans the code before compilation.
It is a white-box testing technique: the source code is visible to the tester, who examines the inner structure of the software before it integrates with external systems.
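By contrast, a static analyzer never runs the program; it inspects the source. The toy example below uses Python's built-in ast module to flag calls to eval and exec, a greatly simplified version of the rule matching real SAST tools perform across many languages.

```python
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Return (line, function) pairs for risky calls, found without running the code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(sample))  # [(2, 'eval')]
```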
Pros and Cons of SAST
Pros:
- Multiple language support
- Easy to understand
Cons:
- Poor accuracy (around 35% false positives reported)
- No visibility into code execution flow
- Output is a static report
- Requires customization and tuning
- Can be slow
- Not suitable for systems already in production
IAST (Interactive Application Security Testing)
IAST, or Interactive Application Security Testing, is designed to test both web and mobile applications and to detect and report issues while the application is running. Before using IAST, teams should have a good understanding of DAST and SAST techniques. IAST uses a grey-box approach: testing occurs in real time while the application runs in a staging environment, and it also checks source code at the post-build stage.
IAST agents are usually deployed on the application servers, typically during functional testing performed by QA testers, and the agent reports the exact line number of an issue in the source code. The agents observe the paths that data follows inside the application, whether or not those paths turn out to be vulnerable.
Pros and Cons of IAST
Pros:
- Accurate: can detect 100% of the OWASP benchmark with no false positives
- Flexible to use
- No need to scan or attack the application
- Results are available in real time
- Continuous detection and DevOps friendly
- Truly plug and play, with no configuration or tuning requirements
Cons:
- Requires specific language support
Comparison Table: DAST vs SAST vs IAST
The table below summarizes the differences between the three:
| Parameter | DAST | SAST | IAST |
| --- | --- | --- | --- |
| Testing technique | Black-box testing: no access to the internal framework, source code or design | White-box testing: full access to the application, source code and design | Grey-box testing: identifies vulnerabilities in real time |
| Testing methodology | The application is tested from the outside in; often called the hacker approach | The application is tested from the inside out; also known as the developer approach | Improves the accuracy of SAST findings by adding runtime analysis |
| Deployment requirements | Requires deployment on an application server; does not need access to source code | No deployment needed; analyses source code directly without executing the application | Requires deployment of an IAST agent on the application servers |
| Deployment scenario in SDLC | Used only after the code is compiled | Analyses source code and is used very early in the SDLC | Performed during the test and QA stages of the SDLC |
| Cost | More expensive, because vulnerabilities are detected at a later stage of the SDLC, sometimes at the very end | Less expensive, because vulnerabilities are detected very early in the SDLC and remediated before the code is in motion | Highly priced |
| Scanning technique | Scans running applications using dynamic analysis to detect runtime vulnerabilities | Scans only static code and cannot discover runtime vulnerabilities | Supports real-time scanning |
| Applications supported | Scans only web applications | Supports all kinds of applications | Supports web and mobile applications |
| Process type | Validation process, used to find and fix defects | Verification process, used to find defects | Both validation and verification, combining the best of SAST and DAST |
SAST, DAST and IAST are good security tools that can complement each other, provided the organization has enough budget to support them. Security experts advise using a combination of more than one tool in the environment to address the majority of vulnerabilities.
|
<urn:uuid:11ccba09-24fa-49f0-874b-32342da31f25>
|
CC-MAIN-2022-40
|
https://networkinterview.com/dast-sast-iast-security-testing/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00224.warc.gz
|
en
| 0.907479 | 1,294 | 2.71875 | 3 |
Personal computers are getting faker. The percentage of counterfeit components is growing steadily, if unevenly. Fake components make PCs cheaper. The downside is declining reliability, safety and performance. Is it even possible to keep it real?
A massive crackdown at U.S. and European airports during two weeks in December yielded some 360,000 fake electronic components worth $1.3 billion, including phony Intel chips and about 40 other major brands. The raids were announced last Friday.
Such high-visibility busts mask the difficulty in stopping counterfeit components.
The problem, in a word, is: China. Murky and Byzantine supply chains, lax enforcement of weak intellectual property laws and an outlaw manufacturing culture all contribute to widely available fake PC components mixed together with legitimate parts in the PC components supply chain.
China is the world’s leading manufacturer of, well, just about everything. And it’s also counterfeit central — more fake products of all kinds come from China than from the rest of the world combined.
The growing counterfeits problem is the dark side of the incredible cheapness of PCs these days. The driving force is consumers and businesses who view PCs as commodities to be purchased based on price, rather than quality and reliability. Declining margins force OEMs to seek ever cheaper suppliers, which in turn seek out less expensive components. Often, the cheapest part is the fakest part.
It’s possible to buy a fully counterfeit PC and think it’s original equipment. The Alliance for Gray Market and Counterfeit Abatement (AGMA) says one in ten IT products sold is fake. But even a computer sold legitimately by a brand-name outfit might have a counterfeit motherboard. And even if the motherboard is real, various chips and parts on that board might be fake.
Some counterfeit parts fail spectacularly — such as fake laptop or cell phone batteries that explode, catch fire and send people to the hospital. But most may simply fail prematurely, reducing overall reliability. Fake networking and other equipment may compromise security.
No company — not even the giants — can track down and verify the authenticity of every component. Testing is both extremely expensive and very time consuming. And a typical PC contains thousands of counterfeitable parts.
When we think of counterfeit components, we imagine copycat parts being manufactured. But another problem is re-labeling. When a part is upgraded to a new version, for example, unscrupulous distributors convincingly re-label the old part to look like the new one. Sometimes huge batches of legitimately manufactured but defective parts are purchased, relabeled, then sold without the knowledge of the manufacturer. In other cases, products are modified, then relabeled. For example, a chip might be “overclocked,” then sold as a higher-speed version.
In a few Asian countries, including China itself, counterfeit products are easier to get in some cases than the real deal. In the U.S., the most likely source for fake goods is no-name online stores that undercut everyone else in price. Even smaller brick-and-mortar companies can unknowingly buy fake parts. But phony products, or quasi-legit products with some fake components can show up just about anywhere, including the largest electronics stores.
You can minimize the risk of buying shoddy fake PCs by always buying from a reputable company, rather than an online store you’ve never heard of or an auction site. Be especially wary of online sources with radically lower prices than everyone else. And don’t buy on price alone. Check for reliability ratings, which is ultimately the best evidence for the ability of a company to control its supply chain and use authentic components.
The ugly truth is that you can never be 100% certain that any PC you buy contains all legitimate components. But you can minimize the risk by shopping for reliability, not just low price.
|
<urn:uuid:ce15310b-b4ae-486e-8f11-fca35957ae49>
|
CC-MAIN-2022-40
|
https://www.datamation.com/trends/how-fake-is-your-pc/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00224.warc.gz
|
en
| 0.943989 | 819 | 2.53125 | 3 |
In this increasingly virtual online world, you have to be careful to protect your data. Learn the basics of encoding and encrypting important bits of information, such as passwords, credit card numbers, and even entire messages. Get an overview of what it means to encrypt and decrypt information, as well as some practical examples involving passwords and other data, using PHP's built-in functionality.
Consider how today's world differs from the world of just 20 years ago. Long ago, in the 1980s, encryption was spy stuff — something you read about in a techno-thriller by Tom Clancy. If somebody wanted to keep a bit of information private, he encrypted the data with a password, a pass phrase, or another basic method.
Fast-forward to today and encryption is everywhere. Passwords are stored encrypted in databases. Encrypted tunnels through cyberspace are possible via SSL, SSH, and other technologies — not to mention virtual private networks. Everyday people can and do use Pretty Good Privacy (PGP) to armor their sensitive files and e-mail.
|
<urn:uuid:76ce2c61-994e-44d2-9dd1-77c828de1bd2>
|
CC-MAIN-2022-40
|
https://it-observer.com/php-encryption-common-man.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00224.warc.gz
|
en
| 0.925331 | 220 | 3.34375 | 3 |
The Zeus Virus can do a number of nasty things once it infects a computer, but it really has two major pieces of functionality.
First, it creates a botnet, which is a network of corrupted machines that are covertly controlled by a command and control server under the control of the malware's owner. A botnet allows the owner to collect massive amounts of information or execute large-scale attacks.
Zeus also acts as a financial services Trojan designed to steal banking credentials from the machines it infects. It accomplishes this through website monitoring and keylogging, where the malware recognizes when the user is on a banking website and records the keystrokes used to log in. This means that the Trojan can get around the security in place on these websites, as the keystrokes required for logging in are recorded as the user enters them.
Some forms of this malware also affect mobile devices, attempting to get around two-factor authentication that is gaining popularity in the financial services world.
Originally, the Trojan only affected computers running versions of the Microsoft Windows operating system, but some newer versions of the malware have been found on Symbian, BlackBerry and Android mobile devices.
The creator of the malware released the Zeus source code to the public in 2011, opening the doors for the creation of a number of new, updated versions of the malware. These days, even though the original Zeus malware has been largely neutralized, the Trojan lives on as its components are used (and built upon) in a large number of new and emerging malware.
|
<urn:uuid:1f1076d0-7923-4e90-8622-ceb382def3cf>
|
CC-MAIN-2022-40
|
https://usa.kaspersky.com/resource-center/threats/zeus-virus
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00224.warc.gz
|
en
| 0.950214 | 312 | 2.796875 | 3 |
Stroke holds the No. 4 spot on the list of leading causes of death in Americans. On average, a new one happens every 40 seconds, with approximately 795,000 people experiencing them annually, according to the American Stroke Association. Not every hospital has the proper tools for caring for patients who have a stroke, which only increases people's risks. When it comes to strokes, timing is everything. Unified communications solutions ensure that stroke patients receive the immediate care they need to make a recovery.
Reaching Remote NeurologistsTelemedicine is making a name for itself in not only the health care industry, but in politics as well. The proposed Telehealth Modernization Act of 2015 would standardize telemedicine and encourage its widespread use. One part of telemedicine is telestroke, which ensures that those who experience strokes receive the urgent care they need.
Smaller, more rural communities may not have specialists who cater to those sorts of conditions, which makes it difficult to treat patients correctly. However, through the use of video conferencing, doctors and nurses in remote hospitals can reach neurologists in larger, regional facilities, Prairie Business explained. Hometown physicians can communicate with specialists to assess and treat strokes as soon as the patient is brought into the hospital. It's crucial that these people receive tissue plasminogen activator, also known as clot-busting medication, within the first three hours to stop and reverse stroke symptoms. With this technology, the care process is accelerated.
"This is so cool for the patients. It's using technology for the right reason, not because it's some new toy but for tangible benefit," Ken Flowe, chief medical officer at Rice Memorial Hospital, told the source. "Now that we've invested in the equipment, who knows what services we'll be able to provide?"
EHR Sharing CapabilitiesWhile video conferencing may help neurologists see people, it can't solely help them diagnose stroke patients. Doctors need to be able to see medical records in order to determine proper treatments. Other UC solutions that allow for collaboration can ensure that specialists receive the files they need.
Interoperability is key when dealing with the sharing of electronic health records. While patient files are stored within the hospital's software, they can be shared with a secure cloud service. Medical images, such as MRIs and CT scans, are needed along with patient histories to see the extent of people's health risks. By looking at these documents, specialists can determine the best course of action, especially with regard to tPA administration. Telestroke has improved tPA utilization by 97 percent, a notable success since approximately 40 percent of the U.S. population lives in areas without stroke specialty care.
"Acute stroke care is such a time-sensitive issue, with a small window of treatment and, often, relatively limited access to stroke specialists. Our new telemedicine program addresses all these concerns head-on," Susana Bowling, medical director for neurosciences at Summa Health System, which recently implemented the program, told the source.
UC aids telestroke by allowing neurologists to be available anywhere, anytime. With communication and collaboration tools, stroke patients can receive the care they need almost instantly.
|
<urn:uuid:a9413728-c29f-48a8-9212-af7d65bc1255>
|
CC-MAIN-2022-40
|
https://www.fuze.com/blog/how-are-unified-communications-aiding-telestroke-programs
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00224.warc.gz
|
en
| 0.962323 | 672 | 2.671875 | 3 |
Government agencies are typically regarded as highly credible and trustworthy resources for citizens, and cybercriminals will take any opportunity available to exploit that established trust. As governments increasingly communicate through email, bad actors increasingly impersonate governmental organizations via email for malicious purposes. This is especially common — and dangerous — during times when citizens are on the lookout for authoritative information, such as during elections, states of emergency, tax seasons or other uncertain times. DMARC helps prevent organizations from being spoofed in phishing attacks. And when it comes to sending legitimate email to constituents, DMARC can also improve email deliverability, streamlining vital communication between governments, government employees and citizens.
DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email authentication protocol used to protect an organization’s email channel from spoofs, phishing scams and other email-borne attacks. Established by Google, Yahoo!, Microsoft and others in 2012, DMARC builds on existing email authentication techniques SPF and DKIM to strengthen your domain’s fortifications against fraudulent use. DMARC is the best way for email senders and receivers to determine if a given message is authentically from the sender, and what to do if it is not. It also helps improve your organization’s email deliverability to the inbox.
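In practice, a DMARC policy is published as a DNS TXT record at _dmarc.<domain>. The hypothetical sketch below uses the third-party dnspython package to fetch and parse a domain's policy; the domain shown is a placeholder.

```python
import dns.resolver  # pip install dnspython

def get_dmarc_policy(domain: str) -> dict:
    """Fetch the _dmarc TXT record and parse its tags (p=, rua=, pct=, ...)."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    record = b"".join(answers[0].strings).decode()
    # Example record: "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
    return dict(tag.strip().split("=", 1) for tag in record.split(";") if "=" in tag)

policy = get_dmarc_policy("example.gov")  # placeholder domain
print(policy.get("p"))    # 'none', 'quarantine' or 'reject'
print(policy.get("rua"))  # aggregate-report mailbox, if published
```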
Brand impersonation is rampant in the public sector. Yet, according to a recent Mimecast global survey, 64% of public-sector organizations have not yet deployed DMARC. To increase DMARC adoption, several countries have made DMARC implementation mandatory or recommended for national government organizations, including:
Governmental organizations can’t cut corners when it comes to keeping citizens safe. Constituents trust their local and federal agencies to do everything in their power to prevent cybercrimes. By capitalizing on that established trust, bad actors target potentially vulnerable people who are seeking health insurance, looking for tax assistance, trying to pay bills, registering to vote, renewing a driver’s license, trying to receive unemployment insurance benefits and more. When they’re successful, criminals can steal personal information, conduct fraud, deploy malware or ransomware, influence elections and more. Needless to say, a successful email domain spoof can deeply damage the government’s integrity, authority and trustworthiness.
DMARC helps secure all channels of communication between your agency, partners and constituents. Additionally, DMARC provides the following benefits to governmental organizations:
Online brand protection:
Local, state and federal agencies are common targets for cybercriminals to impersonate for malicious purposes. DMARC protects your brand’s integrity by keeping your organization out of their arsenal of easily spoof-able email domains.
Increased email deliverability:
By deploying DMARC authentication, you signal to email receivers that your organization’s emails are legitimate, ensuring they’re delivered to the inbox rather than blocked or sent to the spam folder.
A published policy that instructs ISPs and other email receivers to deliver, quarantine or delete emails:
With DMARC, you can decide if potential abuses of your email domain are solely reported back to you without further action, quarantined for further review or — the golden standard — automatically rejected.
Greater visibility into cyber threats:
DMARC’s reporting capability enables you to monitor all authorized third parties that send emails on your behalf, alongside those that are not authorized. This helps ensure compliance with security best practices and aids investigations into email security or phishing issues.
In 2016, the UK Revenue & Customs Department stopped over 300 million phishing attempts by implementing DMARC. This statistic is just one of many that underscores the inherent susceptibility of email. 95% of all cyberattacks start with email, and of those email-borne attacks, 91% are phishing scams. Why? The hard secret of email is that because it is so easy to set up, it’s easy for cybercriminals to create a fake email account exploiting your organization’s email domains. Countless reputable government organizations have been exploited by criminals to execute phishing and BEC attacks on citizens and government employees. Because government agencies rely so heavily on credibility and trust, any association with criminal phishing campaigns can be devastating — especially when they could have been prevented by enforcing stricter security standards like DMARC.
In order to achieve maximum return on your DMARC investment, governmental organizations must complete the necessary steps to correctly implement DMARC. Mimecast embarked on our own journey to enforce DMARC across all of our owned domains in 2020, and the project was documented in a three-part blog series for other organizations to use for reference. It’s important to note that while DMARC is a key component of any email security program, it is not a standard that can be deployed, configured, activated and then forgotten. Once you have set your DMARC policy to reject, it’s vital that your organization establishes a program of ongoing monitoring, as the online threat landscape is not static. In addition, most organizations are regularly deploying new, legitimate email senders that need to be managed as part of the organization’s DMARC program.
|
<urn:uuid:4935d97c-f9b9-4eda-9991-87ca99b7fe91>
|
CC-MAIN-2022-40
|
https://www.dmarcanalyzer.com/dmarc-governmental-organizations/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00424.warc.gz
|
en
| 0.928648 | 1,046 | 2.640625 | 3 |
Legacy systems are resilient and scalable, supporting fundamental cost, performance and functionality advantages.
In the technology world, people often become fixated on the latest solution: New systems are good, old systems are not. New does not always mean better, however. The wheel was invented thousands of years ago, and no one has improved on it.
It’s the same case for large-scale technology platforms, such as the IBM i or IBM Z. By keeping these systems up-to-date and modernized, organizations are more satisfied with the capabilities of their systems, according to an IDC white paper sponsored by Rocket Software, entitled “The Quantified Business Benefits of Modernizing IBM Z and IBM i to Spur Innovation.” Modernization also keeps costs low. When compared to replatforming, those who invested the same in new hardware spent 1.7% less on project costs when modernizing on IBM Z and 3.5% less on IBM i. Despite attempts at improvement, legacy systems remain the best choice.
This may seem to fly in the face of conventional wisdom, but in an industry where innovation is measured in months, systems that have been around for decades are still getting the job done every day. Thousands of government organizations around the world, including many federal agencies in the United States, rely on mainframes and other legacy systems to support a seemingly infinite number of transactions and interactions. In some cases, these platforms were first put in place 50 years ago. How is it possible to meet today’s demands with technology that was first conceived before the mobile phone was even a dream?
The most important thing to understand is that some of today’s legacy applications now run on operating systems and hardware that are actually relatively new. In fact, most mainframes in the world today were built in the last two years. The latest IBM z15 mainframe was introduced in September 2019, and the next generation of IBM Power servers are due to be released next year. Legacy systems are not dusty old computers in the back of an unlit data center. The truth is that these machines are extremely powerful and can process up to 19 billion encrypted transactions a day at 99.99999% availability.
The second thing to keep in mind is that a platform can be compared to a house’s foundation. It’s mostly underground and goes unnoticed unless there’s a problem. If the foundation is built correctly, the structure will stand for many years. Technology is the same. If a platform’s underpinnings are strong, it’s possible to build on and maintain the integrity of the original system.
Of course, technology leaders want to be at the cutting edge of new technology, which is why so many CIOs enter organizations with a mission to rip out existing systems and replace them with new ones. However, once they start looking at the actual costs of making a change and the Herculean effort involved, legacy systems start making a lot more sense in terms of cost, performance and functionality.
The IDC white paper backs up this surprising conclusion with strong data. The research firm interviewed technology leaders from Australia, India, New Zealand, the United Kingdom and the U.S. across 31 different industries to get their opinions on legacy systems as well as their experiences with the rip-and-replace model. It turns out that organizations that stuck with existing systems and modernized them lowered annual hardware costs by up to 12.5%, software costs by up to 5.8% and staffing costs by up to 4.6%. The same companies also increased annual revenue by 5% on average. That means government agencies and other large companies can achieve their technology revenue goals without the risk and disruption associated with replatforming.
Of course, cost is only half of the equation. If a system can’t do the job and an organization is unhappy with the performance, it’s not worth any amount of money. It turns out that legacy systems actually perform as well as or better than their would-be replacements. The IDC report revealed that those who modernized in place experienced satisfaction and system performance rates that were 5% higher than those who chose to move off those systems.
The benefits to cost, performance and functionality support the fundamental advantages of legacy systems: they are resilient and scalable. For government agencies, these qualities are vital, especially during uncertain times when citizens rely on government services to distribute important public guidelines and emergency relief funds. Increased use of online tools to access these services can tax any system, but the mainframe was built to be reliable and flexible to respond quickly to meet demand. This means there is little risk of downtime, so citizens can count on being able to use government sites and services when they need them, while keeping their personal data safe.
The crucial role that legacy systems and their reliability play in operational efficiency also extends to remote teams and workloads. While employees work from home, mainframe systems ensure that processes run smoothly so that everyone stays connected.
That’s the essence of modernization right there: Programming languages, interfaces and user experience are constantly evolving even as the foundational system in the background continues on its silent and often unheralded journey.
|
<urn:uuid:a4769885-cadc-4349-ae25-3d46a1ac8c1c>
|
CC-MAIN-2022-40
|
https://gcn.com/cloud-infrastructure/2021/01/rip-and-replace-no-more/315920/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00424.warc.gz
|
en
| 0.963117 | 1,198 | 2.578125 | 3 |
Ethical hacking is when an organization allows a known person or organization to attempt to break into or attack your system. This type of service usually takes the form of a penetration test or pen test. More on pen testing in a minute.
Hacking comes in three flavors: white hat, gray hat, and black hat. These are the “actors” or people that do the hacking.
Black hat hackers are the individuals (or countries) that capture the headlines. These individuals attempt to gain access to networks and computers without permission. Usually, they intend either to do harm, such as causing outages or DDoS (Distributed Denial of Service) attacks, or to steal data, either to sell it for financial gain (such as payment card or healthcare information) or for their own use (such as when intellectual property or military intelligence is stolen).
According to Westley McDuffie, Security Evangelist for IBM and member of the IBM X-Force team, the term “black hat” came from the old Western movies where the villains wore … black cowboy hats.
White hat hackers also attempt to gain access to networks and computers, but their purpose is finding vulnerabilities before they can be exploited by a black hat hacker. And once again, the Western movie analogy holds because it was the “good guys” in the story that wore white hats!
Gray hat hackers straddle the line. Sometimes they act as a white hat hacker, but they are not always pure of heart. While they lack the malicious intent of a black hat hacker, they may break laws or act unethically.
What’s the Purpose of Pen Testing?
Ethical hacking is performed by white hat hackers. Organizations perform ethical hacking or penetration testing to discover weaknesses or vulnerabilities in their security configuration. The most well-known penetration tests occur at the network level. Vulnerability scanners identify open ports, services with known weaknesses, etc., and run against all servers in the network—including IBM i.
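To make the first step of such a scan concrete, here is a minimal Python sketch that simply checks whether a few common service ports answer on a host. The target address and port list are placeholders, and it should only ever be pointed at systems you are explicitly authorized to test.

# Minimal port check -- the first step of a network vulnerability scan.
# Only run this against hosts you are authorized to test.
import socket

target = "192.0.2.10"                 # placeholder address
ports = [21, 22, 23, 80, 443, 445]    # a few commonly probed service ports

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        open_port = s.connect_ex((target, port)) == 0   # 0 means the connection was accepted
        print(f"port {port}: {'open' if open_port else 'closed or filtered'}")

Real vulnerability scanners go much further, fingerprinting the services behind open ports and matching them against databases of known weaknesses, but the basic probe-and-report principle is the same.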
More recently, penetration testing has been performed against the databases residing on the servers—again, this includes IBM i pen testing. One reason for this new focus is that while understanding weaknesses associated with the network is important, it’s only one aspect of security configuration that needs to be tested. Testing to determine if one can gain access to key database files is a very different type of testing than the vulnerability scans performed against the network.
How Is Pen Testing Performed?
Most pen testing against a database uses a method of testing called white box or gray box. White box testing is where the tester (typically a white hat hacker) has access to architecture and implementation documents. In other words, they have knowledge of or are given documentation to show how the server and database are configured. Using this information, the tester attempts to gain access to data using as many access methods as possible—FTP, ODBC, 5250 sign on, etc. Testing is often performed as a regular user rather than a super user to find the vulnerabilities that lurk in configurations and that can be exploited by a typical user.
Black box testing (not to be confused with a black hat hacker—someone with evil intent) is where the tester has no formal documentation of the systems’ or databases’ configuration and attempts access by exploiting well-known vulnerabilities or newly documented zero-day defects by performing random access attempts. (A zero-day defect is a new vulnerability not previously known or documented and for which there may be no fix or the fix has just been released.) While black box tests uncover weakness to these well-known vulnerabilities, they are rarely effective in uncovering the vulnerabilities that are unique to the organization’s specific database configuration and often require a great deal of trial and error to show whether the organization has vulnerabilities.
Gray box testing uses both techniques. The tester has access to configuration documentation, but also uses some random testing techniques or attempts to exploit a well-known vulnerability.
In no case is the intent of white box or gray box penetration testing evil or unethical nor is the intent one of destruction. Remember, white box and gray box testing are performed by a white hat hacker—or a good guy (or gal!).
Who Needs Pen Testing?
The point of penetration testing is to help an organization discover vulnerabilities, so they can be remediated before they are exploited by someone with evil intent—a black hat hacker. For this reason, numerous laws and regulations are now requiring penetration tests, not just at the network level but also against the database. The Payment Card Industry’s Data Security Standard (PCI DSS), the New York State Cybersecurity Law, and the 2018 Singapore Cybersecurity Act, to name a few, all require database penetration testing.
The Professional Security Services team at HelpSystems performs penetration testing for IBM i and has done so for several years. It’s an effective means of showing our clients how the vulnerabilities documented in our Risk Assessment can actually be exploited. And it’s been rare that we haven’t been able to gain access to IBM i and to data in ways that were not expected. These clients now have the opportunity to resolve those issues prior to them being exploited.
If you'd like to learn more about how pen testing can uncover vulnerabilities before an attacker finds them, HelpSystems can assist.
|
<urn:uuid:bd314d31-d815-498e-aed7-015019c0c24f>
|
CC-MAIN-2022-40
|
https://www.helpsystems.com/blog/ethical-hacking-and-pen-testing-what-it-and-who-needs-it
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00424.warc.gz
|
en
| 0.935327 | 1,094 | 3.140625 | 3 |
First, let’s briefly take a look at what exactly is a “RAT.” Later, we’ll also explore why so many RATs have been discovered already in 2018, and we’ll explain how you can protect yourself from these threats.
What is a RAT?
RAT is an acronym that stands for “remote administration tool” or “remote access Trojan.” While there are plenty of perfectly legitimate software utilities for accessing another computer (for example, Apple Remote Desktop), the term RAT is generally reserved for software that’s designed to be installed and used without the computer user’s knowledge, often with the intention of spying or stealing resources.
RATs commonly include features that allow a remote attacker to control or spy on your computer. They may allow an attacker to do such things as observe your screen, take screenshots, activate your camera or microphone, log everything you type including your passwords, copy files to or from your computer, or execute other commands, all in the background and without your knowledge.
What’s the story behind OSX/Coldroot?
In preparation for a talk at an upcoming security conference, Wardle searched VirusTotal—a site that uses dozens of anti-virus engines to analyze individual files that users upload—for a sample of malware that attempts to directly modify a macOS database file (TCC.db) to grant itself special permissions.
One sample that came up in the search results was undetected by all 60 of VirusTotal’s anti-virus engines. However, Wardle felt that a variety of indicators pointed to the strong probability that the software was malicious. The following were indicators that were relatively easy to discover by a casual observer:
- The software’s name, com.apple.audio.driver2.app, is suspicious because it seems to be trying to disguise itself as Apple software.
- Although the sample is actually a Mac app, its icon is that of a text document, which seems to imply that the app’s creator may have been trying to trick users into double-clicking on it. (Incidentally, this has been a problem for decades, and Apple still hides the .app extension by default in the Finder, making it more difficult for users to detect a misleading icon.)
- If the app is opened, it prompts for your password without any observable behavior afterward, which is unusual. (This is how OSX/Coldroot gains the ability to run again whenever the Mac is restarted.)
OSX/Coldroot prompts for your password. Image: Wardle
There were also a number of other indicators that the software was probably malicious, but that would only be noticeable to a skilled researcher:
- The software attempts to directly modify TCC.db, an Accessibility database that tracks special permissions that the user has granted to apps. No third-party software should ever attempt to directly alter that database. (OSX/Coldroot attempts this hack for the purpose of keylogging.)
- The software is not digitally signed, as legitimate software from Apple and most reputable third-party developers usually is. (See our recent article about OSX/Shlayer malware for a brief explanation of code signing.)
- The sample is packed (compressed) with UPX, a common technique for attempting to hide malicious code from anti-virus scanners. It’s very rare for legitimate Mac software to employ a code-packing algorithm like UPX.
Once the malware is installed, it logs the user’s keystrokes to a file (misleadingly named “adobe_logs.log”). It also phones home to a command-and-control server which seems to be located in Australia, and it copies itself into a hidden folder and installs a LaunchDaemon to allow itself to run again every time the Mac is restarted.
Coldroot’s keylogging in action, as revealed by Wardle.
The RAT also has the capability of performing functions for the attacker such as:
- listing, renaming, or deleting files and folders
- listing the apps that are currently running
- launching or quitting apps
- downloading or uploading files
- determining which window is currently in the foreground
- streaming continuous screenshots to the attacker
- shutting down the computer
Wardle discovered that the developer of Coldroot had uploaded a 27-minute demonstration video in 2016. (The video was removed from YouTube after Wardle found it, but Wardle has posted a mirror of it in his write-up.) The video shows that Coldroot was evidently developed with cross-platform compatibility, capable of controlling infected Windows and Linux systems in addition to Macs.
Is my Mac infected with OSX/Coldroot?
Following are the main indicators of compromise (IoCs). An infected Mac may have components located at the following paths:
Network administrators can search Web traffic logs for attempts to access the IP address 184.108.40.206 on port 80.
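For administrators whose proxy or firewall logs are plain text, one quick way to act on that indicator is to scan the logs for the command-and-control address. The log path and line format below are assumptions for illustration; adapt them to whatever your logging stack actually produces.

# Flag log lines that mention the reported Coldroot command-and-control address.
# The log file location and format are placeholders -- adjust for your environment.
C2_ADDRESS = "184.108.40.206"

with open("/var/log/proxy/access.log") as log:   # hypothetical log location
    for line_number, line in enumerate(log, start=1):
        if C2_ADDRESS in line:
            print(f"possible Coldroot beacon at line {line_number}: {line.strip()}")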
Why so many RATs in a short period of time?
One might wonder: why were three different Mac RATs—CrossRAT, EvilOSX, and Coldroot—all discovered within a couple months of each other? The answer seems to be simply coincidence. In fact, variants of all three of these RATs were developed in 2017 or earlier, entirely independent of one another.
Coldroot, according to comments in the YouTube video, was evidently originally released in 2017, and Wardle found a source code repository for an early version that was under development in March 2016. WHOIS records indicate that Mohamed Osama (who goes by the name Coldzer0) evidently registered the domain for Coldroot’s homepage in July 2015. Thus, Coldroot is probably the oldest of the three RATs.
CrossRAT is cross-platform Java software, designed to infect Windows and Linux systems in addition to Macs. It is associated with Dark Caracal, a global cyber-espionage campaign that is believed to have nation-state backing. A joint report by Lookout and the Electronic Frontier Foundation indicates that the “version 0.1” sample they found was last modified in March 2017.
EvilOSX was first discovered almost a year ago, in May 2017. The February 2018 variant may have been noticed in part because of Wardle’s publication of his findings about OSX/Coldroot, which might have prompted others to look further into Mac RATs. Interestingly, in spite of including features that are overtly malicious in nature, EvilOSX is developed as open-source software that’s freely available on GitHub, a popular software development repository. The developer goes by the name Marten4n6.
How can Mac users protect themselves from RATs?
Keeping your Mac updated with the latest version of macOS, and installing Apple security updates as soon as they’re available in the App Store, are important steps in keeping your Mac protected from a variety of infections.
Wardle notes that the sample of OSX/Coldroot he found will not run on the latest version of macOS High Sierra (which he speculates may be due to a bug related to UPX packing). He also points out that the malware’s attempt to directly modify the system file TCC.db is thwarted by the operating system’s System Integrity Protection (SIP) feature if the user is running macOS Sierra or later.
Since RATs are designed to run secretively without alerting a user to their presence, and may be installed through any number of methods including as a secondary infection, one of the best ways to protect your Mac from RATs is to use anti-virus software with real-time scanning together with an outbound firewall. (The firewall that’s built into macOS only protects against certain inbound threats, but will not prevent malware on your system from phoning home to an attacker.)
Intego’s Mac Premium Bundle X9 includes both VirusBarrier X9 and NetBarrier X9, the best commercial anti-virus and firewall software available for Mac. VirusBarrier detects and eradicates OSX/Coldroot, CrossRAT (Java/LaunchAgent), and OSX/EvilOSX.
Editor’s note: This story was updated March 9, 2018 for accuracy and comprehensiveness.
Have something to say about this story? Share your comments below!
|
<urn:uuid:d5c2941e-9cd3-4f55-ae3d-dba7fc7ff0c2>
|
CC-MAIN-2022-40
|
https://www.intego.com/mac-security-blog/osxcoldroot-and-the-rat-invasion/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00424.warc.gz
|
en
| 0.951147 | 1,820 | 3.171875 | 3 |
If Filename (F)= Bob.txt AND If User is Member of administrators (MA), Users (U), Power Users (PU)
If this expression were evaluated from left to right, the results would not match our expectations:
If (((F and MA) or U) or PU)
Instead, EFT evaluates the conditional statement first as its own atomic unit and then evaluates the resulting expression from left to right:
If (F and (MA or U or PU))
This allows you to create expressions that contain order-of-precedence grouping without having to use parentheses. The evaluative OR statement is hidden inside the conditional statement, as long as that conditional statement can evaluate against multiple criteria.
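A small sketch outside of EFT itself may help illustrate the difference between the two evaluation orders. Treating the membership Condition as its own atomic unit is equivalent to grouping its criteria before combining them with the file name check:

# Boolean flags standing in for the criteria in the example above.
F  = False   # file name matches Bob.txt?  (here it does not)
MA = False   # member of administrators?
U  = True    # member of Users?
PU = False   # member of Power Users?

left_to_right    = ((F and MA) or U) or PU   # naive left-to-right reading: True, fires even though the file name does not match
atomic_condition = F and (MA or U or PU)     # how EFT evaluates it: False, correctly does not fire

print(left_to_right, atomic_condition)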
Only the following Conditions can evaluate against multiple criteria (strings):
To define multiple criteria for a Condition
Double-click a Condition in the list to add it to the Rule Builder. (To learn more about available conditions, refer to Conditions.)
If you are adding an additional Condition, highlight the existing Condition in the Rule Builder, then in the Conditions list, double-click the Condition you want to add. The Condition appends to the existing one and adds a logical operand (AND/OR).
Click the logical operand to change it.
You can insert multiple Conditions. That is, you can have Condition 1 AND Condition 2 OR Condition 3.
If you need to use more complex criteria using AND and OR, you can use wildcard logic to create any logic that wildcards support. For example, if you add the File Name Condition to the Rule Builder, you can then define the path mask using complex logic with wildcards.
|
<urn:uuid:04ff6dec-6cef-4241-9a9c-4b8855aa7431>
|
CC-MAIN-2022-40
|
https://hstechdocs.helpsystems.com/manuals/globalscape/eft8-0-7/content/mergedprojects/eventrules/compound_conditional_statement.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00424.warc.gz
|
en
| 0.879466 | 349 | 2.609375 | 3 |
Researchers over at the Stevens Institute of Technology in New Jersey have created a drone helicopter capable of hacking into poorly protected Wi-Fi networks and using them to turn the systems behind them into a botnet.
The device, creepily named SkyNET, is a DIY drone helicopter which costs less than $600 to build and can be used by hackers to create an army of botnets without raising an alarm.
According to the Naked Security blog run by security firm Sophos, Theodore Reed, Joseph Geis and Sven Dietrich used an easily available Parrot AR.Drone remote-controlled [PDF] quadricopter and a few hundred dollars to create a device capable of discovering wireless networks with poor security and infecting the computers attached to the compromised networks.
The SkyNET botnet drone was created after the researchers set out to discover new ways to infect computers and turn them into zombie botnets.
“Furthermore, a drone might be traced back to the location where the botmaster plans to retrieve his device - one wonders if he would pose as a park-goer playing with an expensive toy. In addition, let's not forget, unlike just about any other form of computer attack this is one which simply won't work when the weather is too wet or windy,” Sophos said.
|
<urn:uuid:19d5c5c7-798f-4199-a5df-b211f91d84eb>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/2011/09/12/researchers-develop-drone-helicopter-skynet-airborne-wi-fi-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00424.warc.gz
|
en
| 0.945831 | 267 | 2.59375 | 3 |
In 2003 two global Internet attacks took place that could be called the biggest in the history of the Internet. The Internet worm Slammer laid the foundation for the attacks, and used a vulnerability in the MS SQL Server to spread. Slammer was the first classic fileless worm, which fully illustrated the capabilities of a flash-worm – capabilities which had been foreseen several years before.
On January 25th, 2003, within the space of a few minutes, the worm infected hundreds of thousands of computers throughout the world, and increased network traffic to the point where several national segments of the Internet crashed. Experts estimate that traffic increased from 40% – 80% in a variety of networks. The worm attacked computers through ports 1433 and 1434 and on penetrating machines did not copy itself on any disk, but simply remained in computer memory. If we analyse the dynamics of the epidemic, we can assert that the worm originated in the Far East.
The second, more important epidemic was caused by the Lovesan worm, which appeared in August 2003. The worm demonstrated just how vulnerable Windows is. Just as Slammer did, Lovesan exploited a vulnerability in Windows in order to replicate itself. The difference was that Lovesan used a loophole in the RPC DCOM service working under Windows 2000/XP. This led to almost every Internet user being attacked by the worm.
As for viruses penetrating new platforms and applications, the year was surprisingly quiet. The only news was the discovery, in the wild, of MBP.Kynel, by Kaspersky Labs. This virus infects MapInfo documents and is written in MapBasic. The MBP.Kynel virus was undoubtedly written by a Russian.
2003 was the year of ceaseless epidemics caused by email worms. Ganda and Avron were first detected in January. The former was written in Sweden and is still one of the most widespread email worms in Scandinavia, despite the fact that the Swedish police arrested the author of the worm at the end of March.
Avron was the first worm to be created in the former USSR capable of causing a significant worldwide epidemic. The source code for the worm was published on the Internet and this has led to the appearance of a number of less effective versions.
Another important event in 2003 was the appearance of the first Sobig worm in January. Worms from this family all caused significant virus outbreaks, but it was version ‘f’ which broke all records, becoming the most widely distributed worm in network traffic in Internet history. At the peak of the epidemic, Sobig.f, which was first detected in August, could be found in every 20th email message. The virus writers who created the Sobig family were aiming to create a network of infected machines in order to conduct DoS attacks on arbitrarily selected sites and also to use the network for spam attacks.
The Tanatos.b email worm was also a notable event in virology. The first version of Tanatos was written in the middle of 2002, but version ‘b’ appeared only a year later. The worm exploited the well-known IFRAME loophole in MS Outlook to automatically launch itself from infected messages. Tanatos caused one of the most significant email epidemics of 2003, coming second to that caused by Sobig.f, which probably holds the record for the most machines infected by an email worm.
Worms from the Lentin family continued to appear. All these worms were written in India by a local hacker group as part of the ‘virtual war’ between Indian and Pakistani hackers. The most widespread versions were ‘m’ and ‘o’, where the virus replicated in the form of a ZIP archive file attached to infected messages.
Russian virus writers remained active; the second worm from the former USSR, which also caused a global epidemic, was Mimail. The worm used the latest vulnerability in Internet Explorer to activate itself. The vulnerability allowed binary code to be extracted from HTML files and executed. It was first exploited in Russia in May 2003 (Trojan.Win32.StartPage.l). Following this, the vulnerability was used by the Mimail family and several other Trojan programs. The authors of the Mimail worm published the source code on the Internet, which led to the appearance of several new varieties of the worm in November 2003, written by other virus writers.
September was the month of Swen. I-Worm.Swen, masquerading as a patch from Microsoft, managed to infect several hundred thousand computers throughout the world and to date remains one of the most widespread email worms. The author of the virus exploited frightened users who were still nervous after the recent Lovesan and Sobig.f epidemics.
A recent significant epidemic was caused by Sober, a relatively simple mail worm written by a German; it is an imitation of the year's leader, Sobig.f.
In 2002, the trend was towards an increase in the number of backdoor and spy Trojan programs and this continued in 2003. In this category, Backdoor.Agobot and Afcore were most notable. There are now more than 40 varieties of Agobot in existence, since the author of the original version created a network of websites and IRC channels where anyone who wanted could, for a fee starting from $150, become the owner of an ‘exclusive’ version of Backdoor, which would be created in accordance with the client’s wishes.
Afcore is slightly less widespread. However, in order to mask its presence in the system, it uses an unusual method; it places itself in additional file systems of the NTFS systems, i.e. in the catalogue stream, not the file streams.
A new and potentially dangerous trend was identified at the end of 2003: a new type of Trojan, TrojanProxy. This was the first and clearest sign of virus writers and spammers joining forces. Spammers began using machines infected by such Trojan programs for mass spam mailings. It is also clear that spammers participated in a number of epidemics, as malicious programs were spread using spamming technology.
Internet worms constituted the second most active class of viruses in 2003; specifically I-Worms which replicated by seizing passwords to remote network resources. As a rule, such worms are based on IRC clients, and scan the addresses of IRC users. They then attempt to penetrate computers using the NetBIOS protocol and port 445. One of the most notable viruses in this class was the Randon family of Internet worms.
Throughout the year Internet worms remained the dominant type of malicious software.
Viruses, namely macro viruses such as Macro.Word97.Saver came in second. However, Trojan programs overtook viruses in the autumn, and this trend continues through today.
|
<urn:uuid:64dc94fe-f6dd-49f1-8fe0-2d93a3752f98>
|
CC-MAIN-2022-40
|
https://encyclopedia.kaspersky.com/knowledge/year-2003/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00424.warc.gz
|
en
| 0.97163 | 1,390 | 3.0625 | 3 |
Indeed, many might argue that we stand on the brink of a new age of renaissance. Except that this time the impact could be even more profound.
copyright by thetechportal.com
Instead of an age that brought things like mass production into fashion, we are looking at an age of robots: machines that need no instructions and that can predict and see to our desires.
Even as we speak, machines and artificial intelligence are becoming even more capable, doing things that used to be the sole domain of humans. However, would these systems really be able to breach all frontiers and cross the bridge of what makes us humans?
How do you differentiate between a robot and a human anyway? At the risk of venturing into philosophy, one might say that creativity, the ability to come up with something brand new and original, is what sets the human race apart from machines. And indeed, despite all the advances that we have made, can a machine create poetry? Can artificial intelligence produce a fresh story? Can a robot be a journalist? Last December, the Nomura Research Institute published a report in conjunction with Oxford University. The report stated that within a decade, 49 percent of jobs in Japan will be performed by an AI. This included jobs like security guards, bank tellers, cleaners, assembly workers and so on. Among the jobs that were said to survive this culling were doctors, critics, lawyers, photographers, writers and so on. Interestingly, the AI is taking the same learning curve that humans take: unskilled jobs first, followed by skilled ones.
News Flash made by Machines
So writing is one of the jobs that have been listed as safe. And isn't journalism just an extension of writing? Well, that may be true, but it is no reason for us to preen our feathers and celebrate. Notwithstanding what the report says, robots are already encroaching upon journalism as well, and given a few years, we might find ourselves changing our minds about whether or not they can replace human journalists. See, journalism is a job that requires skills like quick response time, creativity, the ability to sift through data and so on. An AI would arguably be better than a human at most of them. For instance, the Wordsmith software used by the Associated Press (AP) can automatically generate news stories pertaining to college sports. AP is also using the AI to generate quarterly earnings reports of corporations. And already, it is churning out up to 10 times the number of reports that human reporters were earlier generating.
Time to investigate stories
So yeah, as robots get better, we can expect them to take over journalistic duties like preparing reports and press releases. On the other hand, jobs that require investigation and deep analysis, like writing editorials and doing profile stories, will remain the domain of humans, at least for a few decades. It will be good for journalism in a way too, as reporters will be freed up from the more mundane, data-crunching jobs and will be able to focus on the core creative aspect of their work.
|
<urn:uuid:996e55f4-0c60-4210-9627-b78bc35800c6>
|
CC-MAIN-2022-40
|
https://swisscognitive.ch/2017/05/22/how-ai-will-change-journalism/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00424.warc.gz
|
en
| 0.96499 | 636 | 2.734375 | 3 |
6 Telehealth Privacy and Security Essentials
HIPAA covers telehealth but does this make it safe? Learn the measures that ensure patient safety and privacy while using a virtual doctor visit program.
Over the past few years, the rise of telehealth in healthcare has transformed patient-doctor interactions. Nonetheless, the privacy and security of protected health information (PHI) remain a big question. These concerns make sense because new technology often comes with new challenges.
Luckily, every problem comes with a solution. Thus, making a few smart choices can work wonders to keep the patient data protected.
What is Telehealth? Breaking the Barrier and Bridging the Gap
The Health Resources and Services Administration (HRSA) defines telehealth as “the use of electronic information and telecommunications technologies to support and promote long-distance clinical health care, patient, and professional health-related education, public health and health administration.”
These technologies include:
- The internet.
- Store-and-forward imaging.
- Streaming media.
- Terrestrial and wireless communications.
Simply put, telehealth provides a dynamic structure that allows patient-doctor interaction even when they are a thousand miles apart. It embraces health information, health care, and education. Note that the scope of telehealth goes beyond the patient-doctor interaction. Thus, it also includes other members of the healthcare system—for example, nurses, radiology, pharmacy, and psychology.
The types of telehealth include:
- Teleconsultations. Professional consultations between a physician and specialist who are far apart.
- Remote patient monitoring (RPM). Continuous monitoring of a patient by tracking the sensors on a device the patient is wearing.
- Intraoperative monitoring (IOM). Expert monitoring of a surgical procedure, especially during complex surgery. For example, brain and spinal cord surgery.
- Telehomecare (THC). A technique that allows caregivers to reassure a patient with some chronic conditions. For example, dementia.
- Diagnosis and treatment at the point of care. This technique eliminates the need for a direct visit to a clinic or hospital. In essence, the patient gets tested or treated at or near the place where they live.
Common Misconceptions about Telehealth
- It is not a single service. Instead, it is a broad range of services. Telehealth involves the use of information technologies, devices, and professionals. It may be categorized depending on the specialty—for example, teleradiology, telepharmacy, telepsychology, teletriage, teleophthalmology, and telenursing.
- It is not Health Information Technology (HIT) though they are related to each other. HIT is primarily concerned with EHRs, PHRs, and e-prescribing. Moreover, it may also include health apps and online health communities. But, the concept of telehealth focuses on the delivery of general or professional health information and not on the particular technologies involved.
- It is not telemedicine, even though people use these terms interchangeably. Telemedicine uses technology only to monitor and diagnose or treat a health condition. In comparison, telehealth includes diagnosis and management, education, and other related healthcare fields.
Protecting Health Data in Telehealth: 6 Solutions that Never Fail
To ensure the health data is safe and integrated, telehealth systems should comply with the HIPAA guidelines. For this purpose, organizations need:
1. Business Associate Agreement (BAA)
A BAA is a written contract between a covered entity and a business associate. It establishes the permitted uses and disclosures of PHI, and it prevents the business associate from using or disclosing PHI beyond what is permitted. It also requires the business associate to take appropriate measures to prevent unauthorized use or disclosure of the information.
If collaborating with a business associate, a BAA is required. This is the first step to getting HIPAA compliance for telehealth systems.
Note: A “business associate” is a person or entity that works with or on behalf of the covered entity and, notably, can have access to PHI. A business associate can also be a subcontractor that creates, receives, maintains, or transmits protected health information on behalf of another business associate.
2. Transport Encryption
Encryption, a must-have for data security, converts sensitive information into a meaningless/undecipherable stream of seemingly random data. That way, it prevents the information from falling into the wrong hands. To decode the encrypted information, one needs an encryption key available only to authorized persons. Hackers can access the transmission en route to the destination, especially over public Wi-Fi. If the information is not encrypted, the ePHI itself is available. When following a transport encryption protocol, data confidentiality is maintained. That includes audio and video files.
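As a simple illustration of transport encryption from the client side, the sketch below insists on a verified TLS connection before any data is exchanged. The host name is a placeholder, and a real telehealth platform would rely on its vetted networking stack and a HIPAA-eligible service rather than hand-rolled sockets.

# Illustrative TLS client check -- the host name is a placeholder.
import socket
import ssl

host = "telehealth.example.org"
context = ssl.create_default_context()        # verifies the server certificate and host name

with socket.create_connection((host, 443), timeout=5) as raw_socket:
    with context.wrap_socket(raw_socket, server_hostname=host) as tls_socket:
        print(tls_socket.version())           # e.g. "TLSv1.3" -- traffic is now encrypted in transit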
3. Data Storage on Devices
Storage encryption can also encode backed-up and archived data on devices. It makes the information unusable to the hackers even when they gain access to storage media.
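As a small illustration of encryption at rest, the sketch below uses the widely available Python cryptography package to encrypt a record before it is written to storage. It is only a sketch: in practice the value lies in key management, access controls, and choosing a vetted, HIPAA-eligible product, not in a few lines of code.

# Illustrative encryption-at-rest sketch using the "cryptography" package.
# Key management (generation, storage, rotation) is the hard part in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in production, keep this in a key management system
cipher = Fernet(key)

record = b"patient: Jane Doe; visit: 2021-03-01; note: follow-up in 2 weeks"   # fictitious record
encrypted = cipher.encrypt(record)    # store this ciphertext, never the plaintext

# Later, an authorized service holding the key can recover the record:
assert cipher.decrypt(encrypted) == record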
4. Video Data Storage
There are multiple options when storing health data that include everything from a flash drive to cloud storage. In all cases, choose a HIPAA-compliant product or service. Different manufacturers provide encrypted flash drives and external hard drives. Likewise, others offer cloud-based storage systems and databases that can be used for PHI. Two key factors differentiate great options from good ones. These are storage performance and storage capacity. It is critically important to assess the organization’s needs before selecting an option.
Both the covered entity and the business associate must have provisions for access and audit controls. They should also regularly update their systems.
5 & 6 Access controls for stored and active video
Videoconferencing is the hallmark of telehealth. Because videos may contain audio and visual PHI, they should not be widely accessible to employees. Physicians, in most cases, need to be able to access the stored data. Other entities such as providers or insurance payers can get the access on a need-to-know basis. Audit trails and restricted access are required to control and monitor access.
Want more tips on keeping your ePHI protected?
Talk to the experts at LuxSci for a Free Consultation.
|
<urn:uuid:c6c48be8-1478-4248-9d1b-99b458aad9e7>
|
CC-MAIN-2022-40
|
https://luxsci.com/blog/telehealth-essentials-privacy-security.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00424.warc.gz
|
en
| 0.904536 | 1,282 | 3.0625 | 3 |
There are a lot of factors that go into making sure a business is running smoothly, and one of the most important ones is technology. With the right technological solutions, a company can save money, become more productive, and offer better services to their customers. In recent years, virtualization has become a popular option for many businesses looking to improve their technology infrastructure.
What is virtualization?
Virtualization is the process of creating a virtual machine (VM) or a simulated version of a computer system that works independently from the physical hardware it is running on. Separating the computer system from the machine allows users to run multiple operating systems (OSes) or applications on the same physical machine.
In the past, businesses needed individual servers for each application they wanted to run. For example, if a company wanted to run an email server, a web server, and a database, they would need three separate machines, each running its own OS and using dedicated IT resources.
With virtualization, all three of these applications can be run on the same physical server, each in its own virtual environment. This eliminates the need for dedicated IT resources for each individual application and allows businesses to better utilize their existing hardware.
Why do businesses need virtualization?
Virtualization offers a host of business benefits. In particular, virtualization helps you:
Lower hardware and software costs
Servers can be the most expensive component of a company’s IT infrastructure. By consolidating multiple servers into one physical machine, you won't have to invest in as much hardware. You also won’t have to spend a lot on maintaining and upgrading multiple machines. In addition, virtualization can help you get more out of your existing software licenses by running multiple applications on the same server.
Save energy and space
A physical server uses a lot of energy to stay operational and avoid overheating. It typically needs fans, air conditioning, and other cooling systems to keep running properly. By consolidating multiple servers into one machine, you can reduce your energy consumption and lower your carbon footprint. You’ll also save on physical space since you won't need as many servers.
Eliminate incompatibility issues
Different applications often require different OSes and hardware. For example, your email server may be set up specifically for Windows, while your web server may run only on Linux. With virtualization, you can isolate these applications in separate VMs and run different OSes on a single host, preventing them from interfering with each other and causing compatibility issues.
Security is always a concern for businesses, especially when it comes to sensitive data. By creating multiple VMs, you can segment your data and limit access to only the people who need it. This reduces the risk of data breaches and helps you comply with industry-specific regulations. Additionally, it prevents malicious software from spreading between VMs and infecting your entire network.
Downtime is costly for businesses, and it can be caused by a variety of factors, from hardware failures to software updates. With virtualization, you can keep critical applications up and running while you update or troubleshoot other VMs.
Ensure agile disaster recovery
In the event of a disaster, you need to be able to recover your data and get your business up and running as quickly as possible. Virtualization can help you shorten recovery response times by allowing you to replicate your entire IT infrastructure in a remote location. This way, if your primary data center is damaged or destroyed, you can quickly spin up your VMs in the cloud and keep your business running.
Improve productivity and bottom line
With fewer physical servers, you’ll have less hardware to manage and maintain. Routine tasks like patching and backups can also be automated, freeing up your IT staff to work on more strategic projects. Additionally, virtualization can help you improve your utilization rates and get more out of your existing hardware, which can lead to significant cost savings and a higher return on investment for your business.
Virtualization is a powerful tool that can help you lower costs, secure data, and boost productivity. If you’re not already using it, now is the time to start exploring how it can benefit your business. Get in touch with our experts at Kortek Solutions for more information.
|
<urn:uuid:d9fc0c2c-f6ba-41e4-afaa-76893f4c6bf7>
|
CC-MAIN-2022-40
|
https://www.korteksolutions.com/2022/07/how-can-virtualization-benefit-your-company/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00424.warc.gz
|
en
| 0.933623 | 878 | 2.796875 | 3 |
Modern applications include baseline security configurations before they are deployed to an operational production environment. These baseline configurations also include default security controls that define authentication mechanisms, user registration, and component update functions. Default settings in the application stack often expose security vulnerabilities since malicious actors leverage known security controls to gain access to the system. This form of exploit is known as a security misconfiguration attack and is attributed as one of the leading causes of most modern cyber attacks.
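As a small, hypothetical illustration of catching insecure defaults before deployment, the sketch below checks an application's settings for a few classic misconfigurations. The setting names and values are invented for the example and are not taken from any particular framework.

# Hypothetical pre-deployment check for a few classic insecure defaults.
settings = {
    "debug": True,                 # verbose errors can leak internals in production
    "admin_password": "admin",     # unchanged default credential
    "allowed_hosts": ["*"],        # overly permissive host handling
}

findings = []
if settings.get("debug"):
    findings.append("debug mode enabled")
if settings.get("admin_password") in {"admin", "password", "changeme"}:
    findings.append("default admin credential still in place")
if "*" in settings.get("allowed_hosts", []):
    findings.append("wildcard in allowed_hosts")

print(findings or "no obvious misconfigurations found")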
This guide explores the security misconfiguration vulnerability, common misconfigurations, severity level, and effective prevention techniques.
|
<urn:uuid:f72b854c-a933-44e4-880b-2510238b4df7>
|
CC-MAIN-2022-40
|
https://crashtest-security.com/prevent-security-misconfiguration/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00624.warc.gz
|
en
| 0.932997 | 113 | 2.53125 | 3 |
What is MAC Address
In computer networking, a Media Access Control (MAC) address is as important as an IP address. The two work hand in hand for the delivery of information across network elements.
Now that we know the MAC address is a key element in the networking world, let’s understand what a MAC address is, its format, its length in bits, and its components –
The MAC address is used by the Media Access Control sublayer of the Data-Link Layer (DLC) of telecommunication protocols.
Related – Control Plane vs Data Plane
Every NIC (also called a LAN card) has a hardware address that’s known as a MAC, for Media Access Control. The MAC address is sometimes referred to as a networking hardware address, the burned-in address (BIA), or the physical address.
A MAC address is given to a network adapter when it is manufactured. It is hardwired or hard-coded onto your computer’s network interface card (NIC) and is unique to it.
ARP (Address Resolution Protocol) translates an IP address into a MAC address, letting data addressed to an IP address reach the actual piece of computer hardware behind it.
HOW MANY MAC ADDRESSES CAN THERE BE IN THE WORLD?
The original IEEE 802 MAC address comes from the original Xerox Ethernet addressing scheme. This 48-bit address space contains potentially 281,474,976,710,656 possible MAC addresses. All three numbering systems (MAC-48, EUI-48, and EUI-64) use the same format and differ only in the length of the identifier.
MAC Address Format –
Let’s take a closer look at the MAC address format.
MAC Address Bits/ MAC Address Length
MAC addresses are 12-digit hexadecimal numbers (48 bits in length). By convention, they are usually written in one of the following formats:
MM:MM:MM:SS:SS:SS or MMMM-MMSS-SSSS format
The first half (24 bits) of a MAC address contains the ID number of the adapter manufacturer. These IDs are regulated by an Internet standards body. The second half (24 more bits) of a MAC address represents the serial number assigned to the adapter by the manufacturer.
For example, consider a network adapter with the MAC address “00-A0-C9-01-23-45.” The OUI for the manufacturer of this adapter is the first three octets—”00-A0-C9″ – in this case, Intel Corporation. Here are the OUIs for some other well-known manufacturers.
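As a quick illustration of splitting an address into those two halves, the short Python snippet below normalizes a few common notations and pulls out the first three octets (the OUI); the sample address is the Intel example used above.

# Split a MAC address into its OUI (manufacturer) half and serial-number half.
def split_mac(mac: str):
    digits = "".join(ch for ch in mac if ch.isalnum()).upper()   # drop :, -, . separators
    if len(digits) != 12:
        raise ValueError("a MAC address is 12 hexadecimal digits")
    octets = [digits[i:i + 2] for i in range(0, 12, 2)]
    return "-".join(octets[:3]), "-".join(octets[3:])

oui, serial = split_mac("00-A0-C9-01-23-45")
print(oui)     # 00-A0-C9 -> block assigned to the adapter manufacturer
print(serial)  # 01-23-45 -> serial number assigned by that manufacturer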
FINDING A MAC ADDRESS –
The table below summarises options for finding a computer’s MAC address.
|Windows 95 and newer||winipcfg|
|Windows NT and newer||ipconfig /all|
|Linux and some UNIX||ifconfig -a|
|Macintosh with Open Transport||TCP/IP Control Panel – Info or User Mode/Advanced|
|Macintosh with Mac TCP||TCP/IP Control Panel – Ethernet Icon|
MAC address usage –
MAC addresses are used to send Ethernet frames between two stations in the same local area network. Each station has a unique MAC address that is used to identify who the sender is (source address) and who the receiver is (destination address). But Ethernet frames can’t travel between networks. DHCP also usually relies on MAC addresses to manage the unique assignment of IP addresses to devices.
Mac Address vs IP Address –
- MAC addressing works at the data link layer (layer 2), while IP addressing functions at the network layer (layer 3).
- The MAC address generally remains fixed and follows the network device, but the IP address changes as the network device moves from one network to another.
- IP networks maintain a mapping (association) between the IP address of a device and its MAC address. This mapping is known as the ARP cache or ARP table. ARP, the Address Resolution Protocol, supports the logic for obtaining this mapping and keeping the cache up to date.
- DHCP also usually relies on MAC addresses to manage the unique assignment of IP addresses to devices.
- IP addresses are associated with TCP/IP, while MAC addresses are linked to the hardware of network adapters.
Related- Static MAC Entry on the Switch
|
<urn:uuid:ced995c4-67c8-4f3b-975a-e8611b6024ff>
|
CC-MAIN-2022-40
|
https://ipwithease.com/what-is-a-mac-address/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00624.warc.gz
|
en
| 0.887355 | 947 | 3.765625 | 4 |
A new study by French researchers from the Institut Pasteur-Paris, Université de Paris, Vaccine Research Institute-France and Sorbonne Université-Paris has alarmingly found that the various new emerging SARS-CoV-2 variants display enhanced syncytia formation.
Syncytia are formed by the fusion of an infected cell with neighboring cells, leading to the formation of enlarged, multi-nucleate cells.
This event is induced by surface expression of viral fusion proteins that are fusogenic directly at the host cell membrane. Syncytia can only occur with viruses able to fuse directly at the cell surface without the need for endocytosis.
Typically, severe COVID-19 is characterized by lung abnormalities, including the presence of syncytial pneumocytes.
The syncytia-forming potential of spike variant proteins remains poorly characterized.
The research team first assessed Alpha (B.1.1.7) and Beta (B.1.351) spread and fusion in cell cultures, compared to the ancestral D614G strain. Alpha and Beta replicated similarly to the D614G strain in Vero, Caco-2, Calu-3 and primary airway cells.
However, Alpha and Beta formed larger and more numerous syncytia. Variant spike proteins displayed higher ACE2 affinity compared to D614G.
Alpha, Beta and D614G fusion was similarly inhibited by interferon induced transmembrane proteins (IFITMs). Individual mutations present in Alpha and Beta spikes modified fusogenicity, binding to ACE2 or recognition by monoclonal antibodies.
The study findings were published in the peer-reviewed EMBO Journal.
reference link :https://www.embopress.org/doi/abs/10.15252/embj.2021108944
Syncytia are evolutionarily conserved cellular structures formed by the fusion of multiple uninuclear cells. In mammals, the best example of physiological syncytia is muscle fibers, which contain thousands of fused muscle cells to allow their rapid, coordinated contraction.
Syncytium formation is also important in the decidualization process during embryo implantation. Syncytia can also be induced by certain viral infections, such as human immunodeficiency virus, respiratory syncytial virus, and herpes simplex virus.
It could be envisioned that virus-induced cell fusion facilitates the transfer of viral genomes to neighboring cells. However, the viral and cellular mechanisms regulating the formation of syncytia during SARS-CoV-2 infection remain largely elusive.
While examining histopathologic lung sections from patients who died of COVID-19, the Giacca group and the Sun group observed the widespread presence of atypical cells containing 2-20 nuclei.
The identities of these syncytia were later confirmed by their expression of pneumocyte-specific markers. An in vitro co-culture assay showed that the monkey kidney epithelial cell line Vero (ACE2+), upon expressing the SARS-CoV-2 spike protein, could form homologous syncytia or fuse with other cell lines as long as the ACE2 receptor was present.
Interestingly, when Vero cells were transfected with the spike protein from SARS-CoV-1, no formation of syncytia was observed. Therefore, the key element responsible for SARS-CoV-2-mediated syncytia is absent in the spike protein of SARS-CoV-1. Driven by this hypothesis, Sun et al. compared the spike proteins from SARS-CoV-2 and SARS-CoV-1 and found that there is a four-amino-acid (PRRA) insertion before the S1/S2 cleavage site in the SARS-CoV-2 spike protein.
A truncated SARS-CoV-2 spike protein with the “PRRA” deletion loses its ability to fuse cells. Consistently, the spike protein from SARS-CoV-1 effectively induced syncytia once the “PRRA” sequence was inserted before the S1/S2 cleavage site of the SARS-CoV-1 genome.
Furthermore, the Sun group demonstrated that a bi-arginine motif containing R682 and R685 dictates syncytium formation by constructing single or combined mutations in the “PRRA” insertion (Fig. 1).
Fig. 1: The SARS-CoV-2 spike protein and cellular TMEM16 ion channel collaboratively mediated the formation of syncytia in COVID-19 infections.
The new data obtained by Sun et al. provide critical information for understanding syncytia by deciphering the structural basis required for SARS-CoV-2 spike protein-mediated cell fusion, while Giacca et al. focused on the cellular mechanism and therapeutic potential of syncytia during SARS-CoV-2 infection.
In this regard, Giacca and colleagues screened 3049 FDA/EMA-approved drugs using a SARS-CoV-2 spike protein-expressing, Vero cell-based in vitro cell fusion system to search for drugs that block syncytia.
Interestingly, the drugs that suppressed cell fusion are all capable of regulating intracellular Ca2+ levels. Among the syncytia-blocking drugs, niclosamide, an oral anti-helminthic agent, was found to be effective at a very low dose (IC50 = 0.34 μM) and could protect cells from virus-induced cell death.
Niclosamide is a potent antagonist of the Ca2+-activated TMEM16/anoctamin family of chloride channels. TMEM16F was also dramatically increased in Vero cells upon spike protein expression. When TMEM16F expression was disturbed, syncytia formation in spike-expressing cells was diminished. Therefore, TMEM16F activation is the signal responsible for triggering syncytia.
These two elegant studies collectively revealed a new concept of syncytia formation and its roles in SARS-CoV-2 infection, which can be briefly summarized as follows. SARS-CoV-2 infection induces the surface expression of the spike glycoprotein.
The interaction of the spike protein with the ACE2 receptor of neighboring cells then activates TMEM16F and triggers the unsheathing of the pro-fusion S2 fragment of the spike protein in a bi-arginine-motif-dependent manner, which eventually leads to membrane fusion and syncytia formation. However, there are still many questions that remain to be elucidated.
One of these is whether the bi-arginine motif is required for the activation of TMEM16F. Another is the impact of syncytia formation on SARS-CoV-2 infection in vivo. Sun et al. found that a type of CD45-positive cell structure is present in the syncytia of COVID-19 patients.
This could be a cell-in-cell structure. When human peripheral blood mononuclear cells were co-cultured with SARS-CoV-2 spike protein-induced syncytia, they could be engulfed by and die inside the syncytia, thus providing a possible explanation for lymphopenia in SARS-CoV-2 infections .
It can be highly suspected that syncytia are deleterious for COVID-19 patients since syncytia were observed only in the severe stages of the diseases and syncytia may induce lymphopenia. Despite the observation of multinucleate pneumocytes in autopsy, it is still not known whether such syncytia play a critical role in the pathogenesis of CRDs of severe COVID-19 patients. Recently, an antidepressant drug, fluvoxamine, was shown to lower the likelihood of clinical deterioration of severe COVID-19 patients in a randomized clinical trial .
Interestingly, fluvoxamine can facilitate TMEM16F activation and phosphatidylserine exposure. It is imperative to examine whether fluvoxamine affects syncytia formation. It is also worth evaluating whether combining anti-syncytia drugs with drugs against other COVID-19 targets would yield better clinical outcomes [14, 15].
Overall, these two papers provide critical information for understanding how syncytia occur during SARS-CoV-2 infections from both the viral-structure and cellular-signaling points of view, and they open up a new avenue in COVID-19 studies. It is anticipated that these novel findings may inform new strategies to combat the current pandemic.
reference link : https://www.nature.com/articles/s41418-021-00795-y
Discussion – reference link :https://www.embopress.org/doi/abs/10.15252/embj.2021108944
The replication and cytopathic effects of SARS-CoV-2 variants is under intense scrutiny, with contrasting results in the literature (Frampton et al., 2021; Hou et al, 2020; Leung et al, 2021; Liu et al, 2021b; Touret et al., 2021). For instance, there was no major difference in the replication kinetics of Alpha and D614G strains in some reports (Thorne et al, 2021; Touret et al., 2021), whereas others suggested that Alpha may outcompete D614G in a co-infection assay (Touret et al., 2021). Other studies proposed that the N501Y mutation may provide a replication advantage, whereas others suggested that N501Y is deleterious (Frampton et al., 2021; Hou et al., 2020; Leung et al., 2021; Liu et al., 2021b). These discrepant results may be due to the use of different experimental systems, viral strains, multiplicities of infection and cell types.
Here, we show that Alpha and Beta variants replicate to the same extent as the early D614G strain in different human cell lines and primary airway cells. Moreover, Alpha and Beta induced more cell-cell fusion than D614G. Increased fusion was observed in U2OS-ACE2 cells and in naturally permissive Vero cells. In agreement with infection data, transfection of Alpha and Beta S proteins, in the absence of any other viral factors, produced significantly more syncytia than D614G, which in turn, fused more than the Wuhan S.
Comparative video microscopy analysis revealed that Alpha S fused the most rapidly, followed by Beta, D614G, and finally Wuhan. Thus, Alpha and Beta variants display enhanced S-mediated syncytia formation. One limitation of our study resides in the fact that we were unable to look at surface expression of the variant S proteins in Vero and Caco2 cells without losing the large S-protein-positive syncytia.
We thus used the non-fusogenic 293T cells to control for surface expression. We further show that S-expressing 293T cells fuse with Vero cells in donor/acceptor experiments. The experiments confirmed the enhanced fusogenicity of the variants in cells with similar levels of S protein at their surface.
We further show that Alpha and Beta remain sensitive to restriction by IFN-β1. The fusion mediated by their respective S proteins is inhibited by IFITMs. This extends previous results by us and others demonstrating that ancestral Wuhan S is effectively inhibited by this family of restriction factors (Buchrieser et al., 2020; Shi et al., 2021).
It has been recently reported in a pre-print that Alpha may lead to lower levels of IFN-β1 production by infected Calu-3 cells and may be less sensitive to IFN-β pre-treatment, when compared to first wave viral isolates (Thorne et al., 2021).
We did not detect differences in IFN-β1 sensitivity between the variants in Vero and U2OS-ACE2 cells here. Again, these discrepant results may reflect inherent differences between Calu-3, Vero and U2OS-ACE2 cells, or the use of different viral isolates.
We then characterized the contribution of the individual mutations present in Alpha and Beta S proteins to their respective fusogenicity. The highly fusogenic Alpha S consists of more mutations that robustly increase fusion (P681H and D1118H) than mutations that decrease fusion (∆69/70).
In contrast, the Beta variant comprises several restrictive mutations (∆242-244, K417N, and E484K) and only one mutation that modestly increased fusion (D215G). The strongest increase in fusion was elicited by the P681H mutation at the S1/S2 border. This mutation likely facilitates proteolytic cleavage of S and thus promotes S-mediated cell-cell fusion. Indeed, the analogous P681R mutation present in B.1.617.2 and B.1.617.3 variants increases S1/S2 cleavage and facilitates syncytia formation (Ferreira et al, 2021; Jiang et al, 2020).
Of note, another report with indirect assessment of variant S fusogenicity suggested a mild decrease or no difference in cell-cell fusion of Alpha and Beta relative to Wuhan S (Hoffmann et al, 2021). These previous experiments were performed in 293T cells at a late time-point (24 hours post-transfection), which may preclude detection of an accelerated fusion triggered by the variants.
We show that the binding of variant S to soluble ACE2 paralleled their fusogenicity. Alpha bound the most efficiently to ACE2, followed by Beta, D614G and finally Wuhan. However, the ACE2 affinity of S proteins carrying individual mutations did not exactly correlate to fusogenicity. For instance, the N501Y and D614G mutations drastically increased ACE2 affinity, but only D614G enhanced fusogenicity.
The K417N substitution, and to a lesser degree ∆242-244, had a lower affinity to ACE2 and also restricted cell-cell fusion. The E484K mutation significantly restricts fusion, but mildly increases ACE2 affinity. This suggests that on the level of individual S mutations, the relationship between ACE2 affinity and increased fusogenicity is not always linear. Variant mutations may also confer advantages in an ACE2 independent manner.
Indeed, recent work has suggested that the E484 mutation may facilitate viral entry into H522 lung cells, requiring surface heparan sulfates rather than ACE2 (Puray-Chavez et al, 2021). It would be of future interest to examine the syncytia formation potential of the variant mutations in other cell types.
We selected a panel of 4 mAbs that displayed different profiles of binding to Alpha, Beta, D614G and Wuhan S proteins. The mAb10 targeting the S2 domain recognized all variants and was used as a positive control. Wuhan and D614G were recognized by the three other antibodies, targeting either the NTD or RBD. Alpha lost recognition by the anti-NTD mAb71, whereas Beta was recognized neither by mAb71 nor by the two anti-RBD antibodies mAb48 and mAb98. Upon examining the potential of S proteins carrying individual mutations to bind to human monoclonal antibodies, we found that the ones that restrict (∆242-244, K417N) or have no effect on fusogenicity (∆Y144) are also not recognized by some mAbs.
This suggests that variant S proteins have undergone an evolutionary trade-off in some circumstances, selecting for mutations that provide antibody escape to the detriment of fusogenicity. In accordance with our findings, deep sequence binding analysis and in vitro evolution studies suggest the N501Y mutation increases affinity to ACE2 without disturbing antibody neutralization (Liu et al, 2021a; Starr et al., 2021; Zahradník et al, 2021).
The E484K and K417N RBD mutations in the Beta variant may also increase ACE2 affinity, particularly when in conjunction with N501Y (Zahradník et al., 2021) (Nelson et al, 2021). However, the resulting conformational change of the S protein RBD may also decrease sensitivity to neutralizing antibodies (Nelson et al., 2021). Future work assessing the structural and conformational changes in the S protein elicited by a combination of individual mutations or deletions may further help elucidate the increased fusogenicity and antibody escape potential of the variants.
While we had previously shown that the interaction between the S protein on the plasma membrane with the ACE2 receptor on neighboring cells is sufficient to induce syncytia formation, there is compelling evidence of the importance of the TMPRSS2 protease in S activation (Buchrieser et al., 2020; Dittmar et al, 2021; Koch et al, 2021; Ou et al, 2021). We found that the S protein of the novel variants induced more syncytia formation than the D614G and Wuhan S proteins in human Caco2 cells which express endogenous ACE2 and TMPRSS2. However, we did not detect any major
differences in the processing of the variant S proteins by TMPRSS2. It will be worth further characterizing how the fusogenicity of variant-associated mutations is influenced by other cellular proteases like furin.
The presence of infected syncytial pneumocytes was documented in the lungs of patients with severe COVID-19 (Bussani et al., 2020; Tian et al, 2020; Xu et al, 2020). Syncytia formation may contribute to SARS-CoV-2 replication and spread, immune evasion and tissue damage. A report using reconstituted bronchial epithelia found that viral infection results in the formation and release of infected syncytia that contribute to the infectious dose (Beucher et al., 2021).
The neutralizing antibody response to SARS-CoV-2 infection has divergent effects on cell-cell fusion, with some antibodies restricting S-mediated fusion while others increase syncytia formation (Asarnow et al, 2021). Cell-to-cell spread of virus may be less sensitive to neutralization by monoclonal antibodies and convalescent plasma than cell-free virus (Jackson et al, 2021). It is thus possible that infected syncytial cells facilitate viral spread. Within this context, it is necessary to better understand the fusogenic potential of the SARS-CoV-2 variants that have arisen and will continue to emerge.
We have characterized here the replication, fusogenicity, ACE2 binding and antibody recognition of Alpha and Beta variants and the role of their S-associated mutations. Despite the insights we provide into the S-mediated fusogenicity of the variants, we did not address the conformational changes that the mutations individually or in combination may elicit.
We further show that Alpha, Beta and Delta S proteins more efficiently bind to ACE2 and are more fusogenic than D614G. Which virological and immunological features of the Delta variant explain its higher estimated transmissibility rate than Alpha and other variants at the population level remains an outstanding question.
|
<urn:uuid:36101332-ed9e-49c7-b390-0c63c0960ef8>
|
CC-MAIN-2022-40
|
https://debuglies.com/2021/10/06/emerging-sars-cov-2-variants-possess-enhanced-syncytia-formation/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00624.warc.gz
|
en
| 0.912376 | 4,051 | 3.265625 | 3 |
As the internet becomes an increasingly integral part of day to day life and commerce, it’s easy to forget the ever present threat of hackers and cyberattacks. However, hackers have refused to forget how easy it is for the average person to make a costly mistake when it comes to cybersecurity online. Businesses aren’t afforded the luxury of letting their guard down, however, because they are often targeted by hackers. Here are the ways in which modern businesses are tackling the ever-increasing need for cybersecurity.
The current arc of technological innovation seems to be interested in creating the most interconnected and accessible social and commercial landscapes, and that's an admirable goal on paper. However, the looming threat of hackers means that this utopian philosophy of design is a double-edged sword. Zero Trust proposes an alternative, a way to have your cake and eat it too. Essentially, microsegmentation is the practice of protecting a network from within by never assuming that any system or user is already authorized.
This actively bucks the existing trend of giving users ways to shortcut login processes by having their information remembered, and that’s because those shortcuts are potential liabilities. It also allows users to have a segmented space within a fairly connected landscape. It’s a great way for compensating for the potential risk of hiring employees remotely, for example, because you can include remote employees in the broader network without blindly accepting the ramifications of their presence.
Passwords have been a security staple since well before the advent of the internet, and they’ve done a fair job for the most part. However, passwords are arguably the weakest link in all of cybersecurity, because hackers have pinned down the science of cracking passwords. Faster machines enable more efficient “brute forcing” of passwords, and social media allows hackers to match a list of common password ideas to the details of a person’s public persona. The writing’s on the wall, and passwords are simply never enough on their own. Multi-factor authentication provides a pretty potent alternative, however.
Multi-factor authentication secures the login process by requiring two or more identification credentials, rather than just a password. This is especially promising when you consider the emergence of MFA arrangements that no longer even use passwords. Typically, a password is still used, but it is strengthened by the additional requirement of a 4 digit code sent to the email or the smartphone of the intended user. This has been shown to be incredibly effective as a deterrent even after a given password has been compromised.
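As a rough illustration (not tied to any particular product), the one-time-code part of such a flow can be sketched in a few lines of Python; the code length, lifetime and delivery channel below are assumptions made for the example:

```python
import hmac
import secrets
import time

CODE_LIFETIME_SECONDS = 300  # assumed 5-minute validity window

def issue_code():
    """Generate a random 4-digit code and note when it was issued."""
    code = f"{secrets.randbelow(10_000):04d}"
    # In a real system the code would now be sent to the user's phone or email.
    return code, time.time()

def verify_code(expected, issued_at, submitted):
    """Accept the code only if it matches and has not expired."""
    if time.time() - issued_at > CODE_LIFETIME_SECONDS:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, submitted)

code, issued_at = issue_code()
print(verify_code(code, issued_at, code))  # True while the code is still fresh
```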
Once your system is in place, you’ll need to make sure that it is as foolproof as possible, and the best way to do that is to conduct penetration testing. This process entails hiring a cybersecurity specialist to employ known hacking techniques in order to probe for weaknesses in your system, and it’s by far the best way to root out these backdoors that could serve as an open invitation to hackers. This kind of stress testing is generally a good idea when working with tech, although the specifics vary from case to case.
The most prevalent cause of cybersecurity breaches is the very same human fallibility that hackers target in the first place. A general lack of cybersecurity on the part of the average person has been a blessing to hackers, because this allows the relatively tame threat of malware to continue to work to this day. By educating your staff on how to spot malicious downloads or, better yet, how to avoid them entirely, you can drastically reduce your odds of being attacked successfully. It’s also important to make sure employees are aware of social engineering tactics that are sometimes used in phishing campaigns, for example. Of all the many tools and techniques hackers have at their disposal, the implicit trust of general users has proven to be the most advantageous.
|
<urn:uuid:4570037b-38c9-4fdc-90fe-c5b1a21fd6d7>
|
CC-MAIN-2022-40
|
https://www.crayondata.com/4-crucial-cybersecurity-considerations/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00624.warc.gz
|
en
| 0.953312 | 789 | 2.734375 | 3 |
Predictive analytics is the branch of advanced analytics used to make predictions about unknown future events. It applies techniques from data mining, statistics, modeling, machine learning, and artificial intelligence to analyze current and historical data and forecast what is likely to happen next, bringing together management, information technology, and business process modeling.
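As a minimal, hypothetical illustration of the idea (fit a model to historical data, then ask it about unseen cases), one might use scikit-learn, assumed to be available, roughly like this:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [monthly_visits, support_tickets] per customer,
# and whether that customer churned (1) or stayed (0).
X_history = [[20, 0], [3, 4], [15, 1], [1, 6], [25, 0], [2, 5]]
y_history = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_history, y_history)          # learn from current/past data

# Predict the (unknown) future outcome for new customers.
print(model.predict([[18, 1], [2, 7]]))  # e.g. [0 1]
print(model.predict_proba([[2, 7]]))     # probability estimates
```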
|
<urn:uuid:b980063a-aab3-4c6d-b144-808ebb65b7a9>
|
CC-MAIN-2022-40
|
https://kreyonsystems.com/DataScience.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00624.warc.gz
|
en
| 0.860157 | 94 | 2.90625 | 3 |
Video converters, at their most basic, convert and reformat signals from one video interface for another type of video interface, such as ones that enable you to display VGA computer video on an NTSC or PAL TV. Basic video converters are neither scalers nor scan converters. This means that the resolution of the video signal at output is the same as the input signal, which can be a problem if you’re trying to send PC video to an HDMI-enabled display. Therefore, if you set your PC at a resolution of 1024 x 768 (XGA), your display may not show the HDMI image. In this case, you have to set your PC’s resolution to either 640 x 480 at 60 Hz (VGA interpreted as 480p), 800 x 600 at 50 Hz (SVGA interpreted as 576p), 1280 x 720 at 60 Hz (WXGA interpreted as 720p), or 1920 x 1080 (1080p).
A scaler is a device that samples an input signal and scales it up or down to a resolution and timing suitable for the display. A scaler may optionally also convert the signal to a different format. A scaler that downscales video is sometimes called a scan converter.
Scalers are particularly useful when you want to connect different analog and digital equipment for output on a common display, such as in a presentation environment where you don’t want to fiddle with controls to get the picture right. All you do is set the output resolution to match the native resolution of the connect display.
Scalers that support switching take this concept further, enabling you to electronically switch video inputs and letting the box automatically make the necessary adjustments.
Usually, the scaling involves “upconverting” the video signal so the resolution is accurately framed for a newer display type. The actual scaling is a process where the number of display pixels is mapped and adjusted to accurately match the display’s resolution.
Also, deinterlacing technology with advanced motion compensation intelligently scales the source signal to the desired resolution with virtually no artifacts or distortion. In addition, scalers often perform frame rate adjustments so the proportion of the image isn’t resized incorrectly.
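As a rough sketch of what that pixel mapping means in practice, the hypothetical helper below computes an output size that fits a source image inside a display's native resolution without distorting the aspect ratio (the remaining area would be letter- or pillarboxed):

```python
def fit_to_display(src_w, src_h, dst_w, dst_h):
    """Scale (src_w, src_h) up or down to fit inside (dst_w, dst_h) without distortion."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# XGA (1024 x 768) content on a 1080p display: scaled to 1440 x 1080,
# leaving pillarbox bars on the left and right.
print(fit_to_display(1024, 768, 1920, 1080))
```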
If you’re not sure if you need a converter or a scaler, give our free Tech Support experts a call at 724-746-5500.
|
<urn:uuid:bd95d5e4-afa8-4eae-915d-56bc13b5e543>
|
CC-MAIN-2022-40
|
https://www.blackbox.com/en-be/insights/blogs/detail/technology/2014/03/07/the-difference-between-converters-and-scalers
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00624.warc.gz
|
en
| 0.87954 | 491 | 2.90625 | 3 |
Adaptive Authentication is one of the fastest growing approaches in the field of multifactor authentication.
Adaptive systems take advantage of contextual and behavioral aspects to assess the risk of an access attempt and adapt the type of authentication accordingly.
For instance, let’s say an access request is emanating from an unusual location or at an unusual time, strong indicators of a digital identity compromise. An adaptive system can be programmed to view this as suspicious activity and demand additional authentication factors. In addition to location, adaptive systems can use a variety of behavior measuring techniques to detect potentially dangerous activity. These include everything from keystroke sequences to the pattern of services and tabs opened on a site. Other forms of adaptive systems, instead of employing preset protocols, assess user activity with computer algorithms, flagging actions that don’t jibe with the user’s typical behavior or are otherwise suspicious. When any of these patterns are identified, the system can then interrupt a session until more authentication is provided.
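A toy sketch of that risk-scoring idea is shown below; every signal name, weight and threshold is invented purely for illustration:

```python
def risk_score(signals):
    """Sum weighted risk signals for one access attempt (illustrative weights only)."""
    score = 0
    if signals.get("new_device"):
        score += 20
    if signals.get("unusual_location"):
        score += 25
    if signals.get("unusual_time"):
        score += 15
    if signals.get("atypical_behavior"):  # e.g. keystroke or navigation anomalies
        score += 30
    return score

def decide(signals):
    """Adapt the authentication requirement to the assessed risk."""
    score = risk_score(signals)
    if score >= 60:
        return "deny or send for manual review"
    if score >= 30:
        return "require an additional authentication factor"
    return "allow with the primary factor only"

print(decide({"unusual_location": True, "unusual_time": True}))
# -> "require an additional authentication factor"
```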
At a first glance, this method seems like an ideal balance of the two most important factors of authentication – UX and risk. However, it isn’t all it’s cracked up to be. There are more elegant solutions to this UX-risk dichotomy available in the market today that offer high-assurance password-free authentication.
Let’s examine and compare the methods.
While adaptive systems give a network an added layer of protection, they do not replace other authentication methods that are necessary to log in. Users still need to deal with a password. This means remembering and frequently changing long and complicated sequences of numbers and digits.
Alternatively, passwordless methods completely eliminate passwords and, with that, the need to maintain, replace, and manage them. Authentication is always seamless, requiring only the response to a push notification.
A basic idea lies at the heart of the security concept of Adaptive Authentication: user access to company resources should be easy.
How easy? As easy as the risk level for any given access permits. To this end, an adaptive system needs to assess the risk associated with any request and keep authorization requirements as low as possible.
Opposing this aspiration for smooth access is the need to maintain high-security assurance to protect company assets. In an adaptive system, these two considerations will always be in conflict. Lowering your walls may make it easier for your own people to get in, but also opens the doors for intruders.
But there are other alternatives. Secret Double Octopus authentication, for example, is based on Secret Sharing technology and alleviates the need to work within the UX-security schism. The most powerful and reliable authentication is achieved from the initial login; no additional authentication steps are required through the rest of the session.
A major consideration in deploying any system of authentication is the total cost of owning and managing the tools, or TCO. The more factors required in a system, the more TCO goes up. Some factors like hardware tokens need to be procured and distributed to each individual user. Others like passwords, while not requiring an additional device, still divert considerable resources to maintain. Company IT still needs to handle password management and replacement. According to the biggest names in industry research such as Gartner, HDI, and Forrester, the cost of the average call to technical support for password reset can range from $17 to $25.
Adaptive Authentication, far from being an alternative, only adds more weight to the workload of IT departments, who now have to manage the adaptive layer of the system in addition to the other factors.
Passwordless multifactor systems, however, save cost while achieving the highest level of authentication security. The fact that an estimated 20 to 50 percent of helpdesk calls are password related means eliminating password management translates into substantial savings even for small organizations. All other costs associated with passwords, including storing and encrypting them, are also eliminated.
|
<urn:uuid:ad8dba0d-a1c8-4ade-9744-b4cf0dfb40e7>
|
CC-MAIN-2022-40
|
https://doubleoctopus.com/blog/passwordless-mfa/adaptive-authentication-pros-and-cons/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00624.warc.gz
|
en
| 0.932502 | 807 | 2.640625 | 3 |
In general, a healthy dose of skepticism can be good. It helps us separate real news from fake, choose our friends wisely, and gives us pause before we buy the XXL Cheese-buster Burrito from the frozen foods section. Sure, being overly skeptical or untrusting can be a flaw, but in the right dose, skepticism is a positive trait. When it comes to data security, skepticism is even BETTER! Cyber Security Awareness Training puts an emphasis on skepticism: caution in clicking links, caution in downloading files, caution in sending sensitive information. It encourages a raised eyebrow in response to cyber crime that preys on the trusting nature of people.
Here is why remaining skeptical when working online is the right move and how Cyber Security Awareness Training can help:
The Human Element
People are great! The problem is, criminals know it. The average employee would think nothing of sending their supervisor whatever he or she asked for, however he or she asked for it. Even if the request is for sensitive data to be sent over e-mail, most employees will comply without a second thought. That is exactly what cyber criminals are counting on. The human element poses a great risk to your data security. Because people are trusting, they can be taken advantage of. Cyber crime has evolved to exploit that idea. Phishing is an example of a scam that has caused havoc in many businesses.
Phishing and Other Scams
Phishing is a scam that sees the criminal pose as a trusted person, website, or other authority in order to steal private data. The person being scammed thinks that they are sending the information to a secure source. In reality, they're unknowingly handing over sensitive data directly into criminal hands. It's all too easy to set up a fake site or e-mail account that can dupe someone into sending information they shouldn't. Similar scams include general e-mail scams, phone scams (often posing as the IRS or another authority) and more.
Education and Action
With a sharp rise in Phishing and other scams that prey on the human element, education is the best defense. That's why Cyber Security Awareness Training exists: to teach employees to go against their trusting nature and exercise skepticism. Whenever a request for private data comes across a desk, it should be met with scrutiny. The identity of the person requesting the information should be confirmed several ways and the request itself should be examined for authenticity. This mindset should extend to links in e-mail, e-mail attachments, and anything else that could be a virus-in-disguise.
Cyber Security Awareness Training should be a part of every business. Empowering employees with data security best practices can be the difference between a secure network and a costly breach. Teach employees skepticism and they can help fight against cyber crime.
Download our Data Security Checklist and make sure your business is protected:
|
<urn:uuid:0211d3a0-c2cf-4822-a26a-980369e0ae8d>
|
CC-MAIN-2022-40
|
https://blog.integrityts.com/cyber-security-awareness-training-skepticism-can-be-a-good-thing
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00624.warc.gz
|
en
| 0.943784 | 585 | 2.875 | 3 |
Two things are inherently true when it comes to cyber criminals. The first is that they follow the money. This is why ransomware grew to a billion dollar business overnight. The second is that like water, their efforts flow towards the path of least resistance. Cyber criminals are like many people, they go for the easy money. Phishing has been the dominant delivery method for malware and cyber attacks for a number of years now. However, phishing is not as easy as it used to be. Spam filters and email gateways now react quickly in shutting down a malicious email domain. Email security technology is now using analytics to more accurately identify behavior abnormalities and possible email threats. Even users are growing more guarded when opening emails and are becoming more astute at identifying suspicious links and attachments. We have a long way to go of course, but it is getting better.
Which is why smishing attacks are growing more prominent today as cybercriminals turn to a medium that users trust more than their email. Smishing is a social engineering attack that utilizes SMS text messaging instead of email. There are over 6 billion mobile phones in circulation today, and a third of them are smartphones. Because cell phones today are part of our persona and an extension of our day-to-day lives, users look to them as trusted devices that they can count on. Unfortunately, from a cyber security point of view, a personal cellphone is a vulnerable computer with a direct connection to the Internet. With the proliferation of Bring Your Own Device (BYOD) within companies today, the enterprise is now exposed to personal cell phones on a regular basis. Just like laptops and more traditional computer devices, personal cell phones can become launching pads to spread malware, launch DoS attacks and seize control of privileged accounts.
Texting is an Ideal Medium for Social Engineering Attacks.
Here is why:
- There are approximately 913,242,000 texts sent every hour of every day around the globe. That is 15.2 million per minute. This sheer volume and velocity of text messages makes the channel highly favorable for perpetrators.
- More than 90% of SMS text messages are opened within 3 seconds. This reinforces the sense of urgency that a social engineering attack relies on: users need to feel that an action must be performed right away, without proper vetting and consideration.
- There is currently no filtering technology for text messages comparable to email spam filters, and incoming numbers are easily taken to be trusted sources.
Examples of SMS phishing attacks exemplifying this sense of urgency look something like this:
- “Congratulations, your entry in our store drawing has won you a $100 gift card. Please click the link to accept your prize.”
- “You have received an alert notice concerning a large withdrawal from your account. Please respond with your account number to confirm your identity.”
- “As a valued customer, we are now offering our new banking app. Please click the link to install.”
- “You have been selected for jury duty. If you cannot make it, please call the following number and have your name and social security number ready.”
How to Protect Yourself Against SMSishing
Thanks to the highly publicized mammoth data breaches over the past couple of years involving millions of personal records, cyber criminals have access to a large pool of legitimate phone numbers. This makes it possible to extrapolate these records in order to better target the users who have these numbers. Most SMS phishing attacks however are sent like traditional phishing attacks in which a large net is cast to catch easy prey. Users should be wary of text messages that come from a “5000” number, as this is indicative of an SMS message sent over the web rather than from a cell phone. Below are some other tips to protect yourself from smishing attacks.
- Never install an app from a web link that appears in a text message. All apps should only be downloaded from verified app stores. If possible, set your phone to block apps from unknown sources.
- Never respond to a text message that conveys a high sense of urgency or panic and demands an immediate action
- Verify all links sent by family and friends to confirm they knowingly sent you the specific link
- Never respond to a text from a financial, healthcare or governmental institution. Call the verified number of these organizations to confirm the contents of a text message that asks you to take some type of immediate action.
- Do not reply to a spam text message to ask the sender to stop sending you messages. This will only confirm your number and encourage them. You can block numbers that consistently send you messages through your cellular provider’s portal or by calling the provider.
- Consider a NAP or NAC policy for all BYOD mobile devices. Many companies create policies enforcing minimum-security standards for laptops and tablets. Requiring users to have malware protection on their phones is a legitimate request considering today’s environment.
While the majority of SMS phishing attacks are implemented for a quick payoff from an unsuspecting user, smishing cyber attacks are rapidly growing in sophistication and can be used as a method to target an entire enterprise in the future. It is important to ensure that SMS messages do not become the path of least resistance.
HALOCK is a cyber security consulting company headquartered in Schaumburg, IL, in the Chicago area and advises clients on reasonable information security throughout the US.
HALOCK Breach Bulletins
Recent data breaches to understand common threats and attacks that may impact you – featuring description, indicators of compromise (IoC), containment, and prevention.
|
<urn:uuid:f57372ec-0d55-44a6-bf38-77f5a4bc03c3>
|
CC-MAIN-2022-40
|
https://www.halock.com/smishing-attacks-increasing/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00624.warc.gz
|
en
| 0.940935 | 1,122 | 2.90625 | 3 |
Nobody wants to spread or catch diseases.
According to St Louis Children’s Hospital, the average K-12 kid has four to six colds per year, and diarrhea with or without vomiting 2-3 times per year. Rounding that to seven incidents per year, the average school with 529 students (NCES) will have disease-spreading events 3,703 times each year. That’s 20 per school day. When parents say “yeah, they probably picked it up from school,” they’re probably right.
Who is responsible for minimizing the risk of spreading disease?
A school kid who is still recovering from an illness might return to school while still contagious. They will eventually touch some surfaces. In this situation, whose responsibility is it to clean up after them?
In Japan, the clean school culture begins and ends with the individual. A sick kid will not come to school. As they recover, they will mask up. Once reunited with their friends, they will find themselves, at the end of the day, divided into groups. One will mop, wipe, and disinfect the classroom, another will take the hallways and stairs, and the final group will hit the toilets.
In most of the rest of the world it is up to custodians or janitorial staff to cover immense distances and wipe down many surfaces every night. It is a herculean task.
It is impossible to cover every square inch as much as they would like, so they must be pragmatic. Custodians must make judgement calls on which surfaces get disinfected, and how often. And since microorganisms are invisible to the naked eye, it is impossible to know how well one is sanitizing, or when it is time to change the mop or cloth to avoid merely smearing bacteria and viruses from one spot to another.
This important task, which keeps staff and students healthy, is low-paying and unsupervised. Unless we see the same breadcrumb on the floor the morning after, we have no clue what was achieved overnight.
School kids and staff have a basic right to a safe and clean environment, especially after the corona-years. Manual methods are too inconsistent and don’t scale. It needs automation.
EPIC iO has developed a new classroom disinfectant solution that is safe and automatically disinfects all rooms and surfaces, killing bacteria, viruses, and mold to protect occupants.
The wall- or ceiling-mounted briefcase-sized biosecurity device destroys pathogens by delivering carefully timed micro-doses of FDA-approved ozone. Ozone is proven to inactivate serious pathogens like MRSA and Strep, as well as viruses, making classrooms far safer than they were before and reducing contagious illnesses.
Our solution runs in an empty room overnight. Compared with traditional disinfectants that use chemical sprays, foggers, or UV lighting, EPIC iO’s device is safer, easier to use, and disinfects evenly throughout a room, including hidden surfaces.
Schools are using it after the custodians complete their cycle, to provide the decontamination air-cover they need to touch every corner of each room. Without it, it’s just disease as usual.
|
<urn:uuid:1723e270-18d8-4e38-80f8-2f22847f71bf>
|
CC-MAIN-2022-40
|
https://epicio.com/the-clean-school-culture-and-keeping-schools-free-of-diseases/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00024.warc.gz
|
en
| 0.959186 | 670 | 2.875 | 3 |
How Digital Wallets Are Changing the Security Industry
Digital wallets offer individuals a new way to digitally authenticate their identities conveniently and seamlessly, including storing, organizing and verifying the identities we need in daily, digital interactions. In fact, emerging forms of digital identity are poised to reshape nearly every aspect of our lives. Digital wallets can help people not only to transact safely, but to travel freely and work productively.
Here we’ll explore digital wallets, the evolution of its capabilities, what’s driving the shift and what it means for the security industry.
What Is a Digital Wallet?
A digital wallet is technology that can be used on an internet-enabled device, including smartphones and computers, and allows the user to securely store information like banking data or passwords to make safe transactions online, in apps or via peer-to-peer payments. Some digital wallets may even charge fees, such as Venmo’s instant transfer feature that assesses 1.5% of the total transaction to complete the request.
Are Digital Wallets and Mobile Wallets the Same?
There is a difference between digital wallets and mobile wallets. A mobile wallet is a specific type of digital wallet technology accessible through a mobile app. Mobile wallets use wireless capabilities, including near-field communication protocol (NFC) and Bluetooth (BLE) to connect with another device, such as a reader. A mobile wallet can be used for e-commerce payments, to transfer money between friends or to make contactless mobile payments. Mobile wallets can also be loaded with just about anything you would put in a physical wallet, like concert tickets, loyalty cards, employee badges, flight boarding passes or even keys to lock or unlock a car.
An easy way to think about the difference? Apple Pay vs. Apple Wallet. Or for the Android users out there, Google Pay vs. Google Wallet. Apple Pay and Google Pay are safe ways to make secure purchases in stores, in apps and on the web, making them digital wallets. Apple Wallet and Google Wallet are the places where you store your credit or debit cards so you can use them with Apple Pay or Google Pay, making them mobile wallets.
The Changing Identity Landscape — Digital Identity Verification
What’s driving the shift to digital identity verification? There are a few forces at work:
- Consumer demand for convenience, automation and contactless transactions is pushing innovation in the digital wallet space. Not only did the infrastructure to support digital transactions grow over the last several years, but the adoption of mobile wallet apps also grew with 32 percent of users having three or more mobile wallets downloaded on their smartphones.
- Mobile convergence and the multitude of converged devices are facilitating interoperability and seamless, optimized connectivity for users. As we continue to shift to a more digital world and individuals expect a seamless mobile experience, this consolidation of networks is set to unleash the next generation of applications and use cases.
- Legislation, regulations and standards across the globe are paving the way by providing a consistent legal framework for accepting digital identities. One such piece was passed in Europe — the eIDAS Regulation — as a key enabler of cross-border digital transactions.
- Digital wallets are adding new functionality. In 2021, Apple announced it was working with several states to roll out mobile driver’s licenses or state IDs for iPhone and Apple Watch to residents.
- Tech giants and startups are racing to create the go-to app for all digital identity verification needs in a single platform, from financial and insurance documents to health information and government IDs.
The Digital Wallet of the Future and the Implications for Security
Digital wallets are increasingly being used for new approaches to digital identity verification, but how are they expected to evolve? A range of emerging use cases helps to paint the picture of what digital wallets will be storing in the near future. It’s expected that digital wallets will be used in the management and access of medical records, travel documents, insurance and investment information — in addition to the broad adoption of government IDs and employee badges.
As capabilities for digital wallets and digital identities continue to expand, they are poised to have a substantive impact not only on our daily lives, but also on the security industry. In fact, security leaders should start thinking ahead to the next iterations of digital wallets and digital identities, and what considerations are key to the discussion.
What infrastructure is needed to fully embrace a connected, mobile experience?
The ability to use digital wallets rests with updated infrastructure, including NFC and BLE-enabled hardware. The issuance of digital identities requires a modern credentialing program with security management in the cloud. Defining what identity management means in the context of each business will help to create the right identity infrastructure, allowing for scalability in a more secure way and the adoption of the right mix of technologies, applications and processes to manage digital identities to meet the defined requirements.
What compliance is necessary and what communication is needed regarding how data is used and stored, or if privacy concerns are raised?
Digital identities and their different sources create unique multi-faceted personas and experiences along with a wealth of associated data. While there is value in utilizing collected data to offer better products and services to both customers and employees, those customers and employees likely see it differently. Data collection is a practice that can easily lead to mistrust and suspicion without transparency. According to a recent KPMG report, the U.S. population responds more favorably when they know exactly how their data is used. Digital identity systems should take into consideration regionally and globally relevant laws, regulations and industry standards. Privacy and security laws like General Data Protection Regulation (GDPR) highlight the shift in control and ownership of Personally Identifiable Information (PII) from the service provider to the user. This means time and attention should be dedicated to develop internal policies — especially those relating to data privacy, data protection and fraud prevention.
What opportunities exist not just now, but in the future, to organize around service models and service-led growth, such as SaaS-delivered identity?
The use of digital wallets is now bound to higher level initiatives than just purchases. In fact, a recent article by Deloitte points to identity as the center of the digital experience. Organizations who utilize identity as a frame for digital transformation can overcome barriers to adoption, opening the opportunity to take full advantage of digital identity verification and the use of digital wallets. This necessitates security management moving to the cloud and the infrastructure updates, including hardware, to support it. Physical security, operations and IT teams will need to collaborate and speak the same language in order to determine program objectives and develop a business case, including payback periods and total ROI. It’s a critical exercise to obtain the budgets needed for such projects, especially in large organizations. By engaging on the success criteria and onboarding the right consultants, partners and integrators for the implementation, security teams can deliver on the end goal, including measuring and improving on the success of the initial deployment.
The Bottom Line
Today, people are much more likely to download a mobile wallet from the app store than to purchase one from a brick-and-mortar one. The evolution of technology, together with consumer demand for a consolidated, convenient experience and a growing body of legislation and standards are enabling digital wallets to reshape digital identity verification from access control to financial transactions and beyond. With a wide reach and daily touchpoints, the digital wallet category is poised to have a substantive impact on the way we interact with our homes, with our employers and workplaces, with the tourism industry and even with our governments — but only if new solutions can deliver on reliability, scale and convenience in balance with security and privacy.
Visit the HID security & identity trends blog for more insights on what’s happening in the security industry today.
Bevan has more than 20 years of experience in Smart Building technology, holding strategic roles in the build out of disruptive software product businesses and practices. His extensive background includes positions at Honeywell, Jibestream and McCann, working in the capacity of product leadership, enterprise sales and go to market. Bevan joined HID in 2020 as Product Marketing Manager responsible for the ongoing evolution of mobile access control solutions globally.
|
<urn:uuid:d58a977c-4ef9-4289-93f6-742b90afcee5>
|
CC-MAIN-2022-40
|
https://blog.hidglobal.com/ko/node/39343
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00024.warc.gz
|
en
| 0.921706 | 1,676 | 2.59375 | 3 |
Exploring the virtual plant: How virtual commissioning aids system integration
Every plant has its unique lifecycle, which sees both equipment and employees evolve over time. Industry 4.0 has expedited this evolution, with new technologies leading to increased productivity, improved efficiency and decreased costs.
Control system simulation goes as far back as the 1970s where lamps and switches were used to simulate plant signals. The downside to this process was that the simulation had to be wired to the system’s inputs and outputs just for testing and there was no logic around the lamps and switches.
In the mid-to-late 1980s, software-based simulations were introduced, either as a programmable logic controller (PLC) or a separate PC running the simulation. Although this process was far better than the simple lamps and switches of the 1970s, as it was possible to use human machine interface (HMI) or supervisory control and data acquisition (SCADA) screens to visualize the testing, it still wasn’t perfect.
These simulations were reliant on the integrity of the model used, which would need to be configured separately from the PLC code — meaning extra work, costing both time and money.
In recent years, Industrial Internet of Things (IIoT) technologies have taken software simulations one step further, addressing many of these issues with the introduction of virtual commissioning.
Visualizing the system
Virtual commissioning is the creation of a digital replica of a physical manufacturing environment. The process involves using simulation technology to test new equipment and make changes to existing systems in a 3D virtual environment before they are made in the physical plant.
Once the simulation has been programmed, every aspect of the system can be tested in a virtual world to ensure that, when it is physically installed, all other systems in the plant integrate with it correctly. If further changes are required, these can be made and tested in the simulation.
Unlike the simulations of the 1970s and 1980s, virtual commissioning makes use of a model which was developed as part of the design of the system that is not an attempt to simulate the plant, but rather make use of a digital replica. With the digital twin concept, we can now commission the dynamics of the whole plant, rather than just a single piece of equipment. Process snags can be seen, diagnosed, fixed and immediately integrated with the corresponding elements of the control system.
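In spirit, though real tools work against full 3D plant models and actual PLC code, the approach can be reduced to a toy sketch: run the control logic against a software model of the process and catch problems before anything is installed. Everything below is hypothetical:

```python
def controller(level: float) -> bool:
    """PLC-style logic under test: open the inlet valve while the tank is below 80%."""
    return level < 0.8

def simulate(steps: int = 200) -> float:
    """A tiny 'digital replica' of a tank: inflow when the valve is open, constant outflow."""
    level = 0.0
    for _ in range(steps):
        valve_open = controller(level)
        level += (0.02 if valve_open else 0.0) - 0.005  # inflow minus outflow
        level = max(level, 0.0)
        # A failed check here is a snag caught in simulation, not on the plant floor.
        assert level <= 1.0, "virtual commissioning caught an overflow before installation"
    return level

print(f"steady-state level: {simulate():.2f}")
```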
Testing equipment and systems in a simulated environment brings a number of benefits. Many automated systems are controlled by programmable logic controllers (PLC) that allow plant managers to alter key processes. When these changes happen, engineers will be required to stop production and shut systems down.
By testing digitally, before making any physical changes, plant managers can identify and correct any errors that may arise when integrating the PLC with the larger plant. This minimizes potential risks such as production downtime, reduces integration time and saves money that may have been spent on correcting errors.
An automation-heavy plant is the ideal candidate for virtual commissioning. Unlike a literal evolution, in which a biological population might respond to unforeseen stimuli in a surprising way, we can accurately predict the results every time. The only unpredictable element is the human one, which is why it pays to call in an expert to help you through both the design and the implementation phases.
Implementing virtual commissioning into a plant’s design process allows for almost immediate benefits to be triggered. Its ability to discover unforeseen challenges and mitigate them before they impact on the plant, makes virtual commissioning a key tool for manufacturers.
This article was written by Nick Boughton, digital lead at leading systems integrator Boulting Technology.
|
<urn:uuid:292177e5-f8e0-4dc8-b83c-828121983bea>
|
CC-MAIN-2022-40
|
https://www.iiot-world.com/industrial-iot/connected-industry/exploring-the-virtual-plant-how-virtual-commissioning-aids-system-integration/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00024.warc.gz
|
en
| 0.951639 | 749 | 2.859375 | 3 |
DNS, also known as Domain Name System, is the internet-wide service that translates fully qualified hostnames (FQDNs) such as www.netsparker.com into an IP address. It was developed because it's much easier to remember a domain name than an IP address.
In 2017, an internet draft to send DNS requests over HTTPS was submitted to the IETF by P. Hoffman (from ICANN) and P. McManus (from Mozilla). Was this a positive move toward a more secure internet, or will it only create more problems?
In this article we dig deep into the subject, explaining our angle on the pros and cons of running DNS over HTTPS.
What is the DNS and How Does It Work?
First, let's refresh our memories on how DNS works. When you visit https://www.invicti.com the following happens:
- Your browser sends a request to a recursive domain name server (DNS) that is configured on your computer. Let’s call this DNS server 192.0.2.53.
- Since 192.0.2.53 does not know the IP address of www.netsparker.com, it queries the internet root servers, which refer it to the name servers responsible for the .com top level domain (TLD).
- Next, 192.0.2.53 asks the .com TLD name server for the name servers of the netsparker.com domain.
- Then, 192.0.2.53 asks the netsparker.com name servers for the IP address of the FQDN www.netsparker.com. Once the server gets the response, it forwards it to the web browser.
- The web browser connects to this IP address and requests the website www.netsparker.com.
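As an aside, the same lookup can be driven from code. The sketch below assumes the third-party dnspython library (version 2.x) and simply hands the question to a recursive resolver, which performs steps 2-4 on our behalf; the resolver address is the placeholder from the example above and would need to be replaced with a reachable server:

```python
import dns.resolver  # third-party package: dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.53"]  # placeholder recursive resolver from the example

# One request from the client's point of view; the recursive resolver walks the
# root, .com and netsparker.com name servers for us and returns the final answer.
answer = resolver.resolve("www.netsparker.com", "A")
for record in answer:
    print(record.address)
```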
How Far Does DNS Lag Behind?
Back in 1983, when DNS had just been invented, DNS requests and responses were sent over the internet in clear text, and they still are. Now, with so much at stake on the internet, there is an additional need to encrypt DNS traffic.
However – like many other fundamental building blocks of the modern web – DNS was not ready for the hype!
Unlike other protocols such as HTTP and FTP, DNS never got a security upgrade that prompted widespread adoption. Instead, one of the most important features of our modern internet has used the same level of encryption for the last 35 years – none at all!
Introducing DNS Over HTTPS
In 2017, following years of unencrypted DNS requests, the first IETF Internet Draft (I-D) for DNS Over HTTPS (DoH) was published. It was a precursor to an official RFC document, and you can read the 13th revision of the initial draft (DNS Queries over HTTPS (DoH)), though its RFC is not yet finalised. It isn't the only protocol that aims to add encryption to the DNS protocol (there is also DNS over TLS and DNSCrypt), but it's the one that companies such as Mozilla and Google chose to integrate into their products.
Let's take a look at how it works and why it's probably not the solution to all DNS privacy problems.
DNS over HTTPS – Technical Basics
First, let's look at the technical aspects described in the latest Internet Draft and implemented in real-world applications.
The client sends a DNS query via an encrypted HTTP request – not a shocking revelation, given the name of the protocol. There are two possible ways to send the data – via a GET or POST request. Each has its own characteristics and advantages.
GET and POST Requests
If you send the data via a POST request:
- The Content-Type header field indicates the media type:
- The I-D describes one media type (application/dns-message), but the major DoH providers we'll talk about use another one (application/dns-json) that is better suited for web applications.
- The DNS query is sent in the message body:
- This has the additional advantage that you don't need to encode the message
- The message size is smaller than sending it with a GET request:
- As described above, this has to do with encoding
If you send the data via a GET request:
- It's bigger than a POST request:
- The encoding that you need to use is base64url, which means that the encoded message is about one third larger than the original one
- It's HTTP Cache-friendly:
- Caching GET requests is well supported even in older cache servers
- The DNS query is sent in a GET parameter
- This is not surprising, since the I-D mentions 'dns' as the GET parameter name
However, even though GET requests are more cache-friendly than POST requests, there is still one problem. Usually DNS packets have an ID field that is used to correlate request and response packets. It is a unique, random identifier that results in different request URLs for what is essentially the same request. Therefore, clients should set this ID field to '0'.
This demonstrates that porting DNS from cleartext UDP/TCP to encrypted HTTPS requires some adjustments, at least if you want to use HTTP's full potential (which is advisable since HTTPS comes with quite a bit of overhead compared to the simple, unencrypted wire protocol of DNS).
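To make this concrete, here is a hedged sketch of a DoH GET lookup in Python. It assumes the third-party dnspython and requests packages, and Cloudflare's public resolver URL is used purely as an example endpoint (any compliant DoH server would do). It zeroes the ID field, base64url-encodes the wire-format query without padding, and passes it in the 'dns' parameter with the application/dns-message media type:

```python
import base64

import dns.message  # third-party package: dnspython
import requests

RESOLVER_URL = "https://cloudflare-dns.com/dns-query"  # example DoH endpoint

def doh_get(hostname: str) -> dns.message.Message:
    query = dns.message.make_query(hostname, "A")
    query.id = 0  # zero the ID so identical queries map to identical, cacheable URLs

    # base64url-encode the wire-format message and strip the '=' padding,
    # as required for the 'dns' GET parameter.
    encoded = base64.urlsafe_b64encode(query.to_wire()).rstrip(b"=").decode("ascii")

    response = requests.get(
        RESOLVER_URL,
        params={"dns": encoded},
        headers={"Accept": "application/dns-message"},
        timeout=5,
    )
    response.raise_for_status()
    return dns.message.from_wire(response.content)

if __name__ == "__main__":
    for rrset in doh_get("www.example.com").answer:
        print(rrset)
```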
Is Today's Web Ready for DNS Over HTTP?
Now that you know some of the important technical details regarding DNS over HTTP, what about the infrastructure of DoH?
Let's keep in mind that this technology is still in an experimental state and there is a lot of old DNS infrastructure that doesn't support encryption. Could you even deploy DoH when most of the name servers out there don't encrypt their DNS responses? Does DNS over HTTPS even make sense? And wouldn't you need to change how browsers or operating systems work in order to use it?
It turns out that it's not really necessary to update everything, even though the latest, nightly Firefox build added support for DNS over HTTPS. This is a recent, bleeding-edge version that may contain features that aren't yet available in the latest stable version. And Google's Android Pie is going to have a built-in DoH feature.
There is a way to use DoH without an operating system or browser update, though. Obviously, you should keep browsers and operating systems updated, but let me tell you why it will take a long time for most people with an Android phone to use DoH (even though Google added it in Android Pie).
Why You Won't See Native DoH on Your Android Phone for a Long Time
From painful personal experience, I can explain. Some time ago, I bought a new Samsung smartphone. Then Google released a new Android version. The update included a cool UI overhaul and some new features.
Then I waited. Yet month after month, my screen informed me, "Your phone is up to date". This annoying delay was down to the fact that Samsung heavily customizes Android on their own phones. Back then, their version of Android was called TouchWiz and it was full of bloatware. Following complaints, they slowly removed most of the annoying features and software, to the point where people couldn't recognize it as TouchWiz anymore and they had to rename it. (I'm not making this up.) Even though now they have a few fewer Samsung-specific features that they need to adjust to new Android versions, it still takes much too long to get a new update.
A friend of mine had a much older Android device from the same manufacturer and always had the latest Android version installed. That's because he flashed a new CyanogenMod operating system to the phone. It was third-party software that didn't have the TouchWiz UI, but it was the latest Android version that was available. Other problems aside, it's ironic that you could get a fully-patched, up-to-date phone by flashing third party software a few days or weeks after Android published it, yet you needed to wait months for your phone manufacturer to do the same. Obviously, doing that would void your warranty and the average user won't even be aware such a thing is possible. So, even though Android 9 will have DoH support, it may take months or even years for you to be able to use it.
Is There An Alternative Way to Use DoH Even Though Your OS or Browser Doesn't Support It?
There are several options:
- You can install a DoH proxy on the name server in your local network, which means that your device still sends traditional, unencrypted DNS packets to the local name server. However, that server will query DNS over HTTPS servers on the internet in order to resolve your query, which enables you to use DoH without having to modify your system. Still, it's unencrypted within your local network.
- It's also possible to install a DoH proxy on your local system, even though I'm not sure if it's possible for Android phones. Using this technique, instead of relying on a second machine in the local network, the proxy runs on the same machine as your browser. Therefore, even if you have an attacker in your local network, he can't read your DNS requests since they are already encrypted once they leave your machine.
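For illustration, such a local proxy is conceptually simple. The sketch below (Python; the port and upstream resolver are arbitrary choices for the example) listens for classic UDP DNS packets on the loopback interface and forwards each one to a DoH server as an HTTPS POST, so queries leave the machine only in encrypted form:

```python
import socket
import requests

DOH_URL = "https://cloudflare-dns.com/dns-query"   # example upstream DoH resolver
LISTEN = ("127.0.0.1", 5353)                       # arbitrary local port for the sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN)
print(f"Forwarding DNS from udp://{LISTEN[0]}:{LISTEN[1]} to {DOH_URL}")

while True:
    packet, client = sock.recvfrom(4096)           # raw DNS query from a local stub resolver
    upstream = requests.post(
        DOH_URL,
        data=packet,                               # POST body is the raw DNS message
        headers={
            "content-type": "application/dns-message",
            "accept": "application/dns-message",
        },
        timeout=5,
    )
    sock.sendto(upstream.content, client)          # relay the raw DNS answer back over UDP
```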
Does DNS Over HTTP Enhance Security and Privacy?
It's up for debate. There are some problems with DoH that are worth a mention. The first of these, due to the way DNS works, is that it's almost impossible to have an end-to-end encrypted connection from your browser to example.com's name server, without making it known to intermediate servers.
Let's recap by looking at how the recursive name server from our earlier example resolves the IP for www.example.com:
- First, it asks one of the internet root servers:
- Question: "I want to visit www.example.com, do you know where it is?"
- Answer: "No, but here are the nameservers for .com. Try your luck there!"
- Then it asks the .com name servers:
- Question: "So, can you tell me www.example.com's IP address?"
- Answer: "You should ask the example.com name servers."
- Finally, it asks the example.com name servers:
- Question: "What is the IP of www.example.com?"
- Answer: "The IP is 93.184.216.34"
In each of these queries, the full hostname is sent to the DNS server. All of these servers now know that you want to visit www.example.com, even though this information is only of real interest to example.com's name server – obviously less than ideal in terms of privacy.
DNS Query Name Minimization
There is a solution to the problem described above and it's not even a DoH-specific one. It's called DNS Query Name Minimization and this is how it works.
If you want to visit example.com, the conversation between your recursive name server and the other name servers would look like this:
- First, it would ask the internet root servers:
- Question: "Do you know the nameservers for .com?"
- Answer: "Yes, here is their IP address"
- Then it would ask the .com name servers:
- Question: "Do you know the nameservers for example.com?"
- Answer: "Yes, here is the IP address of the example.com nameserver"
- Finally, it would ask the example.com name servers:
- Question: "Do you know the IP of www.example.com?"
- Answer: "Yes, it is 93.184.216.34"
The only name server that knows the full hostname is the one for example.com, since it's also the only server that needs to know it. All the other servers only know a part of the query. This doesn't help you to stay completely anonymous, yet it does reduce the amount of data you give away. This is part of the Firefox DoH implementation and its Trusted Recursive Resolver (TRR) technology.
There are Some Trust Issues
The second problem with DoH concerns threat models. Threat modeling involves identifying potential vulnerabilities, suggesting countermeasures, and deliberately ignoring low-level risks. Even though threat models are an important part of information security, I'm usually not a fan, at least when it comes to the average user. Sure, air gapping your PC and placing it behind two layers of bulletproof glass and an electric crocodile pit is a little over the top if you only use your computer for playing spider solitaire. But if you need to decide whether you should enable HTTPS for your homepage or not, you don't really need to spend hours pondering your threat model. It takes less than 15 minutes to set up with Let's Encrypt, so just set it up.
Unfortunately, TRR is a completely different beast. I don't know the rationale behind Mozilla's decision, but I assume that, given the huge amount of bandwidth involved and the fact that Mozilla may want to have TRR enabled by default in future versions of Firefox, they wanted infrastructure that was both reliable and safe from DDoS attacks. If you think about reliability and DDoS resistance, Cloudflare immediately comes to mind. Mozilla partnered up with them and uses their 1.1.1.1 resolver for its DoH implementation.
This causes problems for some people. If you only ever visit Facebook and Twitter, you couldn't care less whether Cloudflare knows about your DNS requests. However, if you are a reporter conducting research for articles and handling sensitive information, you may not want to route all your DNS requests through an American company that could potentially trace them back to you. On the other hand, there are a few benefits to having an external DoH server. For example, if you are working on an insecure public network, you don't have to communicate with a DNS server in cleartext if you use the encrypted Cloudflare server. Also, the name servers that 1.1.1.1 queries on your behalf will only see a Cloudflare DNS server asking for the IP that belongs to a given hostname, not your IP.
How Common is it for Tech Companies to Lose Customer Data?
The question is, is Cloudflare trustworthy? Well, yes of course it is. And they promise to delete any information they have stored about you within 24 hours. But, mistakes happen.
- Just this year, Twitter admitted to accidentally storing the plaintext passwords of their users in a log file.
- Then, German domain registrar DomainFactory unintentionally leaked sensitive user account data, which was retrieved by an attacker. I'd love to report that this was an elaborate hack by a gang of sophisticated attackers, probing the company's website and infrastructure for months with the goal of selling the data. But the vulnerability was painfully simple. It appears that they exposed some error data via an XML feed (why would you do that?!) when a user caused an error in some way and a lot of their sensitive data was leaked via that feed. You may wonder what triggered the error for so many users? The culprit – yet again, I wish I was making this up – was actually an error in the gdpr_policy_accepted field. (If you don't know what GDPR is and why this is ironic on multiple levels, you can get up to speed by reading our Whitepaper: The Road to GDPR Compliance.) They asked the user to acknowledge their data protection policy, but when the user clicked 'Yes', an error occurred and the data that should be protected became readable for everyone. This was because the backend expected a boolean value but got a string instead, triggering an error message that contained user data that ended up in a publicly accessible XML feed. Ouch!
The bottom line is that people make mistakes – even those that work in the IT departments of large corporations. Even at Cloudflare.
A Single Point of Failure
What's also worth mentioning is that even if customer data is secure, there might be outages. If 1.1.1.1 becomes the default DNS server for Firefox and there are any availability issues, not a single Firefox user who kept the default settings will be able to issue DNS requests – and therefore open a website. If you think that outages are impossible, given the vast resources Cloudflare boasts, remember that even AWS had a major outage last year. You may not have noticed the outage just by looking at Amazon's status page, since it relied on AWS (the service that it's supposed to monitor) in order to work correctly and show its status!
That's why Firefox allows you to use your own Trusted Recursive Resolver. You only have to change the IP in the settings – but how many users know about TRR and how to change it or why? One percent would be a very generous estimate and that's troubling.
Are DNS Over HTTP Servers Secure?
As we've already established, DNS over HTTPS is a very young technology. It's not clear yet which server software will end up being most popular with website administrators. However, if you simply copy Google or Cloudflare's implementation, you could run into an issue – CORS. Let me cite some text from the I-D:
The integration with HTTP provides a transport suitable for both existing DNS clients and native web applications seeking access to the DNS. Two primary use cases were considered during this protocol's development. They were preventing on-path devices from interfering with DNS operations and allowing web applications to access DNS information via existing browser APIs in a safe way consistent with Cross Origin Resource Sharing (CORS).
Format of DoH Responses
Before we talk about CORS, let's think about the format of DoH responses. The I-D describes the application/dns-message media type, which is essentially a raw DNS packet in the HTTP response message. It's useful for most computer programs as there are already parsers available for that message format. However, the I-D states that it would allow web applications to "access DNS information via existing browser APIs". There is no existing browser API that can decode raw DNS packets, so that work is done on the server side: Google and Cloudflare can send back a message in JSON format instead (which a browser can easily decode).
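As an illustration, here is how the JSON flavour might be queried from a script; the endpoint, parameters, and response fields below reflect Cloudflare's documented behaviour at the time of writing and may differ for other resolvers:

```python
import requests

# Ask Cloudflare's DoH endpoint for a JSON-formatted answer
response = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
answer = response.json()
for record in answer.get("Answer", []):
    # Each record carries the queried name, the numeric record type, and the data (here, an IP)
    print(record["name"], record["type"], record["data"])
```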
However, if your web application that wants to access this data doesn't run on the same host as the DoH server, you encounter a problem. You can't access the data due to the Same Origin Policy (SOP). That's why the Internet Draft mentions CORS. It allows you to access the data anyway, even though the origins don't match.
But – and let's assume your local server would also have such an API – is that dangerous?
How do You Disable Trusted Recursive Resolver or DNS Over HTTP in General?
As you see, whether TRR/DoH is useful depends on a lot of factors. You need to think about whether you want your DNS requests to be routed through an American company and if it suits your threat model. There, I said it! On the other hand, it's also not ideal to send all of your DNS requests in plaintext.
If, ultimately, you decide that TRR or DoH are not right for you, this is how you disable it in its current implementations.
- If you want to disable DoH in Firefox, you must be brave, as you have to open about:config and dismiss the scary 'This might void your warranty!' dialog.
- Once you do this, you need to search for network.trr.mode in the search bar. You will be left with exactly one option. You can type multiple numbers into the field. And even though this goes without saying, 2 activates DoH and 5, obviously, deactivates it. (Zero and 1 would be simpler, but I'm sure it makes sense on some level.)
If DoH is enabled by default in Android Pie, there is an easy way to disable it. Reportedly, there is a setting in the Network and Internet Settings menu, called Private DNS. As mentioned in the Android Developers blog, there is a button you need to check that turns it off. Unfortunately, I'm unable to independently verify that claim, simply because my latest Android phone was made by Samsung (which, unsurprisingly, tells me that my phone is up to date). So, even though technology companies do their best to change DNS for the better, some things just never change.
Is DNS Over HTTP the future of DNS?
In this article, we looked at the technology behind DoH and DNS as well as the history of the Domain Name System. While DoH may not yet be widespread, it is a good and necessary addition to DNS – if implemented correctly. We established that it depends on your personal use of the web whether or not you want to route your DNS requests through an American company. If you don't trust Cloudflare or Google, you can alternatively set up your own DoH resolver, but just beware of vulnerabilities such as the permissive CORS implementation we talked about in this post.
We take a look at the technology essentials for university
Technology has changed how students learn; it is no longer necessary to carry stacks of books.
This means that much of a student's time will be spent working in front of a laptop or computer.
Therefore, we explore some applications and devices that can help to boost your productivity.
Note-taking improves your overall results
A good note-taking program can help students excel and achieve their goals.
The best ones are reliable and sync notes across all of your devices.
Make your technology work for you and help you achieve your goals
There are also more specialized applications for students to use.
For example, you can download a personal study planner online to prepare for upcoming tests.
Student-focused search engines are another option, surfacing different information than Google.
Documents and PDFs piling up? A document manager can keep them all in one location.
Another way to take control of your technology is to use smart home devices to your advantage.
For example, you can set reminders for daily study sessions, as well as for bedtime and wake-up times.
You can also use it to ask informational questions so you can quickly find the answers you need.
There are significant variations between the EU and the US when it comes to the 60GHz band – and the rest of the world, for that matter.
One of the most visible differences is the band's boundaries: the US, Canada and Korea all run it from 57 to 64 GHz, giving 7 GHz of bandwidth. Japan similarly delivers 7 GHz of bandwidth, but uses 59 – 66 GHz. China uses a 5 GHz band from 59 – 64 GHz. And the EU delivers 9 GHz of bandwidth, from 57 to 66 GHz.
These are subtle variations, but demand that chips will require a degree of software configurability if they are to fully exploit the full bandwidth in each region and realise an economy of scale by deploying the same chip in each region.
But that’s not the complete picture and a more interesting consequence can be seen when we look at EIRP (equivalent isotropically radiated power) transmitter regulations, instead. Here, the US is regulated by the FCC 15.255 regulations and the EU deploys CEPT REC(09)01 supplemented by ETSI EN 302 217.
The baseline for EIRP limitations for operation in the US, under FCC Part 15.255, is +40 dBmi – with the potential for trade off of conducted power (up to 27 dBm) and antenna gain within that upper limit… we’ll come back to this.
Moreover, in August 2013 the FCC ruled that outdoor link operation between fixed points could use an even higher EIRP, up to a maximum average power of up to +82 dBmi (minus 2 dB for every 1 dB that the antenna gain is less than 51 dBi). Thus increased EIRP above +40 dBmi is possible for 60 GHz fixed wireless systems using high gain antennas with a gain of +30 to + 51 dBi.
This compares with Europe’s 55 dBmi. But, whereas the EU CEPT 09(01) stipulates a minimum gain greater than 30 dBi and a maximum power of less than +10 dBm, the FCC’s regulations allow a gain and power trade off within the same +40 dBmi average EIRP limit.
This trade-off between gain and power allows for the use of lower cost active phased array antennas for 60 GHz wireless backhaul. This technology is coming to market to as a consequence of the larger wireless consumer electronics market, which is emerging for 60GHz WiFi market under the ‘WiGig Certified’ programme.
For companies deploying backhaul networks intended for operation in Europe, by contrast, the current regulations lead to the use of mechanically steered high-gain antennas (> +30 dBi) with custom modems (power < +10 dBm). This traditional approach is both more expensive and less flexible than the use of active 60 GHz phased array technology, and as a consequence it increases the equipment costs seen by EU mobile operators – typically more than $8,000 per 60 GHz link (~£5000, ~€6000).
This can be compared to the US market, where FCC rules support the full use of active phased array technology which has the potential to deliver such links at cost points below $1,000 per link (~£600, ~€700) – with the added benefit of auto-installation and tracking over wide coverage angles.
In 2010 Eric Schmidt (then CEO of Google) famously said: “Every two days, we create as much information as we did from the dawn of civilisation up until 2003.” Cisco currently predicts that 1.4 zettabytes (1021 bytes) will flow over global networks by 2017.
And herein lies the problem. Today’s average backhaul capacity is 35 Mbits/s per cell. This needs to increase to 1 Gbit/s per cell in just five years to support the predicted mobile data growth that will arise from the switch to 4G and the consumer demands for video streaming over the mobile network.
As a result, LTE operators anticipate that more than half of their annual capital expenditures will be allocated to meet this demand, using smaller cell sizes… with the Small Cell Forum estimating the need for up to 72,000 small cells to support mobile traffic growth in London alone.
And since the outdoor operation of WiGig consumer electronic devices – mobile phones and tablets – will already violate EU CEPT REC(09)01 rules, we need to ask why we are constraining the application of phased array technology for backhaul in Europe.
With this in mind, it is likely that the current regulations for the 60GHz band are outdated and will limit network capacity in the EU. We suggest, therefore, that there is an urgent need to review the EU's 60GHz radio regulations and harmonise them with the US's in order to stimulate the deployment of low-cost 60GHz wireless backhaul in Europe.
What is a data fabric? This is a question that is being asked more frequently as enterprises seek to do a better job of managing and analyzing data. Data fabric is a term used to describe an architectural approach that enables enterprises to manage and process data across multiple platforms and clouds. Is it similar to or different from a data mesh? When should enterprises use a data fabric? What are the benefits of using data fabric in edge computing? These critical questions need answers before deciding whether to deploy a data fabric.
Defining the enterprise data fabric
A data fabric is a metadata-driven architectural approach that enables an organization to collect, process, and analyze data from disparate sources in real-time. With a data fabric applied virtually atop various data repositories, data fabrics provide a unified view of data, regardless of location. Organizations can now bring siloed data together for analysis and reporting. However, it’s essential to remember that while the management is unified, the storage is not.
How is data fabric similar to or different from a data mesh?
Like a data mesh, a data fabric aims to address many of the same issues. For instance, both approaches address the problem of managing data in a diverse data environment. However, both use different strategies to accomplish this task. The data fabric strives to construct a single virtual management layer atop distributed data. In contrast, the data mesh aims to provide distributed teams with a method for managing data according to their own needs while still adhering to common governance standards.
When should enterprises use a data fabric?
There are many reasons an organization might opt to use a data fabric. Sometimes, it may be because the organization has outgrown its current data management infrastructure. An enterprise might also deploy a data fabric to improve its ability to analyze data from disparate sources, or to provide better access to data for distributed teams.
Are there data fabric use cases in edge computing?
One data fabric use case helps with the typical problem of getting data back to the core efficiently and securely. The goal is to retrieve data from numerous edge data centers and send it to the core data center with minimal latency. A data fabric can create a simple solution for this by using its transport capabilities to embed messages into a message stream at the edge. The data fabric can then handle all data motion from edge to core, including security while in motion and at rest.
As the message stream contains information on the data center name, source machine name, sensor name, or event type, all edge center data may combine into a single message stream. Companies may nonetheless analyze each subset of the data separately.
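As a rough sketch of that idea (the site, machine, and field names below are invented for illustration), each edge site can wrap its readings in a small metadata envelope before handing them to the fabric's transport layer, so a single combined stream can still be filtered per data center, machine, or sensor at the core:

```python
import json
import time

def envelope(data_center: str, machine: str, sensor: str, event_type: str, value: float) -> str:
    """Wrap a raw reading in the metadata used to route and filter messages in the stream."""
    message = {
        "data_center": data_center,
        "machine": machine,
        "sensor": sensor,
        "event_type": event_type,
        "value": value,
        "timestamp": time.time(),
    }
    return json.dumps(message)

# Every edge site publishes to one logical stream; consumers at the core
# can still analyze each subset (per site, per machine, per sensor) separately.
print(envelope("edge-site-berlin", "press-07", "vibration", "telemetry", 0.42))
```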
Edge and data fabric are intertwined
Edge computing is predicated on the idea that more data is being created by devices, be they cameras or sensors in a factory. Technologies and architectures, such as data fabric and data mesh, that help manage and make sense of data are an important piece of the edge ecosystem.
Racetrack Memory products could arrive within five years, according to research pioneer Stuart Parkin, who is an experimental physicist, winner of the Millennium Technology Prize in 2014, an IBM fellow and director of research centres at Stanford University in the US and the Max Planck Institute at Halle, Germany.
Parkin believes Racetrack Memory is likely to arrive sooner than expected because silicon-based storage media, including flash, have reached limits in their development.
“Flash is basically a capacitor that works by applying voltages to it that are switched by a transistor,” he said. “That leads to a breakdown of cells and means a limited number of reads and writes and a tremendous overhead to ensure the memory cell is not written to too many times.
“Also, it is difficult to scale flash to smaller sizes. It being a capacitor, to get greater density, you have to put more charges in a cell and then it becomes more difficult to read and write. There is a lot of overhead with 3D solutions. Boosting the number of levels doesn’t buy you as much density as you’d think.”
Parkin’s argument is that spinning disk and silicon-based storage such as flash are essentially 2D methods of storage with access to a single layer of cells.
Racetrack, by contrast, allows one transistor or access point to connect to 100 bits held in one of “a forest of nanoscopic wires”, he said.
The technology is based on use of magnetism, in which these tiny wires are coded by rotating the magnetic moment of atoms along the wire to opposing poles. Also, data can be induced to move up and down the wires and past a read/write head.
Parkin said the materials exist and the physics is known – what is needed next are companies to begin building prototypes.
“We are now at the point that silicon-based technologies are at the end of the road and companies need to invest,” he said.
“The physics is there, the concept works, the tooling is now available. If there were investment by companies then five years is a reasonable timeframe.”
Cybersecurity in distance education is something the world was not prepared for. Stick around and learn about the new normal in studying and its dangers.
Distance education is a mode of study that has endured over time; many universities offer it to make learning more accessible, typically for students who want or need to stay at home while pursuing their studies. Since the Covid-19 pandemic, most education systems have had to reinvent themselves in a short period of time to adapt their classes to the new reality. This shift has reached well beyond the university level, raising a series of challenges around cybersecurity in distance education and keeping virtual classes safe.
E-learning or distance education.
E-learning can be defined as the use of electronic resources to deliver an educational program outside the physical facilities of a school. It covers courses, school years, degrees, or any other type of academic training carried out on online or remote platforms.
In practice, it works through video conferencing software such as Zoom, Skype, or Whereby, which lets students interact with teachers in real time. Students and teachers can also use other resources, such as Google Classroom, for assigning and submitting homework.
What are the benefits of distance learning?
In this new normal, virtual or distance classes have almost entirely replaced in-person classes, which has brought a series of benefits, including the following:
- Students can learn anywhere: particularly in higher education, students can enroll in universities in other countries without needing to travel or relocate. Distance education widens study options by removing the physical barrier.
- It offers opportunities to all types of students: distance education opens study opportunities to people who cannot physically attend an educational center, whether because of mobility difficulties or other special circumstances, and who otherwise would not be able to access face-to-face classes.
- It reduces costs considerably: online classes mean a smaller budget for both students and educational institutions, since distance education involves fewer staff and cuts spending on classroom maintenance, salaries, services, and more.
And what are the disadvantages?
It is also prudent to look at the other side of the coin: there are latent risks, most of them related to cybersecurity in distance education and to personal circumstances.
- Unequal opportunities among students: the radical change that universities, colleges, and high schools have undergone due to the current health crisis has left a percentage of students exposed. Not all of them have a reliable Internet connection, which forces some to rely on shared networks and exposes them to computer threats and information theft.
- Difficulties in adapting and the lack of a controlled environment: distance education can be a challenge for students who moved abruptly from a physical environment to a virtual one. Like any change, it requires a period of adaptation to the new space, environmental distractions, and personal situations. Many students struggle to move from the neutral environment of an educational institution to the less controlled setting of the home, which can interfere with learning.
- Deterioration of interpersonal relationships: moving to an online environment restricts in-person social contact, which erodes interactions and creates a social gap between students and teachers. Likewise, limited communication with teachers or tutors can make students distrust the evaluation and teaching methods.
Cybersecurity in distance education
As we have already seen, distance education takes place in an online environment: a different space of interaction between students, teachers, and administrators, where a great amount of information of varying sensitivity circulates from many different sources. This environment therefore entails a series of cybersecurity risks.
First, there can be vulnerabilities in the systems used for e-learning. These systems offer many different functions, with large numbers of people interacting simultaneously and files and applications being uploaded and downloaded. It is not surprising that they face a range of security threats, including authentication problems, insecure or unstable communication, system crashes, information leaks, and malicious file execution.
There are also external cybersecurity risks which, combined with inexperienced or uninformed users, can lead to security lapses and successful attacks. Some of the risks that can occur are:
- Attacks on software: viruses, worms, ransomware, etc.
- Theft of intellectual property: copyright infringement, counterfeiting, piracy.
- Hardware failures or errors
- Information extortion: phishing, vishing, etc.
- Identity theft: hacking, theft of usernames and passwords, educational credentials, etc.
Solutions and preventive measures for cybersecurity in distance education
There are some recommendations that institutions, together with students, can follow to maintain cybersecurity in distance education:
- Use a dedicated device for class platforms: it is advisable to use a single device for online classes. Spreading schoolwork across several devices puts stored information at risk, since those computers may also be exposed to social networks, websites, and applications outside the educational environment that could be infected with malware. Using a single device – laptop, phone, or tablet – helps secure the information stored on it, as well as educational accounts, passwords, and bank details associated with tuition payments.
- Create a good security policy: teach students how to maintain computer security by training them to identify phishing attempts, fraudulent emails, malicious files, and infected websites and software, as well as to safeguard their information and use a unique password for every service that requires one.
- Create backups of important information: this is useful for both students and educational institutions, which store extremely important data such as personal information of students and employees, bank details, institutional funds, and legal documents. It is important to keep a backup of all this information in case of an attack, and to protect it using encryption tools and external storage.
- Limit the use of social networks in the educational environment: another good practice is to avoid sharing sensitive personal or educational data on social networks, since cybercriminals can use these channels to phish uninformed students and steal information that helps them reach a larger target, such as the educational institution itself.
We have now looked at distance education in more detail: its advantages, its disadvantages, and how to keep important information protected. Maintaining good security practices is vital to making the distance learning experience both safe and effective. And if you run an educational institution that is implementing distance learning and have been the victim of a cyber attack – or are simply looking for IT security solutions – feel free to contact our team.
A Six Sigma definition is closely related to statistical modeling of manufacturing processes.
The concept originated from a sigma rating, which indicates the number of defect-free products developed in a manufacturing process.
Below, we will examine this definition and the Six Sigma methodology in more detail.
Six Sigma: Definition and Description
The term "Six Sigma" originated within statistical quality control and refers to manufacturing output with extremely low levels of defects – lower than 3.4 per million opportunities.
Six Sigma is a business methodology that aims to achieve these results through the application of statistical and data-driven methods.
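To connect the name to the number: a process's sigma level is conventionally computed from its defects per million opportunities (DPMO), including the customary 1.5-sigma long-term shift, under which 3.4 DPMO corresponds to six sigma. A small sketch of the arithmetic:

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Short-term sigma level, using the conventional 1.5-sigma long-term shift."""
    defect_rate = dpmo_value / 1_000_000
    return NormalDist().inv_cdf(1 - defect_rate) + shift

print(round(sigma_level(3.4), 2))       # ~6.0 -> "Six Sigma" quality
print(round(sigma_level(66_807), 2))    # ~3.0 -> a typical "three sigma" process
print(dpmo(defects=12, units=4_000, opportunities_per_unit=5))   # 600.0 DPMO in a sample run
```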
This approach has several underlying assumptions:
- Stability and predictability in business processes is essential to organizational effectiveness
- Both manufacturing and business processes can be defined, measured, analyzed, and improved
- A commitment to statistical and data driven methods should be made by everyone within the organization
Six Sigma has two frameworks that are frequently used:
- DMAIC, a five-phase methodology aimed at improving projects
- DMADV, a five-phase method aimed at creating new product or process designs
Through methods such as these, Six Sigma aims to improve quality management and control across the organization. Although it may not be perfect, Six Sigma generally achieves positive outcomes whenever it is applied.
It is not, however, the only process and performance improvement methodology.
Six Sigma vs. Lean
Lean and Six Sigma are both designed to continuously improve processes and business outcomes. The two methods, however, differ in their approach.
The primary aim of Six Sigma, as mentioned, is to reduce defects, thereby improving quality, profits, products, processes, and customer experiences. Six Sigma, like the lean methodology, is designed to achieve certain business outcomes, such as increased financial returns and improved customer value.
Lean, however, aims at waste reduction through processes such as the build-measure-learn cycle. Namely:
- Building new products and processes
- Measuring the effectiveness of those processes
- Learning from those analyses
The cycle is then repeated continuously, with the result that business processes gradually improve and waste is gradually reduced over time.
Although the outcomes of both lean and Six Sigma may overlap to a certain extent, they do have different effects on the organization and their outcomes will also differ to a certain extent.
Choosing the Right Process Improvement Method
There is not necessarily one “correct” method for every situation. Instead, your organization should evaluate its own goals and direction, then choose a method that is most aligned with its strategy and values.
An organization that is data-driven and committed to using quantitative approaches in business may prefer the Six Sigma approach. On the other hand, an organization that wants slightly more flexibility, without the emphasis on data and statistics, may prefer the lean approach.
Or your business may choose to combine the two.
Lean Six Sigma is the approach that takes the best of each system and integrates them into one.
Characteristics of the Lean Six Sigma approach include:
- Reducing waste
- Reducing variation and defects
- Eliminating errors
- Aligning the corporate culture around the methodology
- Continuous improvement
In short, like both of the methods listed above, Lean Six Sigma attempts to maximize customer success, enhance profits, and improved continuously, while reducing defects and waste.
There are different variations on this approach. Some emphasize a more data-driven method, while others incorporate more elements from the lean methodology.
In either case, the result is often a methodology that is flexible, while able to more effectively meets the needs of an organization. Perhaps this explains why Lean Six Sigma has become so popular.
Certification in Six Sigma, Lean, and Lean Six Sigma
Given the popularity of these systems, it should be no surprise that certification programs have cropped up around the world.
There are certification programs for specialties such as:
- Six Sigma
- Lean Six Sigma
- Lean Manufacturing
- Lean Management
- Lean Product Development
Most certifications for Six Sigma and Lean Six Sigma tend to follow a ranking system similar to karate or judo belt systems.
- A green belt certification requires attending a course and learning core concepts, such as DMAIC and DMADV
- A black belt certification applies Six Sigma concepts to drive organizational change, analyze statistics, supervise green belts, and so forth
- Master black belts and champions are senior managers or business leaders who lead Six Sigma strategies across the organization
Each certification can certainly be beneficial to one's career, though, like any other certification, it will only be valuable if it is used by the organization.
An organization that prefers one method over the other, in other words, would value a certification in that area.
Six Sigma’s definition originated within statistics, quality control, and manufacturing.
However, it has become a popular term thanks to the Six Sigma methodology, which also began in manufacturing but has since spread to many other disciplines. The term and the Six Sigma methodology have arguably become even more popular after it was combined with the lean methodology, another very popular business process improvement methodology.
Today, understanding the statistical meaning behind the term is less important than understanding the business methodology that has become so widespread.
Security researchers have discovered weaknesses in WPA2 (Wi-Fi Protected Access II), the security protocol for most modern Wi-Fi networks. An attacker within range of a victim can intercept credit card numbers, passwords, photos, and other sensitive information using the bug called KRACK (Key Reinstallation Attacks).
What this means is that the security built into Wi-Fi is likely ineffective, and we should not assume it provides any protection. If the problem the researchers have discovered is confirmed, it will be very difficult to fix, because WPA2 is built into almost every internet-connected device.
During the initial research, it was found that Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys, and others are all affected by some variant of attacks. The attacks against Linux and Android 6.0 or higher devices could be devastating because these devices can be tricked into (re)installing an all-zero encryption key. Currently 41% of Android devices are vulnerable to this attack.
Depending on the network configuration, it is also possible for attackers to inject and manipulate data – for example, injecting ransomware or other malware into websites.
US Homeland Security’s cyber-emergency unit US-CERT confirmed the news of vulnerability on Monday and described the research this way- “US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected. The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017.”
Most of the protected Wi-Fi networks including personal and enterprise WPA2 networks are affected by the KRACK and are at risk of attack. All the clients and access points that were examined by researchers were vulnerable to some variant of the attack. The vulnerabilities are indexed as: CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13084, CVE-2017-13086, CVE-2017-13087, CVE-2017-13088.
“The weakness lies in the protocol’s four-way handshake, which securely allows new devices with a pre-shared password to join the network. If your device supports Wi-Fi, it is most likely affected,” said Mathy Vanhoef, a computer security academic, who found the flaw.
Changing your Wi-Fi password will not help, even if you set a strong one. Instead, update all your devices and operating systems to the latest versions. For now, users can protect themselves by sticking with sites that use HTTPS and by keeping Wi-Fi turned off when it is not needed. Since the security issue is related to Wi-Fi, the attacker has to be within range, so the odds of widespread attacks are apparently low.
The warning came at Black Hat security conference, and is scheduled to be formally presented on November 1 at ACM Conference on Computer and Communications Security (CCS) in Dallas.
Internet bandwidth has been growing at a near-exponential pace for schools. Various educational tools that help educators adapt to students' individual learning styles require high-speed Internet, even when students are in the classroom.
Still, many schools have slow (and sometimes non-existent) Internet speeds. When Internet speed is slow, students have less time to learn. However, that’s not the only reason schools should prioritize fiber optics. There are many benefits, which we discuss below.
Fiber Optics Improve the Learning Experience
Even when students have great Internet at home, if the school’s Internet lags, they will experience delays in sending and receiving data. Not only is that wait time better filled with learning, but it can also be challenging to regain a student’s attention once it’s lost.
In addition, fiber optics lets teachers administer computerized testing, which frees the teachers to use that time in more productive ways.
Teachers Have More Tools
With fiber optics, teachers can show instructional videos, play games, and take advantage of educational technology to help students with learning disabilities, all without buffering and lost connections.
Secure Data and Schools
Because copper cabling is susceptible to electromagnetic interference (EMI) and signal leakage, it can potentially expose a school's network to cyberattacks. Using fiber optics instead decreases the risk of these vulnerabilities.
Additionally, school security cameras and systems can rely on the fast and reliable fiber-optic Internet. The extra security provided by these dependable security systems can help parents, students, and administrators feel at ease.
Fiber Optics Save Time
It’s not just teachers and students who benefit from fiber-optic technology; so do school administrators. Faster Internet speeds help speed up administrative tasks, such as recording attendance, ordering supplies, creating schedules, and entering relevant student information.
Future-Proof Your Schools
Even if a school’s current cable network is working for them, advancing technology will eventually force change. Since fiber optics are so much faster than copper wires, it will ensure that schools are ready for years, if not decades, to come.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
- UL2050 Certifications and installations for Secure Spaces
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Artificial Intelligence (AI) is changing the world. Many industries have already been impacted by the integration of AI technology to improve business processes, but that’s only the beginning. Through the use of big data, machine learning, and the Internet of Things (IoT), AI captures the ability to think and make decisions like a human would — but on a massive scale.
Correspondingly, this technology has found a use in almost every sector of industry. Education, healthcare, human resources, marketing, and supply chain management have changed and continue to develop through the use of AI technology. Here’s how.
You might think education is a field that AI isn’t likely to disrupt or automate. While it’s true that education requires a human perspective, teachers and administrators can use AI tools to improve their approaches to teaching.
AI technology benefits students as well as teachers through its ability to do one thing most schools don’t have the time or staff to achieve: personalize content to the individual.
Companies like Carnegie Learning and OpenStax have created intelligent software that responds and adapts to a learner much like a human tutor would. Through AI, learning programs like these can be tailored to a student’s learning needs to give them an individualized learning experience.
Additionally, AI stands to assist school administrators in the day-to-day monitoring and inventory tasks. Everything from building temperature, traffic flow in the halls, and ordering supplies can be improved with the use of AI to monitor tasks and make recommendations. Even the time educators spend grading work can be cut down by intelligent grading software, giving them more time to focus on students.
There are no limits to the ways in which AI is changing and improving education without removing the necessary human elements.
Perhaps the field that will be most impacted by the integration of artificial intelligence is healthcare. Here, new technologies can save lives, cut costs, and improve the wellbeing of people across the world.
AI allows for improvements in necessary aspects of care that also provide analytic tools to help diagnose and prevent disease. Through electronic health records (EHR) and wearable technology that can track vital patient data, healthcare providers have access to a broad picture of health solutions that AI technology allows them to analyze and diagnose with ease.
One example of this accessibility and applicability of this technology in everyday life is its use in Alzheimer’s patients. HealthTech cited the story of a 59-year-old individual diagnosed with early-onset Alzheimer’s disease. This patient was able to use an Amazon Echo Dot for helpful reminders on taking medication and other daily care tasks.
AI is changing the landscape of healthcare tech and treatment, giving patients and providers better tools and outcomes with lifesaving implications. Through intelligent care solutions, health problems can be caught and treated without a patient having to be face-to-face with a physician.
Artificial intelligence is improving our health and wellbeing to the point that some experts even believe it will boost our lifespans – an invaluable gift for the future of humanity.
Ironically, another field artificial intelligence can benefit is human resources (HR). While by no means a replacement for the individuals who understand the needs and problems of human beings, AI can save them time and make their jobs easier.
The Human Resources Professionals Association found that 14 percent of HR workers surveyed already use AI in recruitment. These tools help HR staff analyze a vast stack of resumes to find the ideal candidates for a position while eliminating biases and flawed logic that humans bring into the recruitment process.
AI technology can also be helpful in automatically scheduling meetings, booking conference rooms, and even answering simple HR questions that can cut into a human resource worker’s time.
Through tools like chatbots and scheduling software, artificial intelligence is revolutionizing the human resource experience, giving these workers the time to focus on the big picture and person-to-person situations to keep the office running smoothly.
Already, we encounter AI in marketing every day. Artificial intelligence algorithms and machine learning make up much of the personalized advertising content we encounter when browsing websites like Amazon or Facebook.
AI is perfect for marketing, where it can analyze vast stores of data and create personalized content for millions of consumers instantly. AI can even write sales emails and build customer profiles with informed suggestions for effective ads. When so much of our data is readily available through our social media and purchasing history, technology can easily step in to customize our shopping experiences to the smallest detail.
The expansion of AI in marketing is only a matter of time. While some voice concerns over the use of consumer data in marketing, AI also offers protections to that data that were never before possible. Through instantaneous learning processes, cybersecurity programs boosted by AI can track and defend cyberattacks and promote secure data.
Supply chain management
Managing the movement of products and supplies in a global economy is an incredibly complex task. Here, too, AI has broken ground in both the manner of transportation and the ability to manage supply chains.
Research has found that 63 percent of companies that used AI in their supply chain processes reported an increase in revenues. Here’s why.
For supply chain and fleet managers struggling with the complexity of managing routes, tracking assets, improving driver behavior, and so much more, artificial intelligence tools can better the process.
AI can analyze huge amounts of data to interpret ideal decisions for fleet management. Devices can be mounted in fleet vehicles to analyze driving behaviors and offer analytics for coaching and correction. This can save shipping companies millions in damages while increasing the safety of other drivers on the road.
Additionally, better predictions in delivery time, route management, and overall safety make intelligent tools an invaluable addition to supply chain management.
In almost every industry, the presence of artificial intelligence is improving processes, saving companies money, and saving time and resources for human workers. The future of education, healthcare, human resources, marketing, supply chain management, and so much more will be shaped by the integration of these useful technologies.
The only question is how society will respond to the automation of jobs that inevitably occurs from that integration.
Neural Network Tool
One Tool Example
Neural Network has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.
The Neural Network tool creates a feedforward perceptron neural network model with a single hidden layer. The neurons in the hidden layer use a logistic (also known as a sigmoid) activation function, and the output activation function depends on the nature of the target field. Specifically, for binary classification problems (e.g., the probability a customer buys or does not buy), the output activation function used is logistic, for multinomial classification problems (e.g., the probability a customer chooses option A, B, or C) the output activation function used is softmax, for regression problems (where the target is a continuous, numeric field) a linear activation function is used for the output.
Neural networks represent the first machine learning algorithm (as opposed to traditional statistical approaches) for predictive modeling. The motivation behind the method is mimicking the structure of neurons in the brain (hence the method's name). The basic structure of a neural network involves a set of inputs (predictor fields) that feed into one or more "hidden" layers, with each hidden layer having one or more "nodes" (also known as "neurons").
In the first hidden layer, the inputs are linearly combined (with a weight assigned to each input in each node), and an "activation function" is applied to the weighted linear combination of the predictors. In the second and subsequent hidden layers, output from the nodes of the prior hidden layer are linearly combined in each node of the hidden layer (again with weights assigned to each node from the prior hidden layer), and an activation function is applied to the weighted linear combination. Finally, the results from the nodes of the final hidden layer are combined in a final output layer that uses an activation function that is consistent with the target type.
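As a concrete (if simplified) illustration of that structure, the sketch below computes one forward pass through a network of the kind this tool builds – a single hidden layer with logistic activations and a softmax output for a classification target. The weights here are random stand-ins for the values the estimation step would learn:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_inputs, n_hidden, n_classes = 4, 10, 3       # e.g. 4 predictors, 10 hidden nodes, 3 target levels
W_hidden = rng.normal(scale=0.5, size=(n_hidden, n_inputs))   # input -> hidden weights
b_hidden = rng.normal(scale=0.5, size=n_hidden)
W_out = rng.normal(scale=0.5, size=(n_classes, n_hidden))     # hidden -> output weights
b_out = rng.normal(scale=0.5, size=n_classes)

x = np.array([0.2, -1.3, 0.7, 0.05])           # one (scaled) record of predictor values
hidden = logistic(W_hidden @ x + b_hidden)     # weighted linear combination + logistic activation
probs = softmax(W_out @ hidden + b_out)        # softmax output for a multinomial target
print(probs, probs.sum())                      # class probabilities that sum to 1
```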
Estimation (or "learning" in the vocabulary of the neural network literature) involves finding the set of weights for each input or prior layer node values that minimize the model's objective function. In the case of a continuous numeric field this means minimizing the sum of the squared errors of the final model's prediction compared to the actual values, while classification networks attempt to minimize an entropy measure for both binary and multinomial classification problems. As indicated above, the Neural Network tool (which relies on the R nnet package), only allows for a single hidden layer (which can have an arbitrary number of nodes), and always uses a logistic transfer function in the hidden layer nodes. Despite these limitations, our research indicates that the nnet package is the most robust neural network package available in R at this time.
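The tool itself runs the R nnet package under the hood, but if you want to experiment with a comparable model outside Designer, scikit-learn's MLPClassifier is a rough analogue: one hidden layer, logistic activations, and an L2 penalty that plays a role similar to nnet's weight decay (the two are not identical, so treat this only as a sketch):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)          # z-score scaling, one of the options discussed below

model = MLPClassifier(
    hidden_layer_sizes=(10,),                   # a single hidden layer with 10 nodes
    activation="logistic",                      # sigmoid hidden units
    alpha=0.01,                                 # L2 penalty, loosely comparable to weight decay
    max_iter=2000,
    random_state=0,
).fit(scaler.transform(X_train), y_train)

print("holdout accuracy:", model.score(scaler.transform(X_test), y_test))
```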
While more modern statistical learning methods (such as models produced by the Boosted, Forest, and Spline Model tools) typically provide greater predictive efficacy relative to neural network models, in some specific applications (which cannot be determined before the fact), neural network models outperform other methods for both classification and regression models. Moreover, in some areas, such as in financial risk assessment, neural network models are considered a "standard" method that is widely accepted. This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. Visit Download and Use Predictive Tools.
Configure the Tool
- Model name: Each model needs to be given a name so it can later be identified. Model names must start with a letter and may contain letters, numbers, and the special characters period (".") and underscore ("_"). No other special characters are allowed, and R is case sensitive.
- Select the target variable: Select the field from the data stream you want to predict. The target may be a string field (for binary or multinomial classification models) or a numeric field (for regression models).
- Select the predictor variables: Choose the fields from the data stream you believe "cause" changes in the value of the target variable. Columns containing unique identifiers, such as surrogate primary keys and natural primary keys, should not be used in statistical analyses. They have no predictive value and can cause runtime exceptions.
- Use sampling weights in model estimation (Optional): Click the check box and then select a weight field from the data stream to estimate a model that uses sampling weight.
- The number of nodes in the hidden layer: The number of nodes (neurons) in the model's single hidden layer. The default is ten.
- Include effect plots: If checked, effect plots are produced that graphically show the relationship between a predictor variable and the target, averaging over the effect of the other predictor fields. The number of plots to produce is controlled by "The minimal level of importance of a field to be included in the plots," which indicates the percentage of the total predictive power of the model a particular field must contribute in order to have a marginal effect plot produced for that field. Higher values for this selection reduce the number of marginal effect plots produced.
- Custom scaling/normalization...: The numeric methods underlying the optimization of the model's weights can be problematic if the inputs (predictor fields) are on very different scales (e.g., income, which ranges from seven thousand to one million, combined with the number of members in the household, which ranges from one to seven). A small sketch of the available scalings appears after this list.
- None: Default.
- Z-score: All predictor fields are scaled so that they have a mean of zero and a standard deviation of one.
- Unit interval: All predictor fields are scaled so that they have a minimum value of zero and a maximum value of one, with all other values being between zero and one.
- Zero centered: All predictor fields are scaled so that they have a minimum value of negative one and a maximum value of one, with all other values falling between negative one and positive one.
- The weight decay: The decay weight limits the movement in the new weight values at each iteration (also called an "epoch") of the estimation process. The value of the decay weight should be between zero and one; larger values place a greater restriction on the possible movement of the weights. In general, a weight decay value between 0.01 and 0.2 often works well.
- The +/- range of the initial (random) weights around zero: The weights given to the input variables in each hidden node are initialized using random numbers. This option allows the user to set the range of the random numbers used. Generally, the values should be near 0.5. However, smaller values can be better if all the input variables are large in size. A value of 0 is a special value that causes the tool to find a good compromise value given the input data.
- The maximum number of weights allowed in the model: This option becomes relevant when there are a large number of predictor fields and nodes in the hidden layer. Reducing the number of weights speeds up model estimation, and also reduces the chance that the algorithm finds a local optimum (as opposed to a global optimum) for the weights. Weights excluded from the model are implicitly set to zero.
- The maximum number of iterations for model estimation: This value controls the number of attempts the algorithm can make to find improvements in the set of model weights relative to the previous set of weights. If no improvement is found before the maximum number of iterations is reached, the algorithm terminates and returns the best set of weights found. This option defaults to 100 iterations. In general, given the behavior of the algorithm, it is likely to make sense to increase this value if needed, at the cost of lengthening the runtime for model creation.
- Plot size: Select inches or centimeters for the size of the graph.
- Graph resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi), 2x (192 dpi), or 3x (288 dpi).
- Lower resolution creates a smaller file and is best for viewing on a monitor.
- Higher resolution creates a larger file with better print quality.
- Base font size (points): Select the size of the font in the graph.
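The sketch below illustrates the three scaling options listed above (z-score, unit interval, zero centered), applied to a single predictor column; the function names and data are ours, purely to show what each option does.

```python
import numpy as np

def z_score(x):
    # Mean of zero, standard deviation of one
    return (x - x.mean()) / x.std()

def unit_interval(x):
    # Minimum of zero, maximum of one
    return (x - x.min()) / (x.max() - x.min())

def zero_centered(x):
    # Minimum of negative one, maximum of one
    return 2.0 * unit_interval(x) - 1.0

income = np.array([7_000, 55_000, 120_000, 1_000_000], dtype=float)
print(z_score(income))
print(unit_interval(income))
print(zero_centered(income))
```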
View the Output
- O anchor: Object. Consists of a table of the serialized model with its model name.
- R anchor: Report. Consists of the report snippets generated by the Neural Network tool: a basic model summary, as well as main effect plots for each class of the target variable.
|
<urn:uuid:4f822256-8737-44f3-98d5-80ced0d8550d>
|
CC-MAIN-2022-40
|
https://help.alteryx.com/20221/designer/neural-network-tool
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00225.warc.gz
|
en
| 0.885449 | 1,748 | 3.53125 | 4 |
Software Defined Networking Definition
Software Defined Networking (SDN) is an architecture that gives networks more programmability and flexibility by separating the control plane from the data plane. The role of software defined networks in cloud computing lets users respond quickly to changes. SDN management makes network configuration more efficient and improves network performance and monitoring.
What is Software Defined Networking?
Software Defined Networking (SDN) enables directly programmable network control for applications and network services. Software defined network architecture decouples network control and forwarding functions from physical hardware such as routers and switches to create a more manageable and dynamic network infrastructure.
SDN architecture includes the following components:
• SDN Application — Communicates its requirements for network resources and network devices to the SDN controller through the northbound interface (NBI).
• SDN Controller — Translates the requirements from the SDN application layer to the SDN datapaths. It also provides the SDN applications with a central repository of network policies, a view of the networks and network traffic.
• SDN Datapath — Implements switches that move data packets on a network.
• SDN API — Application program interfaces (APIs) provide both open and proprietary communication between the SDN Controller and the routers of the network.
How to Implement Software Defined Networking?
Implementing software defined networking without a clear purpose and plan is not advised.
The following tips will ensure a smooth network management process:
• Define a use case — Be sure there is a real problem for SDN to solve. Focus on that one, clear issue with a use case. This will allow for measurable outcomes and lessons that can be applied elsewhere when fully implementing SDN.
• Create a cross-functional team — Do not implement SDN in silos. A team with diverse skills is needed for successful implementation. Collaboration is key.
• Test first — Try a non-critical network area for initial SDN implementation before changing the entire network.
• Review — Measure data to see if test outcomes meet goals. Be sure SDN is solving a problem before implementing it across the network.
How Does Software Defined Networking Work?
A software defined network uses a centralized SDN controller to deliver software-based network services. A network administrator can manage network policies from a central control plane without having to handle individual switches.
SDN architecture has three layers that communicate via northbound and southbound application programming interfaces (APIs). Applications can use a northbound interface to talk to the controller. Meanwhile, the controller and switches can use southbound interfaces to communicate.
The layers include:
• Application layer — SDN applications communicate behaviors and needed resources with the SDN controller.
• Control layer — Manages policies and traffic flow. The centralized controller manages data plane behavior.
• Infrastructure layer — Consists of the physical switches in the network.
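As a concrete illustration of the northbound interface, the sketch below shows an application pushing a simple policy to a controller over REST. The endpoint, path, and payload fields are hypothetical and invented for illustration; real controllers (OpenDaylight, ONOS, and others) each define their own northbound APIs and schemas.

```python
import json
import urllib.error
import urllib.request

CONTROLLER = "http://sdn-controller.example.com:8181"   # hypothetical controller address

policy = {
    "name": "limit-guest-vlan",
    "match": {"vlan": 300},
    "action": {"rate-limit-mbps": 50},
}

req = urllib.request.Request(
    f"{CONTROLLER}/policies",                     # assumed northbound endpoint
    data=json.dumps(policy).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("controller accepted policy:", resp.status)
except urllib.error.URLError as exc:
    # Expected when no controller is reachable; the point is the shape of the call
    print("no controller reachable:", exc)
```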
Benefits of Software Defined Networking
Software defined networking (SDN) provides the following benefits:
• Control — Administrators have more control over traffic flow with the ability to change a network’s switch rules based on need. This flexibility is key for multi-tenant architecture in cloud computing.
• Management — A centralized controller lets network administrators distribute policies through switches without having to configure individual devices.
• Visibility — By monitoring traffic, the centralized controller can identify suspicious traffic and reroute packets.
• Efficiency — Virtualization of services reduces reliance on costly hardware.
Does Avi Network offer Software Defined Networking?
Yes. Built on software-defined principles, Avi extends L2-L3 network automation from SDN solutions to L4-L7 application services. Avi offers native integration with industry-leading SDN and network virtualization controllers such as Cisco APIC, VMware NSX, Nuage VSP, and Juniper Contrail. Avi delivers multi-cloud application services that include enterprise-grade load balancing, actionable application insights, and point-and-click security.
For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.
For more information see the following software defined networking resources:
|
<urn:uuid:5d36efd7-fd6e-4694-8057-5b50ec94c989>
|
CC-MAIN-2022-40
|
https://www-stage.avinetworks.com/glossary/software-defined-networking/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00425.warc.gz
|
en
| 0.838906 | 872 | 3.3125 | 3 |
Fiber optic cables have become incredibly useful to both telecommunication companies and average Americans over the course of the last two decades. They are used to transmit a large amount of information every day, and people are able to stay connected through the use of the internet by utilizing fiber optic cables on a regular basis, even if they aren’t necessarily aware of the fact that they’re using them. Now, some researchers believe fiber optic cables could also play an important role in research, too. Specifically, there are teams of researchers in California who are using fiber optic cables to study earthquakes.
Fiber Optics and Earthquakes
Many years ago, telecommunication companies started installing large amounts of fiber optic cables underground throughout the state of California, even though they didn't use many of them at first. They were simply anticipating the growth of fiber optic networks. While many of those cables were eventually put into service years later, fiber optic technology advanced so quickly that a large number of them are now sitting unused underground. It's giving the Lawrence Berkeley National Lab and other labs in California the opportunity to use them to study earthquakes that take place in the state.
Traditionally, researchers have used seismographs to measure the vibrations that take place deep underground during earthquakes, and they still rely on them in many areas. However, they have also found ways to use “dark” fiber optic cables to study earthquakes. They are able to study the way light moves through these cables during an earthquake to generate data, and this has proven to be extremely helpful to them because there are many cables located in areas where it would be too difficult or expensive to install seismographs. For example, researchers are now able to track earthquakes underwater and in certain urban areas where they wouldn’t be able to install seismographs deep underground.
The thought is that, by studying earthquakes with fiber optic cables, researchers might be able to better equip cities to handle the effects of earthquakes. They could potentially prepare cities by providing them with data on everything from the moisture levels of the soil located underground to the softness of the soil that could be affected by an earthquake. It could prove to be incredibly useful information, and it’s all because of the presence of fiber optics cables.
At Connected Fiber, we know just how useful fiber optic cables can be, and we strive to provide people with the fiber optic services they need to keep their fiber optic networks up and running. Call us at 910-443-0532 today to find out more about the services we offer.
|
<urn:uuid:fc7a5cae-0476-43d1-a7b4-ad4713ce9e31>
|
CC-MAIN-2022-40
|
https://www.connectedfiber.com/could-fiber-optics-help-scientists-better-understand-earthquakes/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00425.warc.gz
|
en
| 0.966751 | 523 | 3.5625 | 4 |
MPLS (Multiprotocol Label Switching) is a data forwarding method using labels instead of IP addresses. It is a simple, secure and fast technology that can encapsulate and transport many kinds of protocols (Ethernet, ATM, Frame-Relay, etc.), hence the name Multiprotocol.
There is also visibility into IP addressing, as MPLS operates between Layer 2 and Layer 3 of the OSI network model (it is often described as a Layer 2.5 technology). MPLS is mainly used by ISPs to provide Virtual Private Networks (VPNs), and we see its deployment in large enterprises as well.
A major feature of MPLS is its traffic engineering capabilities. Resource management, performance and optimization are essential for Service Providers to deliver high-end services to their multiple customers, which span across the MPLS backbone.
MPLS essentially builds several paths called LSPs (Label Switched Paths) based on required resources and network capabilities. These paths are then made available to the Interior Gateway Protocol (OSPF, IS-IS), and traffic is routed through these fast LSPs using labels. There is also support for Quality of Service (QoS), since the MPLS header contains a 3-bit EXP (Experimental) Class of Service field.
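A toy sketch of the label-switching idea follows: each router looks up the incoming label, swaps it for an outgoing label, and forwards the packet to the next hop, so forwarding along the LSP never has to consult the IP header. The router names and label values are invented for illustration.

```python
# Toy label forwarding information base (LFIB) per router:
# incoming label -> (outgoing label, next hop). None means "pop the label".
LFIB = {
    "PE1": {16: (17, "P1")},
    "P1":  {17: (18, "P2")},
    "P2":  {18: (None, "PE2")},   # label popped before the egress router (simplified)
}

def forward_along_lsp(router, label, path=None):
    path = path or [router]
    out_label, next_hop = LFIB[router][label]
    path.append(next_hop)
    if out_label is None:          # packet leaves the MPLS domain
        return path
    return forward_along_lsp(next_hop, out_label, path)

print(forward_along_lsp("PE1", 16))   # ['PE1', 'P1', 'P2', 'PE2']
```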
Cisco has a large role in MPLS technology, and it is implemented in almost all of their high-end routers. Although it provides advantageous capabilities, MPLS still has a learning curve, and there is a chance of making mistakes that will affect large-scale topologies.
|
<urn:uuid:e9b3a5db-0bc6-4a60-a4e4-5a9cc91a712f>
|
CC-MAIN-2022-40
|
https://indeni.com/blog/introduction-to-traffic-engineering-with-mpls/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00425.warc.gz
|
en
| 0.94148 | 314 | 3.140625 | 3 |
@img from Tesla.com
Ever since the field of artificial intelligence (AI) research was first established as an academic discipline in the 1950s, scientists from a diverse range of backgrounds including physics, mathematics, engineering and psychology have looked at the possibilities of artificial brains and intelligent machines capable of learning for themselves. After a promising start, early AI systems came to the attention of the business community, and vast amounts of time and research were poured into the commercial and industrial applications of artificial intelligence. Today, using modern technologies and resources such as big data, machine learning (ML) and vastly more powerful computers, AI and ML systems are being put to use in a plethora of different roles across a large number of industries. In this article, we'll be looking at five industries that artificial intelligence and machine learning are transforming, their roles within each industry, and how they could be used in the future.
What Industries Are They Transforming?
Let’s begin to see how they may affect different industries in different ways. Below are five industries where both artificial intelligence and machine learning are having a transformative effect.
Every year around 40,000 people lose their lives in North America in road traffic collisions, 37,000 in the US alone. Most of these accidents are caused by drunk, fatigued, or cell-phone-distracted driving and poor driver behavior. All human factors. Bringing Artificial Intelligence into play aims to take out all those factors and turn a completely human-dependent car into an automated machine that is able to "think" and "decide" on its own. The driverless vehicle is one big example of what Artificial Intelligence is doing in the Mobility or Transportation sector. We are definitely in a transitional phase where intelligent vehicles equipped with multiple cameras and sensors are being tested to drive on their own under human monitoring, or vice versa.
Machine Learning coupled with Machine Vision technologies is also being used to mine valuable insights into machines and vehicles to predict their wear and tear, enabling transit operators to plan downtime and avoid service interruptions. Predictive analytics fed by big data are also being used to achieve efficiency in route optimization and schedule planning.
Rolling stock and airplanes already use autopilot systems, with a human operator or pilot present only for situational takeover. Eventually, we may see buses, taxis, commercial fleets and personal vehicles going driverless too, as driverless technologies become safer and a comparatively viable alternative to a human driver.
Nearly every stage of the manufacturing process is dealt with by machines that require at least some level of human intervention before they can operate efficiently. Artificial intelligence and machine learning are changing that by connecting all processes of a manufacturing company.
The concept of a Smart Factory is basically that of an automated, super-efficient manufacturing organization where Industrial IoT, AI, ML, Big Data and Cloud Computing technologies are integrated to achieve real-time, secure communication across departments and distribution channels. Data is turned into analysis in cloud-based big data systems, and that analysis gives AI software the intelligence to trigger a command for the machine or subsequent system to take an action. Now, actions such as spotting a fault in a product on the assembly line and getting the product design department to look into a replacement, along with providing them data on exactly what needs to be fixed, are not going to take weeks, but minutes.
Manufacturing can become even more autonomous with machines being controlled and monitored by other intelligent machines, rather than needing human intervention or guidance. AI and ML systems are helping manufacturers produce more efficiently using intelligent production management tools and predictive analytics. Manufacturers today want to run lean inventories with full visibility into the demand data from their distribution channels. The goal is to optimize the whole supply chain and distribution by sharing data and making intelligent decisions quickly about manufacturing, product enhancement and stocking. Efficiencies will translate into better pricing, happier customers, and more robust businesses.
With the World Health Organization (WHO) estimating a global shortage of doctors and nurses of approximately 4.3 million, it is easy to see how AI and ML could help ease the strain on already overworked healthcare professionals.
Supplementing the current healthcare workforce only accounts for a fraction of the driving force behind the explosion of AI/ML-based healthcare solutions we have seen of late. Across different branches of medicine and healthcare, Artificial Intelligence and Machine Learning are making disruptive advancements that will transform this sector in a truly revolutionary way.
ML will make a huge impact on predictive medicine, which will significantly reduce the need for healthcare facilities and extensive treatments. Cognitive healthcare ML systems will be able to predict the likelihood of disease in a person based on data about their family, genetics, profession and lifestyle, using it for an automatic comparative study and analysis against the ocean of big data available to them.
Diagnosis will be done at a speed and accuracy never possible before, thanks to the development of advanced machine learning techniques and technologies that will be able to recognize symptoms and instantly corroborate this data with a multitude of diagnostic data to determine the cause.
AI systems could also be built to recognize the results of x-rays, analyze genetic data to identify possible genetic predispositions, and power personal AI assistants like Siri that interact with patients and can detect harder-to-read conditions like depression by analyzing vocal tone data, for example.
Technologies that are able to perform advanced predictive operations to calculate the probability of an event or scenario have always been popular in the financial industry. Artificial intelligence and machine learning technologies have proven to be no different. Rather than dealing with a traditional financial advisor, many potential investors could now use an intelligent machine to manage their portfolios and invest their money for them. By inputting some details about yourself and your investment style, AI and ML systems could identify the best ways to deal with your money based on the information you provided. In the future, we could see a world where entire economies are managed by one lone AI system.
The use of artificial intelligence and machine learning tools to predict, block and learn from suspicious activity is growing. One of the main hurdles for cyber security operators is the sheer number of alerts they have to deal with on a daily basis. The majority of these alerts will be false positives and can waste the time and resources of cyber security teams going through the process only to find out it was for nothing. AI and ML could help this situation by using their analytical and predictive capabilities to better identify actual threats in real-time.
The Future Of AI and ML
There are almost innumerable potential applications for artificial intelligence and machine learning across a vast array of different industries. Some of the industries that weren't covered above include education, entertainment, defense, marketing, utilities, music and industrial agriculture. We can also be sure that, due to our increasing reliance on computer network and internet technologies, the potential uses for AI and ML will continue to grow as new ideas are developed and shared. It seems safe to assume that, should the necessary machine learning techniques be developed and general artificial intelligence be created, it could fundamentally change not just the industries it is involved in, but our way of life as a whole.
|
<urn:uuid:375dab28-6650-405a-981a-ef777dd473d1>
|
CC-MAIN-2022-40
|
https://www.lanner-america.com/blog/5-industries-artificial-intelligence-machine-learning-transforming/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00425.warc.gz
|
en
| 0.953477 | 1,484 | 2.828125 | 3 |
Data breaches are so common nowadays that you’re lucky not to see one in the breaking news section of any news outlet. How is your business preparing for the inevitable data breach of intellectual properly and sensitive information? You need to start considering preventative measures, like two-factor authentication, to keep your data secure.
The main issue that two-factor authentication can solve is the decreasing amount of security provided by passwords. Technology has become so advanced that even complex passwords that maximize security can be cracked under the right conditions. Users tend to use easy-to-remember passwords, which come with their own set of complications, so we’ll talk about ways that your organization can use two-factor authentication to solve common password troubles.
It's a best practice to change your password every so often, and users might scratch their heads at how to remember some of these more complex passwords. Passwords should be at least 12 characters long and must use upper- and lower-case letters, numbers, and special characters or symbols. All of this must be done in a seemingly random string, but users might try to arrange these characters in a way that makes them easier to remember. In fact, they may just reuse a password from another account, or one that includes information from a social media account, like the name of their dog or first-born child.
Generally speaking, it’s best to keep information that could easily be found in public records out of your password fields. This includes the names of your children, parents, or other important individuals, as well as any information that you store on your social media accounts, like your favorite TV show or movie. Hackers have more tools than ever before to find out all sorts of information about you, so you have to be very careful about how you use this information in passwords. Plus, there’s always the chance that you’ll use this information for security questions, which doesn’t do you any favors when hackers can just find the information at their own leisure.
Although password managers do make passwords easier to remember, the primary problem with them remains the same. If a hacker can find out what that password is, they can access all of your accounts easily enough. Two-factor authentication makes things much more difficult for a hacker, requiring that they have a secondary credential to access any account associated with it. This acts as a secondary security level, and it’s one that requires the use of a mobile device, email account, or other access method. It’s a great way to take full advantage of next-level security, and since it’s easy to set up, you can do it quickly and efficiently.
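To illustrate what that secondary credential typically is, here is a minimal sketch of a time-based one-time password (TOTP), the kind of rotating six-digit code an authenticator app generates. It uses only Python's standard library, and the shared secret shown is a common demo value, not one you should use.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC-SHA1 over the current 30-second time step, then dynamic truncation
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"          # demo secret shared between server and authenticator app
expected = totp(secret)
user_entered = expected              # in practice, typed in by the user from their phone
print("second factor accepted:", hmac.compare_digest(expected, user_entered))
```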
Do you want to take full advantage of two-factor authentication? For more information about personal and network security, call us today at (877) 638-5464.
|
<urn:uuid:e64f9b25-7a53-44b3-ad9f-245f74b21703>
|
CC-MAIN-2022-40
|
https://www.excaltech.com/boosting-security-takes-layer-authentication/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00425.warc.gz
|
en
| 0.945195 | 589 | 2.859375 | 3 |
Once upon a time, phishing and spearphishing attacks would only surge around specific times of year, such as major holidays like Christmas or Chinese New Year, consumer-ready holidays like Valentine’s Day or Lantern Festival in China, or around consumer shopping events like Black Friday or Cyber Monday in the United States, Boxing Day (December 26th) in the UK and Commonwealth of Nations, or Singles Day (November 11th) in Asia.
Attackers then figured out that they could leverage the FUD—fear, uncertainty, and doubt—driven by natural and manmade disasters, wars, illnesses, elections, or any event that drives today’s news cycle to sow their malicious seeds.
In the UK, the Information Commissioner’s Office (ICO) indicated that phishing was the top cause of cyber-related breaches from April 2019 to March 2020. The Office of the Australian Information Commissioner (OAIC) showed that phishing accounted for 36% of all cases reported to them, top on their list.
To that end, phishing and spearphishing attacks have drastically increased throughout 2020, driven by the threat of a worldwide pandemic, nations under quarantine or lockdown, workers who must work from home, and even contentious elections in the U.S. and other nations. Even the announcement of vaccines to address COVID-19 being ready is being leveraged to entice even wary folks to open emails from unknown sources—or even known sources who may have had their accounts breached and hijacked—to then spread malware and other malicious attack vectors, and steal user and corporate information or enable illicit access to sensitive networks, clouds, applications, and data.
One of the reasons most cited for the recent explosion in phishing attacks has been the work from home orders precipitated by the COVID-19 pandemic. Many employees, contractors, and other staff members have been forced to work from home or remotely and this quickly attracted the unwanted attention of attackers. They understood there is a strong likelihood that people working remotely would be under increased pressure, let their guard down and begin clicking on links in just about any email, even those that might normally raise suspicion. They also know that those working from home might be using BYOD products that won't have the tools typically used by organizations to protect them from attacks like phishing. Attackers and hackers also believe that home-based workers might not have enough bandwidth to keep security software running or updated, and may turn off or miss updates to their security software. Many times, they are right.
As phishing attacks have rapidly increased, the number of phishing sites using encryption has kept pace. According to the F5 Labs recent Phishing and Fraud Report 2020, nearly 72% of phishing links send victims to HTTPS encrypted websites. That means that the vast majority of malicious phishing sites now appear to be valid, credible websites that can easily fool even the savviest employee. This data has been corroborated by research from other reports, as well, including a report by Venafi that uncovered suspicious retail look-alike domains that use valid certificates to make phishing websites appear valid, leading to stolen sensitive account and payment data.
And it’s not only malignant websites that leverage TLS encryption to appear convincing and legitimate. It’s also destinations to which malware, delivered by phishing attacks, sends data that it pilfers from victims and their organizations; these destinations are called drop zones. According to the F5 Labs’ Phishing and Fraud Report 2020, all—100%—of incidents that involved drop zones investigated by the F5 Security Operations Center (SOC) during 2020 used TLS encryption.
There are a number of solutions available today to address phishing from a variety of different angles. There are solutions to train staff how to recognize and handle phishing attacks to reduce attack uptake and efficacy. These solutions address email security, protecting against spam, malware and malicious attachments, BEC attacks, and more. There are services to manage an organization’s email. There are even offerings that proxy an organization’s web traffic, replicate or mimic it, and deliver code to local devices to be rendered or that mimic the web page but without any of the underlying suspicious and possibly malicious code.
While those are all great solutions, there is still the problem of addressing encrypted traffic. If traffic is encrypted, it needs to be decrypted before it can be checked for malware and other dangerous code. That applies equally to encrypted traffic coming into the organization from users clicking on bad, malware-prone links in phishing emails, downloading attachments laden with malicious code, and accessing malevolent websites that appear real and benign because they have the “right” encryption certificate, as well as encrypted traffic leaving with stolen data for an encrypted drop zone or reaching out to a command-and-control (C2) server for more instructions or triggers to unleash even more attacks.
Plus, this is not even taking into consideration that government privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union (EU), the California Consumer Privacy Act (CCPA), and many other regulations being debated in nations around the world, typically include language that precludes the decryption of personal user information, such as user financial or healthcare data. Any decryption of encrypted traffic would need to address these privacy mandates, or it could lead to litigation and substantial fines for any organization that runs afoul of these regulations.
All that said, there is even more to today’s phishing attacks that use encryption of which organizations must be aware. The F5 Labs’ Phishing and Fraud Report 2020 also found that over 55% of drop zones use a non-standard SSL / TLS port, while over 98% of phishing websites used standard ports, such as port 80 for cleartext HTTP traffic and port 443 for encrypted traffic. This means that, particularly for outbound encrypted traffic, relying on scanning standard ports is not enough. Solutions deployed need to scan and decrypt outgoing traffic on non-standard ports. This is imperative in order to halt the obfuscation and exfiltration of critical data.
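A toy triage sketch of that idea follows: flag outbound TLS flows that do not use the standard HTTPS port so they can be inspected rather than ignored. The flow records and addresses are invented for illustration, and a real deployment would work from live traffic telemetry, not a hard-coded list.

```python
# Toy outbound-flow triage: flag TLS flows on non-standard ports for inspection.
STANDARD_TLS_PORTS = {443}

flows = [
    {"dst": "203.0.113.10", "port": 443,  "tls": True},
    {"dst": "198.51.100.7", "port": 8443, "tls": True},    # TLS on a non-standard port
    {"dst": "192.0.2.25",   "port": 80,   "tls": False},
]

suspicious = [f for f in flows if f["tls"] and f["port"] not in STANDARD_TLS_PORTS]
for flow in suspicious:
    print(f"review outbound TLS flow to {flow['dst']}:{flow['port']}")
```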
Today, in order to halt encrypted threats borne by phishing attacks, organizations need to inspect all incoming SSL / TLS traffic to ensure that any malicious or possible phishing-initiated web traffic is stopped and eliminated. But that inspection must include the ability to intelligently bypass decrypting encrypted traffic that contains sensitive user information, such as financial or health-related information. In addition, today’s organizations need to either outright block or at least monitor non-standard outbound web ports to stop malware from encrypted communications with C2 and drop zone servers, to stop data exfiltration or attack triggers. There are also other key things to consider, as well, such as the type of encryption supported by devices in the security stack. For instance, if an attacker knows that a certain security device is unable to support forward secrecy (also known as perfect forward secrecy, or PFS), they may leverage it so that the encrypted traffic is simply passed through by the security device. This action is especially costly and dangerous in environments where security devices in the stack are daisy chained together. If the one device that doesn’t support PFS bypasses the traffic, it will be bypassed by the rest of the chain.
Without these protections in place, in addition to security awareness training and email security or anti-phishing solutions implemented, organizations are leaving themselves open to attacks and breaches, and the theft of critical corporate and user data.
For information on how F5 SSL Orchestrator can eliminate the security blind spot delivered with encrypted traffic, and how it can cut through the obfuscation of critical data being exfiltrated and stolen, please click here.
|
<urn:uuid:52a6debe-445d-4558-b09f-88f7b62afa6f>
|
CC-MAIN-2022-40
|
https://www.f5.com/de_de/company/blog/stop-phishing-and-cut-encrypted-exfiltration-and-communication
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00425.warc.gz
|
en
| 0.952019 | 1,593 | 2.609375 | 3 |
“Frankenstein” or “Alice”? – Identify early-on with Data Sciences and Augmented Analytics!
by Sandeep Arora, on Mar 9, 2020 11:47:39 AM
Estimated reading time: 3 mins
Telling SHORT stories with MASSIVE data is the new achievement in the field of Augmented Analytics. It is that field of Data Sciences that connects data points from disparate sources to create intelligence that enables you to see the bigger picture.
Data footprint and the bigger picture
Each transaction in the business world creates data. Each business venture creates even bigger data. When you join the dots between siloed enterprise architecture, you are able to draw insights that are usually visible only at a vantage point. This bigger picture allows you to decipher early-on whether the business venture is the proverbial “Frankenstein” or “Alice” – something that is headed for “destruction” or towards the “wonderland”; whether the venture is going to “tank” or is headed for a “runaway success”.
Augmented Analytics – The new paradigm in Data Sciences
When you connect the dots between disparate data sources from your various enterprise projects, you get that bird’s eye view of the overall business functioning vis-à-vis the market scenario. Augmented Analytics helps you in this process.
- The first wave of analytics mandated that you depend on certain data specialists to understand data as well as create dashboards and graphs for senior management consumption.
- The second wave broke down the data barriers helping you generate insights with self-service analytics. It gave you certain interactive tools and the independence to create data visualizations for your benefit and collaborate with extended teams.
- The third wave is that of Augmented Analytics or Artificial Intelligence (AI)/Machine Learning (ML) powered Analytics. It has taken the Data Sciences paradigm to a whole new level by offering you a 360-degree analysis of massive data points while taking the burden off the shoulders of busy business users.
Telling data stories with Augmented Analytics
Today, insightful stories or story-based insights are developed by human faculty using data and analytics available at eye-level. AI/ML enables Augmented Analytics to delve deeper into data layers to understand whether the business venture is a "Frankenstein" or "Alice". AI/ML offers a multi-dimensional 360-degree view of data. It facilitates automated data storytelling, which helps create the narrative for complex business scenarios. It changes the way analytics is perceived, used, and internalized.
Self-service analytics, or the second wave of analytics, was self-limiting. Augmented Analytics extends the scope and scale of analytics thus enabling business ventures to move towards “a wonderland” or so to speak the Blue Ocean. It delivers crisp story formats along with sound reasoning as compared to erstwhile dashboards enabling you to pick up the next course of action quickly, while leaving you with an AHA moment!
Data stories in the field of Research and Analytics are produced by human faculty as of now. They are subject to human cognitive bias. Augmented Analytics allows automated storytelling, where the Natural Language Processing (NLP) component of AI reads the insights created from aggregated or collaborative intelligence and Natural Language Generation (NLG) writes the narrations in logical text sequences. It creates those emotionally resonant stories by extracting data points from what you have, thus enabling you to make faster and better decisions. In short, it enables you to do more with the same amount of funds and the same resources.
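As a deliberately tiny sketch of what automated, template-based storytelling can look like, the function below turns two metric values into a one-sentence narrative. Real NLG systems choose wording, emphasis, and structure far more flexibly, and the metric names and figures here are invented.

```python
# Toy "data storytelling": turn a pair of metric values into a short narrative sentence.
def tell_story(metric: str, current: float, previous: float, period: str = "quarter") -> str:
    change = (current - previous) / previous * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{metric} {direction} {abs(change):.1f}% this {period}, "
            f"from {previous:,.0f} to {current:,.0f}.")

print(tell_story("Online revenue", 1_240_000, 1_050_000))
# Online revenue rose 18.1% this quarter, from 1,050,000 to 1,240,000.
```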
Augmented Analytics helps you move out from a framework based analytics to a 360-degree view of data. It enables you to identify the health of your business venture and quickly Internalize-Democratize-Collaborate using the research findings.
|
<urn:uuid:abae4039-7f11-4f15-8dc5-5a07a11084cc>
|
CC-MAIN-2022-40
|
https://blog.datamatics.com/frankenstein-or-alice-identify-early-on-with-data-sciences-and-augmented-analytics
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00425.warc.gz
|
en
| 0.898724 | 799 | 2.546875 | 3 |
The Morpho butterfly’s highly evolved wings are so unique that scientists at Simon Fraser University (SFU) have teamed up with NanoTech Security to reproduce their iridescent blue coloring for a new anti-counterfeiting technology.
A clever pairing of nanotechnology and entomology — the study of insects — used nanoscale microscopic holes that interact with light to reproduce the butterfly’s shimmering signature wherever a counterfeit-proof watermark is desired: in bank notes, legal documents, merchandise, concert tickets, stock certificates, visas, passports, and pharmaceutical products, to name a few of the possible uses.
“Nobody has ever done this,” said NanoTech Security CEO Doug Blakeway, who also serves as SFU Venture Connection entrepreneur in residence. “We have succeeded, while everybody else is still trying to duplicate or imitate a butterfly’s wing.”
No Pigment Required
The new U.S. $100 bill includes several state-of-the art security features that haven’t yet been unveiled, but probably mirror similar technology, such as hologram strips, security threads woven into paper, raised type, color-shifting and UV-sensitive inks.
The printing arts and the science of inks, however — old standbys in the quest for counterfeit-proof documents — aren’t part of the NanoTech concept.
“The Morpho’s wing absorbs light and gives off the color,” Blakeway told TechNewsWorld, “but there’s no color pigment — there’s nothing like a dye or anything else. It’s a hole that traps light and releases color.”
The new product has attracted the attention of treasuries internationally, and for a few simple reasons, Blakeway explained. “You can’t copy or scan it, you can’t inkjet it on paper. And anywhere a hologram is used, our technology can replace it. It’s more secure. We can put it onto metal, plastic, or paper, and you can’t lift it off.”
The appropriately named Morpho butterfly — which morphs, like all of its kin, from caterpillar to winged beauty — lives an achingly short life, roughly 137 days, mostly in the tropical climes of Mexico, Central America and South America.
The creature’s pigment-free coloring — metallic shades of blue and green — reflects iridescence, an optical property common to soap bubbles and even other insects in which the color changes, appearing to shimmer, with the angle of viewing.
Microscopic scales called “iridescent lamellae” cover the top of the Morpho’s wings, leaving the underside a dull brown. The lamellae reflect about 70 percent of light, create the coloring, and are reportedly visible to the human eye from up to one kilometer (0.6 miles) away.
Confined to males in most Morpho species, the wing color probably encourages female butterflies while discouraging male competitors, entomologists believe. Territorial male Morphos are known to chase away rivals.
Though the NanoTech Security technology doesn’t use the wings directly, the butterflies are bred commercially for jewelry, wood inlay, and even ceremonial masks.
The Right Light
The guiding concept behind the NanoTech Security project sounds simple enough — drill small holes, place in right light.
But the project took some years to evolve at the hands of a team that included SFU engineering professor Bozena Kaminska and NanoTech Security CTO Clint Landrock.
“Bank of Canada researchers expressed interest in the new technology,” Kaminska told TechNewsWorld. “Their interest inspired me and Clint, then my graduate student, to develop the nanofeatures.”
The next step involved patents and introductions.
“After Dr. Kaminska and I patented our technology, we were introduced to NanoTech’s CEO, Doug Blakeway, through SFU’s Venture Connection office,” Landrock told TechNewsWorld. “After giving him a presentation on our technology, he thought it held a lot of potential. The three of us formed the company I|D|ME and licensed the nano-optics for use in security documents to NanoTech Security Corp.”
Now dubbed “Nano-Optic Technology for Enhanced Security,” the anti-counterfeiting measures should hit the market sometime in 2012, Landrock explained. That should be good news to nanotechnologists everywhere, who’ve engaged with an idea — the use of ultra-small things to make giant-sized impacts — filled with promise but fraught with slow application.
“I love nanotechnology, but I really have not seen a commercialization of it that can make money in the near term,” said Blakeway. “When this was initially presented to me by Bozena and Clint, I immediately saw their vision.”
The vision made sense in simple terms: Insects and other animals use colorful markings to uniquely identify themselves. Documents could benefit from the same concept.
“I kept thinking of applications for the idea, and how it could be used. Bozena and Clint were only after one application — creating anti-counterfeiting features for banknotes,” Blakeway explained. “I felt this could be the first commercial application of nanotechnology in the world. The potential astounds me.”
|
<urn:uuid:7440eb93-fdcc-453e-974d-4b5de4ce1433>
|
CC-MAIN-2022-40
|
https://www.ecommercetimes.com/story/butterfly-wings-offer-guiding-light-for-nanotech-innovation-71681.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00625.warc.gz
|
en
| 0.917868 | 1,162 | 2.75 | 3 |
In the second half of last year, more than six million computers were taken over by infectious programs known as “botnets.” The number represents an increase of 29 percent when compared with the first half of the year, according to security firm Symantec’s latest Internet Security Threat Report.
Unknown to the computer owners, these infected computers are used at will by crime groups to perform a variety of illegal activities. They range from stealing users’ identities and confidential information like bank account numbers and passwords to sending out massive amounts of spam e-mail. They also can conduct DOS (denial of service) attacks, phishing attacks and other illegal activities.
So many home- and small business-based computers lack adequate antivirus protection and up-to-datevulnerabilities patching that criminals have little trouble compromising computers.
“We are seeing bot infections continuing. Bots are very dynamic in nature. They can constantly update themselves,” Ed Kim, director of product management at Symantec, told TechNewsWorld.
A bot is a computer whose operation has been secretly hijacked by malware. The infected computer, which is often referred to as a “zombie,” has a Trojan program which directs the computer to connect to a remote location to download additional instructions.
A group of hijacked zombie computers forms a botnet. Much like a real computer network tethered together under the control of a systems manager, botnets are under the control of a bot herder or botmaster, explained Kim.
“The zombie operator can see anything on the infected computer, including documents, passwords and social security numbers,” explained Ron O’Brien, senior security analyst for security firm Sophos.
The organization of criminals then rents out the botnet to a person conducting a spam campaign. The bot herders can also sell stolen confidential information to other crime groups.
Hijacked computers start with uninformed or unconcerned consumers. They buy a new computer with one or more trial versions of antivirus protection. When the initial subscription lapses, the consumer oftenfails to renew.
Most people choose not to continue the antivirus protection because they don’t want to give credit card information over the Internet or don’t think it is necessary, noted O’Brien. Others fail to renew because they either do not care or think that the computer will remain protected against virus infections without updating signatures.
The result is the computer quickly becomes infected with viruses distributed by e-mail and from visiting an infected Web site. It is practically impossible to avoid virus infections unless the computer user never receives e-mail and never surfs the Web.
“250,000 viruses exist today with an excess of one million vulnerable computers,” O’Brien said.
Two factors continue to give criminals the upper hand in expanding their botnets. One is the huge number of computers that remain unprotected and unpatched for vulnerabilities. The other is the rapidly increasing use of the Internet.
For instance, in January 2006 one in every 330 e-mails had a virus attached to it. However, consumers have learned not to click on attachments from unknown parties. In January 2007 only one in every 40 e-mails contained a virus.
However, the problem isn’t going away, according to O’Brien. Instead of relying on e-mail, the bad guys have changed their delivery method to the Internet.
This new reliance by malware writers on using infected Web sites is happening without the knowledge or intervention of the Web site owners. There are 8,000 Web sites a day hosting new viruses, mostly unknowingly, O’Brien noted. To make matters even worse, on average 45 new Web sites per day get infected with code that infects visitors landing on a page, added Paul Henry, vice president of technology evangelism at security firm Secure Computing, in describing the growth of drive-by infections.
Other types of Internet-based infections require the Web visitor to actually click on an image. Some 14,000 of these sites are added daily, noted Henry.
“Server owners usually have no clue,” he said.
If server operators are using adequate protection, their servers wouldn’t be infected. However, most of them are still using packet filtering methods instead of true layer 7 protection, said Henry.
“The vast majority of enterprise clients only have protection for their server but nothing to protect computers on their network. They feel that having a packet filtering firewall is adequate,” Henry told TechNewsWorld.
Secure Computing recently discovered a new malware tactic that Henry thinks will soon be adding to botnet troubles. A so-called zlob is complex, tricky and deceptive. The zlob poses as a fake video file posted on YouTube. It contains a second bit of code that causes the movie to download onto the PC. It then installs two Trojans that bombard visitors with ads.
Currently, the only payload is the ad blitz. However, Henry sees a high likelihood of more dangerous malware attached to this exploit soon. The zlob can very easily be an e-mail vehicle capable of hundreds of variants of zlobs.
This newly-discovered form of Web-based malware is currently masquerading as a YouTube video object and does not require users to download an .EXE file in order to run. No one expects to find malware hidden in YouTube files. Yet the medium’s popularity is highly alluring as a mass distribution vehicle for malicious code, he warned.
“What’s alarming is that from a security perspective many organizations will be blindsided and potentiallyseriously exposed,” warned Henry. “Most of the leading firewalls are configured only to protect internalWeb servers, and not capable of blocking returned Web code from external servers, which is the trend andcertainly the direction this threat takes.”
ISPs Hold Solution
While consumers and server operators are a big part of the problem, Internet service providers (ISPs) could be effective in blocking the spread of bot infections but don’t, complained Henry.
Up-to-date antivirus protection maintained on individual computers prevents much of the malware from attacking consumer and enterprise computers. But more protection is needed for the zero-day infections. These attacks come from new viruses that enter a computer before new signature detection is distributed by antivirus vendors.
“ISPs need to do this, but there is no financial incentive for them to do so. There are no consumer-level products to block zero day attacks. This is one of the main reasons that botnets are out of control,” said Henry.
Symantec is one of the first security vendors to develop a new product to protect consumers from botnet infections. Symantec released late last month a beta version of Norton AntiBot.
“Vendors have a major opportunity now to address this botnet problem,” said Kim.
Norton AntiBot beta uses behavioral technology, not antivirus signatures. It looks at what a file is doing and is always on, actively monitoring. It finds and remediates the threat, he said.
Norton AntiBot is a stand-alone product that compliments all third-party antivirus products.
As of July 5, the Symantec Web site also displays a page for a commercial version of Norton AntiBot selling for $29.99 for up to three computers per household.
|
<urn:uuid:3cc851d1-4cb8-4e70-bf7e-01b12069e94c>
|
CC-MAIN-2022-40
|
https://www.ecommercetimes.com/story/zombie-nation-58223.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00625.warc.gz
|
en
| 0.927246 | 1,578 | 2.875 | 3 |
Subnet Mask Definition
Every device has an IP address with two pieces: the client or host address and the server or network address. IP addresses are either configured by a DHCP server or manually configured (static IP addresses). The subnet mask splits the IP address into the host and network addresses, thereby defining which part of the IP address belongs to the device and which part belongs to the network.
The device called a gateway or default gateway connects local devices to other networks. This means that when a local device wants to send information to a device at an IP address on another network, it first sends its packets to the gateway, which then forwards the data on to its destination outside of the local network.
What is Subnet Mask?
A subnet mask is a 32-bit number created by setting host bits to all 0s and setting network bits to all 1s. In this way, the subnet mask separates the IP address into the network and host addresses.
The “255” address is always assigned to a broadcast address, and the “0” address is always assigned to a network address. Neither can be assigned to hosts, as they are reserved for these special purposes.
The IP address, subnet mask and gateway or router comprise an underlying structure—the Internet Protocol—that most networks use to facilitate inter-device communication.
When organizations need additional subnetworking, subnetting divides the host element of the IP address further into a subnet. The goal of subnet masks is simply to enable the subnetting process. The phrase “mask” is applied because the subnet mask essentially uses its own 32-bit number to mask the IP address.
IP Address and Subnet Mask
A 32-bit IP address uniquely identifies a single device on an IP network. The 32 binary bits are divided into the host and network sections by the subnet mask but they are also broken into four 8-bit octets.
Because binary is challenging to read, we convert each octet so it is expressed in dotted decimal. This results in the characteristic dotted decimal format for IP addresses—for example, 172.16.254.1. The range of values in decimal is 0 to 255 because that represents 00000000 to 11111111 in binary.
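A quick sketch of that conversion, using plain Python string formatting:

```python
ip = "172.16.254.1"
octets = [int(part) for part in ip.split(".")]

# Each octet as 8 binary bits, then rejoined in the familiar dotted form
print(".".join(f"{o:08b}" for o in octets))   # 10101100.00010000.11111110.00000001
print(".".join(str(o) for o in octets))       # back to dotted decimal: 172.16.254.1
```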
IP Address Classes and Subnet Masks
Since the internet must accommodate networks of all sizes, an addressing scheme for a range of networks exists based on how the octets in an IP address are broken down. You can determine which of the five different classes of networks, A to E, an address falls within from its high-order (left-most) bits.
(Class D networks are reserved for multicasting, and Class E networks are not used on the internet because they are reserved for research by the Internet Engineering Task Force (IETF).)
A Class A subnet mask reflects the network portion in the first octet and leaves octets 2, 3, and 4 for the network manager to divide into hosts and subnets as needed. Class A is for networks with more than 65,536 hosts.
A Class B subnet mask claims the first two octets for the network, leaving the remaining part of the address, the 16 bits of octets 3 and 4, for the subnet and host part. Class B is for networks with 256 to 65,534 hosts.
In a Class C subnet mask, the network portion is the first three octets with the hosts and subnets in just the remaining 8 bits of octet 4. Class C is for smaller networks with fewer than 254 hosts.
Class A, B, and C networks have natural masks, or default subnet masks:
- Class A: 255.0.0.0
- Class B: 255.255.0.0
- Class C: 255.255.255.0
You can determine the number and type of IP addresses any given local network requires based on its default subnet mask.
An example of a Class A IP address and subnet mask would be the Class A default subnet mask of 255.0.0.0 and an IP address of 10.20.12.2.
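A small sketch of the classful rules described above, mapping an address's first octet to its class and default mask:

```python
def address_class(ip: str):
    # Classful ranges are determined by the first octet (high-order bits)
    first = int(ip.split(".")[0])
    if first < 128:
        return "A", "255.0.0.0"
    if first < 192:
        return "B", "255.255.0.0"
    if first < 224:
        return "C", "255.255.255.0"
    if first < 240:
        return "D (multicast)", None
    return "E (reserved)", None

print(address_class("10.20.12.2"))     # ('A', '255.0.0.0')
print(address_class("172.16.254.1"))   # ('B', '255.255.0.0')
```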
How Does Subnetting Work?
Subnetting is the technique for logically partitioning a single physical network into multiple smaller sub-networks or subnets.
Subnetting enables an organization to conceal network complexity and reduce network traffic by adding subnets without a new network number. When a single network number must be used across many segments of a local area network (LAN), subnetting is essential.
The benefits of subnetting include:
- Reducing broadcast volume and thus network traffic
- Enabling work from home
- Allowing organizations to surpass LAN constraints such as maximum number of hosts
The standard modern network prefix notation, used for both IPv6 and IPv4, is Classless Inter-Domain Routing (CIDR) notation. An address represented in CIDR notation is followed by a forward slash (/) separator and the number of bits in its network prefix. This is the sole standards-based format in IPv6 to denote routing or network prefixes.
Since the advent of CIDR, assigning an IP address to a network interface requires two parameters: the address and a subnet mask. Subnetting increases routing complexity, because each locally connected subnet must be represented by a separate entry in the tables of every connected router.
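Python's standard-library ipaddress module understands CIDR notation directly. The short sketch below (the addresses are made-up examples) shows a prefix length being turned into a mask and a /24 being carved into smaller subnets.

```python
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
print(net.netmask)        # 255.255.255.0 (the /24 prefix expressed as a mask)
print(net.num_addresses)  # 256 addresses: 254 usable hosts plus network and broadcast

# Subnetting: borrow two host bits to split the /24 into four /26 subnets.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet)  # 192.168.10.0/26, 192.168.10.64/26, 192.168.10.128/26, 192.168.10.192/26
```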
What Is a Subnet Mask Calculator?
Some know how to calculate subnet masks by hand, but most use subnet mask calculators. There are several types of network subnet calculators. Some cover a wider range of functions and have greater scope, while others have specific utilities. These tools may provide information such as IP range, IP address, subnet mask, and network address.
Here are some of the most common varieties of IP subnet mask calculator:
- An IPv6 IP Subnet Calculator maps hierarchical subnets.
- An IPv4/IPv6 Calculator/Converter is an IP mask calculator that supports IPv6 alternative and condensed formats. This network subnet calculator may also allow you to convert IP numbers from IPv4 to IPv6.
- An IPv4 CIDR Calculator is a subnet mask adjustment and Hex conversion tool.
- An IPv4 Wildcard Calculator reveals which portions of an IP address are available for examination by calculating the IP address wildcard mask.
- Use a HEX Subnet Calculator to calculate the first and last subnet addresses, including the hexadecimal notations of multicast addresses.
- A simple IP Subnet Mask Calculator determines the smallest available corresponding subnet and subnet mask (a minimal sketch of this follows the list below).
- A Subnet Range/Address Range Calculator provides start and end addresses.
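As a toy version of the simple IP subnet mask calculator mentioned above (a sketch only; real calculators handle many more cases), the following Python snippet finds the smallest prefix that can hold a required number of hosts and reports the matching subnet mask.

```python
import ipaddress
import math

def smallest_subnet(hosts_needed: int):
    # Two extra addresses are reserved for the network and broadcast addresses.
    host_bits = max(2, math.ceil(math.log2(hosts_needed + 2)))
    prefix = 32 - host_bits
    mask = ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask
    return prefix, mask

for hosts in (2, 25, 254, 500):
    prefix, mask = smallest_subnet(hosts)
    print(f"{hosts:>4} hosts -> /{prefix}  mask {mask}")
# 2 hosts -> /30 255.255.255.252, 25 -> /27 255.255.255.224,
# 254 -> /24 255.255.255.0, 500 -> /23 255.255.254.0
```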
What Does IP Mask Mean?
Typically, although the phrase “subnet mask” is preferred, you might use “IP/Mask” as a shorthand to define both the IP address and subnet mask at once. In this situation, the IP address is followed by the number of bits in the mask. For example:
10.0.1.1/24
184.108.40.206/22
These are equivalent to:
IP address: 10.0.1.1 with subnet mask of 255.255.255.0
IP address: 184.108.40.206 with a subnet mask example of 255.255.252.0
However, you do not mask the IP address, you mask the subnet.
|
<urn:uuid:6db1d189-d237-4084-b5d6-dc1d9fa69eca>
|
CC-MAIN-2022-40
|
https://avinetworks.com/glossary/subnet-mask/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00625.warc.gz
|
en
| 0.910587 | 1,569 | 4.28125 | 4 |
Christmas is a time when all of us are online more than usual, whether we’re contacting relatives to get a headcount for Christmas dinner or trying to find new ways of making brussels sprouts palatable.
The biggest time and stress-saver at Christmas has probably been the boom in online shopping. Being able to order turkey, trimmings, and presents from the comfort of your home — what a time to be alive!
According to the Deloitte holiday retail survey, 65% of shoppers in 2020 chose to do their Christmas shopping online rather than in-store to avoid crowds. And according to a similar survey by eMarketer, millennials were likely to do more than half of their holiday shopping online.
Not having to face shopping centre crowds or fight for the last premium Christmas pud is one of the greatest gifts we can give ourselves. However, typing out our credit card details multiple times a day and the sudden influx of confirmation emails can leave us vulnerable to some pretty serious online risks.
We always see an increase of phishing and email scams around the holidays. But what is phishing, and how can we protect ourselves from cyber crime at the busiest time of year?
What is Phishing?
Phishing is one of the tactics cyber criminals use in order to steal sensitive data from people online. Phishing can take the form of emails, text messages, advertisements, or web pages, which encourage users to take actions that will leave them vulnerable to cyber attacks.
Cyber criminals may try to get you to share your identity, bank details, or account passwords. In some cases, they may pose as an organisation to do this, or even trick you into sending them money.
Unfortunately, phishing scams are becoming more sophisticated as internet users become more aware of online threats. We all need to take steps to protect our data, and one way to do this is to keep an eye on the latest phishing emails doing the rounds.
As experts in cyber security, we’ve collected some common examples of phishing scams that tend to rear their ugly heads around the holidays. We hope that you’ll find them helpful, and that if you or a loved one is targeted by a phishing scam, you’ll be able to recognise it and report the sender right away.
Amazon and PayPal Email Scams
One of the most common phishing emails you’ll see at this time of year, are emails that are designed to look as though they are from reputable companies. With so many of us making purchases from Amazon and completing transactions via PayPal, these are common choices for scammers.
As well as the trustworthy reputation these companies have, we are probably used to seeing their names in our inboxes, making them the perfect cover for a phishing email. Scammers have also become adept at copying the brand marketing, including logos, colour palettes and email templates so that their phishing emails are almost indistinguishable from the real thing.
Imagine a hacker gaining access to your Amazon account, where you may have your card details stored and 1-click ordering activated. In a matter of hours, they could order hundreds of pounds worth of merchandise, as well as changing your password and locking you out.
You should also be suspicious if you receive purchase confirmations for items you haven’t ordered. Sometimes scammers will direct you to fake sites to ‘cancel’ the order, stealing your information in the process. If you receive an email for an item you haven’t ordered, go to your account on the site in question and see whether it appears in your recent orders or purchases. If it doesn’t, simply report the email address for phishing and delete the message.
How to spot a fake branded email:
- Check the sender’s email address. Is it from the company domain that it’s claiming to be, or from a suspect variation of it? For instance, an email ending in @amazon.co.uk is more likely to be legitimate than @amazon-customer-service-uk.com.
- If the email comes from an Outlook, Hotmail or Gmail address (e.g., [email protected]), it is very likely to be a phishing email as it hasn’t come from an address within the company’s domain.
- Although scammers are getting better at imitating a company’s brand, it’s also worth looking out for things like poor grammar, typos and low-res images. Companies like Amazon have entire departments dedicated to making sure their communications are professional and look good. Therefore, it’s very unlikely they’d send an email that was misspelt or didn’t fit their brand marketing.
- Are they asking you for something, such as your password, card details, or any other identifying information that could be used to hack into your other accounts? If so, don’t send the information they’re asking for. There is no need for a company to request information that it already has on file.
Postage and Custom Fee Scams
As well as people posing as big retailers, you may also receive emails purporting to be from Royal Mail, or well-known delivery companies like Hermes, DHL, or Parcelforce. With hundreds of thousands of packages being shipped throughout the country, this is another way that scammers can exploit the holiday season.
You may receive an email claiming that there are postage fees that must be paid, or a customs charge for an item shipped from abroad. Although these are sometimes legitimate, it’s best to look into the claims more closely before making a payment.
How to spot a postage or customs fee scam:
- Make sure that, before paying any fees, you’re sure that they’re for a parcel you’re expecting. If you receive an invoice, there should be a tracking number, a list of items, or at least some information about the company you ordered from. We’ll all be doing a lot of online shopping this year, but it’s worth taking the time to match up any invoices to a delivery that’s due before making a payment.
- You may also get a notification about a package you’re receiving from abroad, requesting that you pay a customs fee to have it delivered. Again, even if you do sometimes receive gifts from friends or family members abroad, be wary. Make sure that it’s a parcel you are expecting and that the country of origin matches up.
- Be careful about clicking third-party links in emails. Apply the same logic outlined above in the ‘how to spot a fake branded email’ section. If the email address looks suspicious, the wording is strange or there are several typos, don’t click any links or download any attachments.
Bank Account and Debit Card Scams
Unfortunately, even the banks aren’t safe from being impersonated by opportunistic cyber criminals. If you receive a text or email from a bank that you’ve never used talking about unusual activity on your account, the best thing to do is report the email and move on. Phishing is a numbers game, and scammers will target thousands of people at once with the same message, hoping to get a bite.
However, if you receive a message from the bank that you use, it still may not be legitimate — a scammer may just have struck lucky. Check the sending address, the content of the email and if you’re still unsure, contact your bank’s customer service line before doing anything further.
It is extremely unlikely that your bank will ask you to send identifying information in an email or text. Remember: if it seems fishy, it probably is.
Password Reset Scams
Another method cyber criminals may use to gain access to your account is to send an email or text message telling you that your account has been compromised, and that you need to reset your password.
You may be redirected to a scam site, where you must enter your current password in order to ‘reset’ your online banking. Criminals can then harvest this information, using it to gain access to your accounts and steal from you.
If you ever have an unexpected communication from your bank, the safest thing to do is contact them directly and see if they really require you to take action.
Never use the customer service number provided in the suspicious email. Always go to your bank’s official site, so that you know you’re speaking to a real representative and not another scammer.
Viruses and Malware
Viruses and malware are a constant threat online, and by employing some of the sneaky methods above like hiding behind a well-known brand name, cyber criminals can take more than just your data.
Computer viruses are malicious programmes that can ‘infect’ a device, compromising it and leaving you open to various types of cyber attack. Once a single device in your network is infected, the virus could spread to other devices, leaving every member of your household vulnerable.
Viruses exist which can:
- Log keystrokes and monitor your activity online;
- Mine sensitive data such as passwords and bank details;
- Destroy your device from the inside out, deleting files and software;
- And, one of the most frightening, gain access to your webcam so that you can be spied on from your device.
Malware is a catch-all term for malicious software which can be downloaded onto a device. This includes computer viruses, which are often spread via email and scam websites. However, malware can also be sent as an email attachment, posing as a legitimate piece of software (sometimes even an antivirus programme). Once you take the action to download it to your device, it may attack your system right away, or work in the background collecting data without your knowledge.
Always treat attachments with a healthy amount of scepticism, particularly if they request that you download them from an email address you don’t recognise. Invest in some reputable antivirus software ahead of time, and never trust an email which claims your device has already been compromised and offers you a free antivirus download. This is one of the most common, and most successful phishing scams that you’ll encounter online.
Five tips to protect yourself and your loved ones from phishing attacks this Christmas
- Always check emails about unpaid invoices or postage fees being due against your recent purchases before paying them.
- If you receive unusual communications from your bank, online retailers, or delivery services over Christmas, always check the email address to see if it has really come from their company. If the domain name doesn’t match the organisation, this can be a dead giveaway!
- Don’t click on suspicious-looking links in emails, and never use the contact details provided in a suspicious email to contact the company it claims to be from. Find any contact details from their official site.
- Be very wary of attachments, particularly if you’re given instructions to download them. This is one of the main ways that viruses and malware can gain access to your device — and it’s very hard to undo!
- If you are targeted by a phishing email, always take the time to report the address to your email provider. This is normally listed as an option along with forwarding the email or flagging it as spam. Providers are normally very quick to deactivate or delete the phishing account completely, which will prevent you or anybody else from being targeted by the same account.
At this time of year, not only is it more important than ever to be safe online, but we need to look out for one another as well. Phishing scams are most effective when they target vulnerable users like children or the elderly, who tend to be more trusting online.
Start a conversation with your family about staying safe on the web, and how to avoid phishing scams which could leave them out of pocket this Christmas. At Forensic Control, we are passionate about spreading awareness of cyber criminality and keeping our users safe.
Feel free to visit our About Us page if you’d like to learn more about our cyber security expertise, and why we do what we do. We also publish handy guides to help you avoid common online security threats.
|
<urn:uuid:9f02237f-64e0-4ef4-94d9-bca0df9b6f00>
|
CC-MAIN-2022-40
|
https://forensiccontrol.com/articles/four-phishing-scams-to-watch-out-for-this-christmas/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00625.warc.gz
|
en
| 0.94618 | 2,570 | 2.671875 | 3 |
The agency offered best practices for remote work using wireless technologies.
The COVID-19 pandemic changed what work looks like, and for some, telework remains an essential part of daily business. While most teleworkers connect via secure home networks, those that opt for public networks like those in hotels or coffee shops are putting their data at risk, according to the National Security Agency.
The NSA on Thursday released guidance for National Security System, Defense Department, and defense industrial base users describing how to identify vulnerable connections and protect common wireless technologies when working on public networks. US-CERT on Friday shared the guidance as well.
The first best practice, according to NSA, is to simply avoid connecting to public Wi-Fi at all.
Instead, it’s best to connect using personal or corporately-owned hotspots—just not open Wi-Fi hotspots. Hotspots should feature strong authentication and encryption, too, according to the guidance.
But when it can’t be avoided, work on a public Wi-Fi network should be conducted over a corporate-provided virtual private network, or VPN. That way, traffic can be encrypted, and data traversing public Wi-Fi will be less vulnerable to theft. Users should also stick to Hypertext Transfer Protocol Secure—https://—websites whenever possible. For laptops, users should also turn off the device file and printer sharing features on public networks. If possible, laptop users should use virtual machines, according to NSA.
It’s also best to avoid entering sensitive passwords, conducting sensitive conversations, or accessing personal data like bank and medical information. Online shopping and other financial transactions should be avoided, too.
Leaving devices unattended in public settings is a no-no as well. And when naming a device, users should avoid putting their own name in the title, according to the guidance. Instead, devices should be updated with the latest patches and secured through multi-factor authentication whenever possible.
NSA also detailed risks posed by Bluetooth and near field communication, or NFC, technologies. According to the guidance, malicious actors can find active Bluetooth signals and potentially gain access to information about devices it finds in its scans. That information can then be used to compromise a device. So it’s best to disable Bluetooth and make sure it’s not discoverable in public settings due to this and other cyber risks, according to the guidance, and users should never accept Bluetooth pairing attempts they didn’t initiate.
And while NFC technology facilitates only short-range device-to-device data transfers, like the kind that allow for contactless payment, NSA said it’s best to disable the function when it’s not in use just in case. Users should also make sure not to bring a device near other unknown electronic devices because it might trigger automatic communication via NFC. Users should also never use NFC to communicate passwords or sensitive data, according to the guidance.
|
<urn:uuid:8599726e-6fd6-4f90-8ee2-632b4143a8fa>
|
CC-MAIN-2022-40
|
https://www.nextgov.com/cybersecurity/2021/07/nsa-national-security-employees-avoid-working-public-wi-fi/184191/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00625.warc.gz
|
en
| 0.910022 | 605 | 2.578125 | 3 |
HSRP Configuration Example on Cisco Routers
In this section we will do an HSRP Cisco Configuration to understand the issue better. To do this we will use the below HSRP topology. At the end of this article, you will find the GNS3 configuration lab of this lesson.
Before the HSRP (Hot Standby Router Protocol) configuration, we must prepare our topology. We will change the router names and assign the IP addresses of the router interfaces.
For the left side of the topology, we will use the 10.10.10.0 network, and for the right side, we will use the 10.10.20.0 network. All the interfaces connected to the layer 2 switch will be assigned IP addresses matching their connected segment. For example, the fa0/0 interface of the Site1 router will be assigned the IP address 10.10.10.1, and the fa0/0 interfaces of GW1 and GW2 will be 10.10.10.2 and 10.10.10.3, respectively.
After the interface configuration, we will configure a static route on each of Site1 and Site2. In these static routes we will use two virtual IP addresses that we will explain in this article. These virtual addresses will be 10.10.10.10 and 10.10.20.20.
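The exact route statements are not shown in the text, but a plausible sketch of this step looks like the following (the destination networks and masks are assumed from the topology; only the virtual IPs 10.10.10.10 and 10.10.20.20 are given above):

```
! Site1: reach the 10.10.20.0 side through the left-hand virtual IP
Site1(config)# ip route 10.10.20.0 255.255.255.0 10.10.10.10

! Site2: reach the 10.10.10.0 side through the right-hand virtual IP
Site2(config)# ip route 10.10.10.0 255.255.255.0 10.10.20.20
```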
Now our configuration is ready for HSRP configuration. Let’s start with one side (left), on GW1 and GW2, and after that we will configure a second HSRP group for the other side (right).
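A minimal sketch of the left-side HSRP configuration follows. The group number, priority, and preempt values are our assumptions; the text only specifies the virtual IP 10.10.10.10 and that GW1 should end up active. The right side is configured the same way on the interfaces facing the 10.10.20.0 network, using a second group and virtual IP 10.10.20.20.

```
! GW1 (left side): higher priority plus preempt so it becomes and stays the active router
GW1(config)# interface fastEthernet0/0
GW1(config-if)# ip address 10.10.10.2 255.255.255.0
GW1(config-if)# standby 1 ip 10.10.10.10
GW1(config-if)# standby 1 priority 110
GW1(config-if)# standby 1 preempt

! GW2 (left side): default priority 100, so it stays in standby
GW2(config)# interface fastEthernet0/0
GW2(config-if)# ip address 10.10.10.3 255.255.255.0
GW2(config-if)# standby 1 ip 10.10.10.10
```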
You do not need to do this HSRP configuration for both sides, but in this lab we do it for both. After this you can check the configuration with the “show standby” command on GW1 and GW2. For both redundancy groups, GW1 is the active router and GW2 is the standby.
|
<urn:uuid:2ba9c6cb-8cad-49a4-8ca6-247c7446fd8b>
|
CC-MAIN-2022-40
|
https://ipcisco.com/redundancy-protocols-part-3-hsrp-configuration/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00625.warc.gz
|
en
| 0.832181 | 416 | 3.03125 | 3 |
Ransomware is one of the most prominent malware threats facing online businesses that run outdated software and too few security tools. Over the past few years, the number of ransomware attacks has increased by 148%. As per the Business Insider report, the largest ransomware payout was made by an insurance company in 2021.
These attackers threaten victim companies with leaking exfiltrated data if their demands are not fulfilled. They not only cause financial damage to the company, but also ruin its reputation and brand value.
Sophisticated cyber attackers know many ways to enter your system. Once inside, they infect, access, and encrypt computer files and systems, and then demand a large ransom.
Ransomware attacks can lead to financial loss, data loss, and reputational damage. Here, we will discuss ransomware in detail and tips to help protect backup systems from such attacks. Further, we have listed a few tools that can work in your favor and protect your business from ransomware attacks.
What is Ransomware?
Ransomware is a form of malware that blocks authorized users from accessing their computer files, systems, or networks and demands that they pay a ransom to unlock and decrypt the data. The attackers take advantage of software vulnerabilities to infect, access, lock, and encrypt the computer or device entirely, making it impossible for organizations to access any of their files or applications.
Ransomware is one of the most prominent types of malware that keeps all of your companies’ files and data hostage unless you fulfill their demand.
It can infect any computer or mobile device connected to the Internet and majorly targets the one running outdated software. Hence, make sure your organization timely updates its systems and apps to protect from such attacks in the future.
The reason this type of ransomware is so dangerous is that once the cybercriminals get hold of your files, there is no way for security software or a system reset to get them back for you. If a ransom demand is not met within the cybercriminal’s timeframe, your system or encrypted data remains unavailable. Your data can also be deleted by the ransomware, with its decryption keys being erased.
Cybercriminals encrypt files on your system via email attachments, add extensions to your attacked data, and hold it hostage until you pay the requested ransom. Another technique that most attackers practice is to send notifications from malicious websites, updating users that their device is infected and must click on the download link to activate the tool and remove the virus.
The cybercriminal behind the attack will contact you with their demands, promising to unlock your computer or decrypt your files after you have paid a ransom (usually in bitcoin).
Let us discuss a few tips that will help protect your backups from ransomware attacks.
Tips to Protect Backups from Ransomware
Do you know there are sophisticated ransomware packages that can upload onto shared drives via syncing and travel across a network? These network-connected infections can also pass onto your backup systems and put the business into serious trouble. To avoid such mishaps and protect your backups from ransomware, we have listed a few tips that can work wonders for your business.
1. Secure Your Windows System
An increase in remote work since the pandemic has increased ransomware attacks by 148%, as per a report. It has also been found that most of these attacks target Windows hosts and spread quickly to other hosts once a single host is infected.
In these cases, most attackers encrypt files and devices once the ransomware has spread to enough hosts in your computing environment, shutting down multiple systems altogether. Hence, the best tip for organizations is to run their backup server on a platform other than Windows.
Unaware of this risk, many companies primarily use Windows to run their backups. As an alternative, you can switch to Linux media servers. If you want to run the main backup software on Windows, try keeping a copy of your backup on Linux as well.
However, if your backup is accessible only via Linux media servers, chances are high that the ransomware attackers attempting to infect Windows-based servers will not be able to access your backup files.
Also, try to store the main backup behind a Linux-based media server to avoid any mishap, and harden your Windows-based backup servers by turning off as many of the services ransomware uses to attack servers as you can. Focus more on tightening your security and less on convenience.
2. Remove file-system access to backups
Avoid placing your backup data in a standard file-system directory, for example, E:\backups or C:\ProgramFiles. Attackers often target directories with these names to infect and encrypt files. Always look for a different folder or place to store backups on disk, ideally in a way that they are not visible as ordinary files and are therefore less exposed to attack.
If you are using a backup server, look for ways to write backups to your target deduplication array without using server message block (SMB) or network file system (NFS). Otherwise, if an attacker infects the server, it can encrypt all the stored backups because they are easily accessible via a directory.
3. Store Backups Out of Data Center
No matter which location you choose to store your backup data, make sure that its copy is stored in a different location. For example, in case ransomware tries to attack your data center, your copies stored in the cloud must remain safe. Using firewall rules or changing operating systems and storage protocols, you can make this happen.
Ransomware attackers indeed know many techniques for infecting victims’ files, but they still don’t have a good way to attack backups stored in object-based storage. Further, a few backup services can write backups to such storage that are not accessible except via their own user interface. As a result, neither the administrator nor the ransomware can directly see the stored backups.
Use cloud platforms to store the backup copy, protect it with firewall rules, or write it to a different type of storage for security purposes.
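As one hedged illustration of an off-site copy that ransomware cannot alter, the sketch below uses boto3 to write a backup file to an S3-compatible bucket with Object Lock in compliance mode. The bucket, key, and file names are placeholders, and the bucket must already have Object Lock enabled.

```python
import datetime
import boto3  # third-party AWS SDK; target bucket must have Object Lock enabled

s3 = boto3.client("s3")
retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=30)

with open("backup-weekly.tar.gz", "rb") as f:            # placeholder backup file
    s3.put_object(
        Bucket="example-offsite-backups",                 # placeholder bucket name
        Key="weekly/backup-weekly.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",                       # retention cannot be shortened, even by admins
        ObjectLockRetainUntilDate=retain_until,
    )
```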
4. Follow the 3-2-1-1-0 Golden Rule for Backups
The 3-2-1-1-0 Golden Rule is highly effective and provides the best protection from ransomware. As per this rule, you must meet five important conditions, including:
- Enterprises must create three Data copies, including the production copy.
- At least two different storage media, such as tape and cloud storage must be used.
- Out of the three, one copy must be stored off-site, in case the supporting machines are physically damaged.
- Out of the three, another copy must be stored offline or in the cloud (Immutable, i.e., it cannot be modified).
- The backups must have zero errors (a small verification sketch follows this list).
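The “zero errors” condition is usually enforced by verifying backups after they are written. Below is a minimal, assumption-laden sketch: record a SHA-256 checksum when the backup is created and confirm it again later, alerting if the copy has changed or gone missing. Paths here are placeholders.

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Return True only if the backup exists and still matches its recorded checksum."""
    return os.path.exists(path) and sha256_of(path) == expected

# Usage sketch: store the checksum when the backup is written...
# checksum = sha256_of("/srv/backups/weekly.tar.gz")
# ...and re-check it on a schedule:
# if not verify("/srv/backups/weekly.tar.gz", checksum):
#     raise RuntimeError("backup failed verification")
```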
5. Automate response
Another way to prevent contamination from spreading is to detect and respond to the ransomware attack immediately. In most cases, ransomware dwells for 90 days or more before activating and making a ransom demand. If your organization has a strong security posture, there is a good chance you can detect and stop ransomware in time.
Various built-in monitoring systems can detect and alert your team to possible ransomware attacks. Also, integrate SIEM and SOAR platforms that help automate the response process.
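A toy example of the kind of signal such monitoring can raise is sketched below (standard-library Python only). Real environments would forward this into the SIEM/SOAR tooling mentioned above rather than print it, and the thresholds, extensions, and path here are illustrative guesses.

```python
import os
import time

SUSPICIOUS_EXTENSIONS = {".locky", ".encrypted", ".crypt", ".locked"}
BURST_THRESHOLD = 500      # files modified within the window before we alert
WINDOW_SECONDS = 300       # look at the last five minutes

def scan(root: str) -> None:
    now = time.time()
    recently_changed, suspicious = 0, []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if now - os.path.getmtime(path) < WINDOW_SECONDS:
                    recently_changed += 1
            except OSError:
                continue  # file vanished mid-scan; ignore it
            if os.path.splitext(name)[1].lower() in SUSPICIOUS_EXTENSIONS:
                suspicious.append(path)
    if recently_changed > BURST_THRESHOLD or suspicious:
        print(f"ALERT: {recently_changed} recent changes and "
              f"{len(suspicious)} suspicious file names under {root}")

scan("/srv/backups")  # placeholder path
```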
Tools to Protect Backups from Ransomware
Have a look at some of the trusted tools that will block and protect your backups from ransomware before they get inside your backup storage.
1. CrowdStrike Falcon Prevent
CrowdStrike Falcon Prevent is one of the best security tools available in the market to protect devices from ransomware infection. It is a fully operational tool that offers quick and easy deployment. Further, it protects your devices and backup files without impacting resources or productivity.
Falcon Prevent is an endpoint detection and response system that monitors each endpoint and detects and blocks ransomware as it hits the device. Its Automated IOA remediation feature further helps clean and eliminate artifacts left behind from blocked malicious activity.
It protects Windows, Windows Server, macOS, and Linux platforms and supports behavior-based indicators of attacks that go out of the way to prevent sophisticated malware-free attacks.
2. ManageEngine DataSecurity Plus
ManageEngine DataSecurity Plus is another popular tool businesses can invest in to protect files from tampering. Using this tool, organizations can detect potential ransomware intrusions and spot any unauthorized change implemented by the attackers. It is compatible with the Windows platform, one of the most targeted platforms by ransomware attackers.
It also supports automated responses and alerting features that inform users about ransomware actions on time and prevent them from spreading across the network. Further, ManageEngine DataSecurity Plus has a built-in threat library that helps detect attacks by unknown ransomware variants, including Locky, Petya, etc.
Another feature that makes it a top choice is its ransomware detection and response capabilities that help businesses discover threats before they launch and cause damage. It also allows businesses to track and alert on noticing critical changes made to sensitive files.
3. Acronis Cyber Backup
Acronis Cyber Backup helps businesses manage, protect, and create a backup of multiple endpoints. It is a package of modules that safeguards your information and devices from threats and ransomware. It uses advanced MI-based protection against malware to protect every data and evolving threat.
Further, it also supports various anti-ransomware technologies that help protect backup systems and eliminates the gap in the defenses. Organizations can easily scan all infections using its advanced features before getting added to the backup.
The backup restoration process also involves the use of malware scans. It is one of the best cyber protection solutions that offer a high level of efficiency and unmatched protection. Users can easily manage the tool and protect backup systems from ransomware via a single console.
4. NinjaOne Backup
The NinjaOne Backup tool is best suited for managed service providers (MSPs). It is a fast and flexible security tool that protects all your critical business information and data stored on end-user devices from known and unknown threats. It has a cloud storage space protected by 256-bit AES encryption that ensures all the data is safe from malicious actors.
Further, organizations can deploy data protection to the workflows and all Windows and Mac endpoints using this powerful tool. In case of a successful ransomware attack, the Ninja Data Protection tool helps faster restore all files stored locally or in the cloud to start operations.
Ransomware can get into your systems via mail attachments in PDF format, images, ZIP files, or RAR files. Attackers can also make their way onto your device by manipulating an employee or tricking them with fake information. Once the attackers make their way into your systems, they access and encrypt all your files.
In some cases, they travel across a network and spread onto shared drives to cause severe damage. Ransomware attackers are a main cause of concern for many online businesses, as their attacks can lead to financial loss, data loss, and reputational damage.
Also, they infect your backup systems if not protected beforehand. Sophisticated ransomware attackers know different techniques to enter your space and infect files and systems. Hence, companies need to invest in the best cyber protection solutions that offer security to data, devices, and backup.
Make sure to secure your Windows system and store backup files on Linux media servers for better protection. Further, avoid storing backup data in a standard file-system directory. Another way to protect your backup from ransomware is to store backups out of the data center in the cloud.
Lastly, follow the 3-2-1-1-0 Golden Rule and maintain a strong security posture to detect and prevent ransomware in real-time.
We have also listed a few security tools that will protect your backups from ransomware and block threats before they launch or cause any damage. Compare each above-listed security tool before selecting one for your backup and data protection.
|
<urn:uuid:02a09002-9214-4005-8dcd-bc7900101611>
|
CC-MAIN-2022-40
|
https://www.netadmintools.com/protect-backups-from-ransomware/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00625.warc.gz
|
en
| 0.926906 | 2,447 | 2.546875 | 3 |